
Educational Research

Submitted By: Warda Ahmed


Submitted To: Sir Qayyum Nawaz

Roll No: CF503549

Program: PGD (ELM)

Semester: Spring 2021

Course Code: 1649


Assignment 2
Department of Educational Planning, Policy Studies and Leadership

ALLAMA IQBAL OPEN UNIVERSITY, ISLAMABAD


Assignment no. 2

Q.1 Define and discuss validity and reliability. To what extent are these helpful in developing and finalizing the research tool in a research study?

When you do quantitative research, you have to consider the reliability and validity of
your research methods and instruments of measurement. Reliability tells you how
consistently a method measures something. When you apply the same method to the
same sample under the same conditions, you should get the same results. If not, the
method of measurement may be unreliable. There are four main types of reliability. Each
can be estimated by comparing different sets of results produced by the same method.

Test-retest reliability

Test-retest reliability measures the consistency of results when you repeat the same test on
the same sample at a different point in time. You use it when you are measuring something
that you expect to stay constant in your sample.

Why it’s important

Many factors can influence your results at different points in time: for example, respondents
might experience different moods, or external conditions might affect their ability to
respond accurately. Test-retest reliability can be used to assess how well a method resists
these factors over time. The smaller the difference between the two sets of results, the
higher the test-retest reliability.

How to measure it

To measure test-retest reliability, you conduct the same test on the same group of people at
two different points in time. Then you calculate the correlation between the two sets of
results.
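
As a small illustration, the calculation might look like the following Python sketch, assuming two hypothetical lists of scores from the same ten respondents (all values are invented):

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same ten respondents on two occasions
time_1 = [12, 15, 11, 18, 14, 16, 13, 17, 15, 12]
time_2 = [13, 15, 10, 18, 15, 17, 12, 16, 14, 13]

# Pearson correlation between the two administrations:
# values close to 1 indicate high test-retest reliability
r, p_value = pearsonr(time_1, time_2)
print(f"Test-retest reliability r = {r:.2f}")
```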

Improving test-retest reliability


• When designing tests or questionnaires, try to formulate questions, statements and
tasks in a way that won’t be influenced by the mood or concentration of
participants.
• When planning your methods of data collection, try to minimize the influence of
external factors, and make sure all samples are tested under the same conditions.
• Remember that changes can be expected to occur in the participants over time, and
take these into account.

Interrater reliability

Interrater reliability (also called interobserver reliability) measures the degree of agreement
between different people observing or assessing the same thing. You use it when data is
collected by researchers assigning ratings, scores or categories to one or more variables.

Why it’s important

People are subjective, so different observers' perceptions of situations and phenomena naturally differ. Reliable research aims to minimize subjectivity as much as possible so that a
different researcher could replicate the same results. When designing the scale and criteria
for data collection, it’s important to make sure that different people will rate the same
variable consistently with minimal bias. This is especially important when there are multiple
researchers involved in data collection or analysis.

How to measure it

To measure interrater reliability, different researchers conduct the same measurement or observation on the same sample. Then you calculate the correlation between their different
sets of results. If all the researchers give similar ratings, the test has high interrater
reliability.
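
A minimal sketch of this calculation in Python, assuming two hypothetical raters scored the same ten observations on a numeric scale (for purely categorical ratings, an agreement index such as Cohen's kappa would usually be preferred):

```python
from scipy.stats import pearsonr

# Hypothetical ratings given by two researchers to the same ten observations
rater_1 = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
rater_2 = [3, 4, 3, 5, 4, 2, 5, 2, 4, 4]

# Correlation between the two raters' scores: similar ratings across
# observations give a value close to 1 (high interrater reliability)
r, _ = pearsonr(rater_1, rater_2)
print(f"Interrater reliability r = {r:.2f}")
```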

Improving interrater reliability

• Clearly define your variables and the methods that will be used to measure them.
• Develop detailed, objective criteria for how the variables will be rated, counted or
categorized.
• If multiple researchers are involved, ensure that they all have exactly the same
information and training.

Parallel forms reliability

Parallel forms reliability measures the correlation between two equivalent versions of a test.
You use it when you have two different assessment tools or sets of questions designed
to measure the same thing.

Why it’s important

If you want to use multiple different versions of a test (for example, to avoid respondents
repeating the same answers from memory), you first need to make sure that all the sets of
questions or measurements give reliable results.

How to measure it

The most common way to measure parallel forms reliability is to produce a large set of
questions to evaluate the same thing, then divide these randomly into two question sets.
The same group of respondents answers both sets, and you calculate the correlation
between the results. High correlation between the two indicates high parallel forms
reliability.
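
A rough Python sketch of that procedure, using invented 1-5 item responses purely as placeholders; with real data, a high correlation between the two forms indicates high parallel forms reliability:

```python
import random
from scipy.stats import pearsonr

random.seed(1)

# Hypothetical data: 10 respondents answer a pool of 20 items intended to
# measure the same thing (each response is built around the person's level)
levels = [random.randint(1, 5) for _ in range(10)]
responses = [[max(1, min(5, level + random.choice([-1, 0, 0, 1])))
              for _ in range(20)] for level in levels]

# Randomly divide the 20 items into two parallel forms of 10 items each
items = list(range(20))
random.shuffle(items)
form_a, form_b = items[:10], items[10:]

# Total score per respondent on each form, then the correlation between forms
score_a = [sum(person[i] for i in form_a) for person in responses]
score_b = [sum(person[i] for i in form_b) for person in responses]
r, _ = pearsonr(score_a, score_b)
print(f"Parallel forms reliability r = {r:.2f}")
```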

Improving parallel forms reliability

• Ensure that all questions or test items are based on the same theory and formulated
to measure the same thing.

Internal consistency

Internal consistency assesses the correlation between multiple items in a test that are
intended to measure the same construct.

You can calculate internal consistency without repeating the test or involving other
researchers, so it’s a good way of assessing reliability when you only have one data set.

Why it’s important


When you devise a set of questions or ratings that will be combined into an overall score,
you have to make sure that all of the items really do reflect the same thing. If responses to
different items contradict one another, the test might be unreliable.

How to measure it

Two common methods are used to measure internal consistency.

Average inter-item correlation: For a set of measures designed to assess the same
construct, you calculate the correlation between the results of all possible pairs of items and
then calculate the average.

Split-half reliability: You randomly split a set of measures into two sets. After testing the
entire set on the respondents, you calculate the correlation between the two sets of
responses.
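
Both methods can be computed directly from an item-by-respondent score matrix. The sketch below uses invented responses; the split-half value is reported with the common Spearman-Brown correction for test length, which is an addition beyond the description above:

```python
import numpy as np

# Hypothetical item responses: rows = respondents, columns = items
data = np.array([
    [4, 5, 4, 5, 4, 5],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 5, 4],
])

# Average inter-item correlation: mean of the off-diagonal correlations
corr = np.corrcoef(data, rowvar=False)
k = corr.shape[0]
avg_inter_item = (corr.sum() - k) / (k * (k - 1))
print(f"Average inter-item correlation = {avg_inter_item:.2f}")

# Split-half reliability: correlate total scores on two random halves of the
# items, then adjust for test length with the Spearman-Brown formula
rng = np.random.default_rng(0)
items = rng.permutation(k)
half_1 = data[:, items[: k // 2]].sum(axis=1)
half_2 = data[:, items[k // 2:]].sum(axis=1)
r_half = np.corrcoef(half_1, half_2)[0, 1]
split_half = 2 * r_half / (1 + r_half)
print(f"Split-half reliability = {split_half:.2f}")
```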

Improving internal consistency

• Take care when devising questions or measures: those intended to reflect the same
concept should be based on the same theory and carefully formulated.

Which type of reliability applies to my research?

It’s important to consider reliability when planning your research design, collecting and
analyzing your data, and writing up your research. The type of reliability you should
calculate depends on the type of research and your methodology.

Validity

The concept of validity was formulated by Kelly (1927, p. 14), who stated that a test is valid if it measures what it claims to measure. For example, a test of intelligence should measure intelligence and not something else (such as memory). A distinction can be made between internal and external validity. These types of validity are relevant to evaluating the validity of a research study or procedure.

What is internal and external validity in research?


Internal validity refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor. In other words, there is a causal relationship between the independent and dependent variable. Internal validity can be improved by controlling extraneous variables, using standardized instructions, counterbalancing, and eliminating demand characteristics and investigator effects.

External validity refers to the extent to which the results of a study can be generalized to
other settings (ecological validity), other people (population validity) and over time
(historical validity). External validity can be improved by setting experiments in a more
natural setting and using random sampling to select participants.

What is face validity in research?


Face validity is simply whether the test appears (at face value) to measure what it claims to.
This is the least sophisticated measure of validity. Tests wherein the purpose is clear, even
to naïve respondents, are said to have high face validity. Accordingly, tests wherein the
purpose is unclear have low face validity (Nevo, 1985).

A direct measurement of face validity is obtained by asking people to rate the validity of a test as it appears to them. The rater could use a Likert scale to assess face validity, for example (a minimal scoring sketch follows this list):

1. the test is extremely suitable for a given purpose;
2. the test is very suitable for that purpose;
3. the test is adequate;
4. the test is inadequate;
5. the test is irrelevant and therefore unsuitable.
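
As a tiny illustration, hypothetical ratings collected on the scale above can be summarized to check whether raters broadly agree that the test looks suitable (all numbers are invented):

```python
import statistics

# Hypothetical face-validity ratings from eight raters on the 1-5 scale above
# (1 = extremely suitable ... 5 = irrelevant and therefore unsuitable)
ratings = [1, 2, 1, 2, 2, 1, 3, 2]

# A low mean with a small spread suggests the raters agree the test appears
# suitable for its stated purpose
print(f"Mean rating:    {statistics.mean(ratings):.2f}")
print(f"Spread (stdev): {statistics.stdev(ratings):.2f}")
```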

It is important to select suitable people to rate a test (e.g. questionnaire, interview, IQ test
etc.). For example, individuals who actually take the test would be well placed to judge its
face validity. Also, people who work with the test could offer their opinion (e.g. employers, university administrators). Finally, the researcher could use members of the
general public with an interest in the test (e.g. parents of testees, politicians, teachers etc.).

The face validity of a test can be considered a robust construct only if a reasonable level of
agreement exists among raters.
It should be noted that the term face validity should be avoided when the rating is done by experts, as content validity is the more appropriate term. Having face validity does not mean that a test really measures what the researcher intends to measure, but only that, in the judgment of raters, it appears to do so. Consequently it is a crude and basic measure of validity.

A test item such as 'I have recently thought of killing myself' has obvious face validity as an item measuring suicidal cognitions, and may be useful when measuring symptoms of depression. However, one implication of items with clear face validity is that they are more vulnerable to social desirability bias. Individuals may manipulate their responses to deny or hide problems, or exaggerate behaviors to present a positive image of themselves.

It is possible for a test item to lack face validity but still have general validity and measure
what it claims to measure. This is good because it reduces demand characteristics and
makes it harder for respondents to manipulate their answers. For example, the test item 'I
believe in the second coming of Christ' would lack face validity as a measure of depression
(as the purpose of the item is unclear).

This item appeared on the first version of the Minnesota Multiphasic Personality Inventory (MMPI) and loaded on the depression scale. Because most of the original normative sample of the MMPI were good Christians, only a depressed Christian would be expected to think that Christ is not coming back. Thus, for this particular religious sample the item does have general validity, but not face validity.

What is construct validity in research?


Construct validity was introduced by Cronbach and Meehl (1955). This type of validity refers to
the extent to which a test captures a specific theoretical construct or trait, and it overlaps
with some of the other aspects of validity. Construct validity does not concern the simple,
factual question of whether a test measures an attribute.

Instead it is about the complex question of whether test score interpretations are consistent
with a nomological network involving theoretical and observational terms (Cronbach &
Meehl, 1955).

To test for construct validity it must be demonstrated that the phenomenon being
measured actually exists. So, the construct validity of a test for intelligence, for example, is
dependent on a model or theory of intelligence. Construct validity entails demonstrating the
power of such a construct to explain a network of research findings and to predict further
relationships.

The more evidence a researcher can demonstrate for a test's construct validity the better.
However, there is no single method of determining the construct validity of a test. Instead,
different methods and approaches are combined to present the overall construct validity of
a test. For example, factor analysis and correlational methods can be used.

What is concurrent validity in research?


This is the degree to which a test corresponds to an external criterion that is known
concurrently (i.e. occurring at the same time). If the new test is validated by a comparison
with a currently existing criterion, we have concurrent validity. Very often, a new IQ or
personality test might be compared with an older but similar test known to have good
validity already.
What is predictive validity in research?

This is the degree to which a test accurately predicts a criterion that will occur in the future. For example, a prediction may be made on the basis of a new intelligence test that high scorers at age 12 will be more likely to obtain university degrees several years later. If the prediction is borne out, then the test has predictive validity.
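
A minimal sketch of how such a prediction could be checked, using invented test scores and degree outcomes; correlating a numeric predictor with a binary criterion gives a point-biserial correlation:

```python
from scipy.stats import pearsonr

# Hypothetical data: intelligence test scores at age 12 and whether each
# person later obtained a university degree (1 = yes, 0 = no)
test_scores = [95, 110, 102, 125, 88, 130, 115, 99, 120, 105]
got_degree  = [0, 1, 0, 1, 0, 1, 1, 0, 1, 1]

# A sizeable positive correlation between score and later outcome
# supports the predictive validity of the test
r, _ = pearsonr(test_scores, got_degree)
print(f"Predictive validity r = {r:.2f}")
```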

Q.2 Explain the importance of population and sample of the study in research. Define the process of sampling for the quantitative research approach.

Every ten years, the U.S. government conducts a census—a count of every person living in the country—as required by the Constitution. It's a massive undertaking. The Census Bureau sends a letter or a worker to every U.S. household and tries to gather data that will allow each person to be counted. After the data are gathered, they have to be processed, tabulated and reported. The entire operation takes years of planning and billions of dollars, which raises the question: Is there a better way?

As it turns out, there is.

Instead of contacting every person in the population, researchers can answer most
questions by sampling people. In fact, sampling is what the Census Bureau does in order to
gather detailed information about the population such as the average household income,
the level of education people have, and the kind of work people do for a living. But what,
exactly, is sampling, and how does it work? At its core, a research sample is like any other
sample: It’s a small piece or part of something that represents a larger whole.

So, just like the sample of glazed salmon you eat at Costco or the double chocolate brownie
ice cream you taste at the ice cream shop, behavioral scientists often gather data from a
small group (a sample) as a way to understand a larger whole (a population). Even when the
population being studied is as large as the U.S.—about 330 million people—researchers
often need to sample just a few thousand people in order to understand everyone.

Now, you may be asking yourself how that works. How can researchers accurately
understand hundreds of millions of people by gathering data from just a few thousand of
them? Your answer comes from Valery Ivanovich Glivenko and Francesco Paolo Cantelli.

Glivenko and Cantelli were mathematicians who studied probability. At some point during
the early 1900s, they discovered that several observations randomly drawn from a
population will naturally take on the shape of the population distribution. What this means
in plain English is that, as long as researchers randomly sample from a population and
obtain a sufficiently sized sample, then the sample will contain characteristics that roughly
mirror those of the population.
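
A quick simulation sketch illustrates the idea, using a made-up, normally distributed "income" population scaled down to 330,000 people: the statistics of a random sample of a few thousand closely track those of the whole population:

```python
import random
import statistics

# Hypothetical population of 330,000 simulated household incomes
random.seed(42)
population = [random.gauss(60_000, 15_000) for _ in range(330_000)]

# A random sample of just a few thousand people
sample = random.sample(population, 3_000)

# The sample statistics roughly mirror the population statistics
print(f"Population mean: {statistics.mean(population):,.0f}")
print(f"Sample mean:     {statistics.mean(sample):,.0f}")
```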

Defining Random vs. Non-Random Sampling

Random sampling occurs when a researcher ensures every member of the population being
studied has an equal chance of being selected to participate in the study. Importantly, ‘the
population being studied’ is not necessarily all the inhabitants of a country or a region.
Instead, a population can refer to people who share a common quality or characteristic. So,
everyone who has purchased a Ford in the last five years can be a population and so can
registered voters within a state or college students at a city university. A population is the
group that researchers want to understand.

In order to understand a population using random sampling, researchers begin by identifying a sampling frame—a list of all the people in the population the researchers want
to study. For example, a database of all landline and cell phone numbers in the U.S. is a
sampling frame. Once the researcher has a sampling frame, he or she can randomly select
people from the list to participate in the study.

However, as you might imagine, it is not always practical or even possible to gather a
sampling frame. There is not, for example, a master list of all the people who use the
internet, purchase coffee at Dunkin’, have grieved the death of a parent in the last year, or
consider themselves fans of the New York Yankees. Nevertheless, there are very good
reasons why researchers may want to study people in each of these groups. When it isn’t
possible or practical to gather a random sample, researchers often gather a non-random
sample. A non-random sample is one in which every member of the population being
studied does not have an equal chance of being selected into the study.

Because non-random samples do not select participants based on probability, it is often difficult to know how well the sample represents the population of interest. Despite this
limitation, a wide range of behavioral science studies conducted within academia, industry
and government rely on non-random samples. When researchers use non-random samples,
it is common to control for any known sources of sampling bias during data collection. By
controlling for possible sources of bias, researchers can maximize the usefulness and
generalizability of their data.

Why Is Sampling Important for Researchers?

Everyone who has ever worked on a research project knows that resources are limited;
time, money and people never come in an unlimited supply. For that reason, most research
projects aim to gather data from a sample of people, rather than from the entire population
(the census being one of the few exceptions). This is because sampling allows researchers
to:

Save Time

Contacting everyone in a population takes time. And, invariably, some people will not
respond to the first effort at contacting them, meaning researchers have to invest more
time for follow-up. Random sampling is much faster than surveying everyone in a
population, and obtaining a non-random sample is almost always faster than random
sampling. Thus, sampling saves researchers lots of time.
Save Money

The number of people a researcher contacts is directly related to the cost of a study.
Sampling saves money by allowing researchers to gather the same answers from a sample
that they would receive from the population. Non-random sampling is significantly cheaper
than random sampling, because it lowers the cost associated with finding people and
collecting data from them. Because all research is conducted on a budget, saving money is
important.

Collect Richer Data

Sometimes, the goal of research is to collect a little bit of data from a lot of people (e.g., an
opinion poll). At other times, the goal is to collect a lot of information from just a few people
(e.g., a user study or ethnographic interview). Either way, sampling allows researchers to ask
participants more questions and to gather richer data than does contacting everyone in a
population.

The Importance of Knowing Where to Sample


Efficient sampling has a number of benefits for researchers. But just as important as
knowing how to sample is knowing where to sample. Some research participants are better
suited for the purposes of a project than others. Finding participants that are fit for the
purpose of a project is crucial, because it allows researchers to gather high-quality data.

For example, consider an online research project. A team of researchers who decides to
conduct a study online has several different sources of participants to choose from. Some
sources provide a random sample, and many more provide a non-random sample. When
selecting a non-random sample, researchers have several options to consider. Some studies
are especially well-suited to an online panel that offers access to millions of different
participants worldwide. Other studies, meanwhile, are better suited to a crowdsourced site
that generally has fewer participants overall but more flexibility for fostering participant
engagement.
Methods of sampling in quantitative research

It would normally be impractical to study a whole population, for example when doing a
questionnaire survey. Sampling is a method that allows researchers to infer information
about a population based on results from a subset of the population, without having to
investigate every individual. Reducing the number of individuals in a study reduces the cost
and workload, and may make it easier to obtain high quality information, but this has to be
balanced against having a large enough sample size with enough power to detect a true
association.
If a sample is to be used, by whatever method it is chosen, it is important that the
individuals selected are representative of the whole population. This may involve specifically
targeting hard to reach groups. For example, if the electoral roll for a town was used to
identify participants, some people, such as the homeless, would not be registered and
therefore excluded from the study by default.

There are several different sampling techniques available, and they can be subdivided into
two groups: probability sampling and non-probability sampling. In probability (random)
sampling, you start with a complete sampling frame of all eligible individuals from which you
select your sample. In this way, all eligible individuals have a chance of being chosen for the
sample, and you will be more able to generalise the results from your study. Probability
sampling methods tend to be more time-consuming and expensive than non-probability
sampling. In non-probability (non-random) sampling, you do not start with a complete
sampling frame, so some individuals have no chance of being selected. Consequently, you
cannot estimate the effect of sampling error and there is a significant risk of ending up with
a non-representative sample which produces non-generalisable results. However, non-
probability sampling methods tend to be cheaper and more convenient, and they are useful
for exploratory research and hypothesis generation.

Probability Sampling Methods

1. Simple random sampling

In this case each individual is chosen entirely by chance and each member of the population
has an equal chance, or probability, of being selected. One way of obtaining a random
sample is to give each individual in a population a number, and then use a table of random
numbers to decide which individuals to include. For example, if you have a sampling frame
of 1000 individuals, labelled 0 to 999, use groups of three digits from the random number
table to pick your sample. So, if the first three numbers from the random number table
were 094, select the individual labelled “94”, and so on.
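
In practice the same selection is usually done with software rather than a printed table; a minimal Python sketch (the frame size and sample size here are arbitrary):

```python
import random

# Sampling frame of 1000 individuals, labelled 0 to 999
sampling_frame = list(range(1000))

# Draw a simple random sample of 50 individuals: every member of the
# frame has an equal probability of being selected
random.seed(7)
sample = random.sample(sampling_frame, 50)
print(sorted(sample)[:10])
```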

As with all probability sampling methods, simple random sampling allows the sampling error
to be calculated and reduces selection bias. A specific advantage is that it is the most
straightforward method of probability sampling. A disadvantage of simple random sampling
is that you may not select enough individuals with your characteristic of interest, especially
if that characteristic is uncommon. It may also be difficult to define a complete sampling
frame, and inconvenient to contact the selected individuals, especially if different forms of contact are
required (email, phone, post) and your sample units are scattered over a wide geographical
area.

2. Systematic sampling

Individuals are selected at regular intervals from the sampling frame. The intervals are
chosen to ensure an adequate sample size. If you need a sample size n from a population of
size x, you should select every x/nth individual for the sample. For example, if you wanted a
sample size of 100 from a population of 1000, select every 1000/100 = 10th member of the
sampling frame.
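
A short sketch of the same procedure in Python; starting from a randomly chosen point within the first interval is a common refinement not spelled out above:

```python
import random

population = list(range(1000))   # sampling frame of 1000 individuals
n = 100                          # desired sample size
interval = len(population) // n  # select every 10th individual

# A random starting point within the first interval avoids always
# beginning at the very top of the list
random.seed(3)
start = random.randrange(interval)
sample = population[start::interval]
print(len(sample), sample[:5])
```
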
Systematic sampling is often more convenient than simple random sampling, and it is easy
to administer. However, it may also lead to bias, for example if there are underlying
patterns in the order of the individuals in the sampling frame, such that the sampling
technique coincides with the periodicity of the underlying pattern. As a hypothetical
example, if a group of students were being sampled to gain their opinions on college
facilities, but the Student Record Department’s central list of all students was arranged such
that the sex of students alternated between male and female, choosing an even interval
(e.g. every 20th student) would result in a sample of all males or all females. Whilst in this
example the bias is obvious and should be easily corrected, this may not always be the case.

3. Stratified sampling
In this method, the population is first divided into subgroups (or strata) who all share a
similar characteristic. It is used when we might reasonably expect the measurement of
interest to vary between the different subgroups, and we want to ensure representation
from all the subgroups. For example, in a study of stroke outcomes, we may stratify the
population by sex, to ensure equal representation of men and women. The study sample is
then obtained by taking equal sample sizes from each stratum. In stratified sampling, it may
also be appropriate to choose non-equal sample sizes from each stratum. For example, in a
study of the health outcomes of nursing staff in a county, if there are three hospitals each
with different numbers of nursing staff (hospital A has 500 nurses, hospital B has 1000 and
hospital C has 2000), then it would be appropriate to choose the sample numbers from each
hospital proportionally (e.g. 10 from hospital A, 20 from hospital B and 40 from hospital C).
This ensures a more realistic and accurate estimation of the health outcomes of nurses
across the county, whereas simple random sampling would over-represent nurses from
hospitals A and B. The fact that the sample was stratified should be taken into account at
the analysis stage.
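
A brief sketch of proportional allocation for the hypothetical three-hospital example above (an overall sample size of 70 is assumed so that the 10/20/40 split falls out):

```python
import random

# Numbers of nurses in each hospital (the strata), as in the example above
strata_sizes = {"Hospital A": 500, "Hospital B": 1000, "Hospital C": 2000}
total = sum(strata_sizes.values())
overall_sample_size = 70  # assumed total sample size

random.seed(11)
sample = {}
for hospital, size in strata_sizes.items():
    # Proportional allocation: each stratum contributes in proportion to its
    # share of the population (10, 20 and 40 nurses here)
    n_stratum = round(overall_sample_size * size / total)
    nurses = [f"{hospital}-nurse-{i}" for i in range(size)]
    sample[hospital] = random.sample(nurses, n_stratum)

print({hospital: len(chosen) for hospital, chosen in sample.items()})
```
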
Stratified sampling improves the accuracy and representativeness of the results by reducing
sampling bias. However, it requires knowledge of the appropriate characteristics of the
sampling frame (the details of which are not always available), and it can be difficult to
decide which characteristic(s) to stratify by.

4. Clustered sampling

In a clustered sample, subgroups of the population are used as the sampling unit, rather
than individuals. The population is divided into subgroups, known as clusters, which are
randomly selected to be included in the study. Clusters are usually already defined, for
example individual GP practices or towns could be identified as clusters. In single-stage
cluster sampling, all members of the chosen clusters are then included in the study. In two-
stage cluster sampling, a selection of individuals from each cluster is then randomly selected
for inclusion. Clustering should be taken into account in the analysis. The General Household
survey, which is undertaken annually in England, is a good example of a (one-stage) cluster
sample. All members of the selected households (clusters) are included in the survey.
Cluster sampling can be more efficient than simple random sampling, especially where a
study takes place over a wide geographical region. For instance, it is easier to contact lots of
individuals in a few GP practices than a few individuals in many different GP practices.
Disadvantages include an increased risk of bias, if the chosen clusters are not representative
of the population, resulting in an increased sampling error.
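
A compact sketch contrasting single-stage and two-stage selection, using invented GP practices and patient labels:

```python
import random

# Hypothetical clusters: 20 GP practices, each with 50 registered patients
clusters = {f"practice_{p}": [f"patient_{p}_{i}" for i in range(50)]
            for p in range(20)}

random.seed(5)
# Stage 1: randomly select 4 practices (the clusters)
chosen_practices = random.sample(list(clusters), 4)

# Single-stage cluster sampling would include ALL patients in the chosen
# practices; two-stage sampling randomly selects patients within each one
two_stage_sample = []
for practice in chosen_practices:
    two_stage_sample += random.sample(clusters[practice], 10)  # stage 2

print(len(two_stage_sample), two_stage_sample[:3])
```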

Non-Probability Sampling Methods

1. Convenience sampling

Convenience sampling is perhaps the easiest method of sampling, because participants are
selected based on availability and willingness to take part. Useful results can be obtained,
but the results are prone to significant bias, because those who volunteer to take part may
be different from those who choose not to (volunteer bias), and the sample may not be
representative of other characteristics, such as age or sex. Note: volunteer bias is a risk of all
non-probability sampling methods.

2. Quota sampling

This method of sampling is often used by market researchers. Interviewers are given a quota
of subjects of a specified type to attempt to recruit. For example, an interviewer might be
told to go out and select 20 adult men, 20 adult women, 10 teenage girls and 10 teenage
boys so that they could interview them about their television viewing. Ideally the quotas
chosen would proportionally represent the characteristics of the underlying population.

Whilst this has the advantage of being relatively straightforward and potentially
representative, the chosen sample may not be representative of other characteristics that
weren't considered (a consequence of the non-random nature of sampling).

3. Judgement (or Purposive) Sampling

Also known as selective, or subjective, sampling, this technique relies on the judgement of the researcher when choosing whom to ask to participate. Researchers may thus implicitly choose a "representative" sample to suit their needs, or specifically approach individuals with certain characteristics. This approach is often used by the media when canvassing the public for opinions, and in qualitative research.
Judgement sampling has the advantage of being time- and cost-effective to perform whilst resulting in a range of responses (particularly useful in qualitative research). However, in addition to volunteer bias, it is also prone to errors of judgement by the researcher, and the findings, whilst potentially broad, will not necessarily be representative.

4. Snowball sampling

This method is commonly used in social sciences when investigating hard-to-reach groups.
Existing subjects are asked to nominate further subjects known to them, so the sample
increases in size like a rolling snowball. For example, when carrying out a survey of risk
behaviours amongst intravenous drug users, participants may be asked to nominate other
users to be interviewed.

Snowball sampling can be effective when a sampling frame is difficult to identify. However,
by selecting friends and acquaintances of subjects already investigated, there is a significant
risk of selection bias (choosing a large number of people with similar characteristics or views
to the initial individual identified).

Bias in sampling

There are five important potential sources of bias that should be considered when selecting
a sample, irrespective of the method used. Sampling bias may be introduced when:

1. Any pre-agreed sampling rules are deviated from
2. People in hard-to-reach groups are omitted
3. Selected individuals are replaced with others, for example if they are difficult to contact
4. There are low response rates
5. An out-of-date list is used as the sample frame (for example, if it excludes people who have recently moved to an area)

Q.3 Develop a research proposal on the “Role of secondary school environment in
promoting moral values of society”.

Introduction

People do not live their lives in moral or ethical isolation but grow up within particular moral
traditions (Reiss, 1999). Liberal democracy can only flourish if its citizens hold certain moral
and civic values, and manifest certain virtues (Althof & Berkowitz, 2006). In the modern era,
technology is affecting society in ubiquitous fashion while maintaining its upright position,
and both science and technology are also being influenced by society. The rapid advances in
science and technology and increased societal complexities also underpin the importance of
morals, values and ethics and their benefits to society. Morals refer to human behavior
where morality is the practical activity and, ethics describes the theoretical, systematic, and
rational reflection upon that human behavior (Churchill, 1982). Values are linked to beliefs
and attitudes and guide human behavior (Rennie, 2007). Morals, values, and ethics are
strongly attached to society, spirituality and culture (United Nations Educational Scientific
and Cultural Organization, 1991). There are three meanings of ethics. Firstly, ethics is commonly taken as a synonym for morality, the universal values and standards of conduct that every rational person wants every other to follow. Secondly, ethics is a well-established branch of philosophy that studies the sources of human values and standards, and struggles to locate them within theories of the human individual and social condition. Thirdly, there is professional ethics, which is neither universal nor ethical theory; it refers to the special codes of conduct adhered to by those who are engaged in a common pursuit. Professional ethics is an integral part of the concept of a profession (Kovac, 1996).

A wide range of misunderstandings and misconceptions surround morals, values and ethics (Churchill, 1982). Morals, values and ethics are sometimes difficult to understand because the misunderstandings and misconceptions surrounding them hinder arrival at the correct explanation. The objective of moral education lies in the fact that it can develop shared feelings with others, and makes one committed to one's own personal responsibilities and actions (Campbell, 2008). Moral agency is a dual state that encompasses the teacher as a moral person engaged in ethical teaching through professional conduct and, as a moral educator who teaches students with the same core values and principles that he or she strives
to uphold in practice (Campbell, 2003). Ethical knowledge can best capture the essence of
teaching professionalism as it enables the teachers to appreciate the complexities of their
moral agency (Campbell, 2008). Ethics is firmly connected to virtues of responsibility, trust
and credibility. It should always be fair, honest, transparent, and respectful of the rights and
privacy of others in society (Frank et al., 2011). Numerous sets of values exist in society. In
the context of science, three particular domains of values are present in society: the values
associated with education, values of science and values of science education. These three
values remain in close proximity, and interact or overlap with one another (Hildebrand,
2007). Thus science cannot be isolated from society. Values in science education include
values associated with teaching science in schools, epistemic values of science, societal
values and the personal values of scientists. The existence of value is not context specific.
For example, western science has different values from other indigenous science value sets
(Corrigan, Cooper, Keast, & King, 2010). Morality, values and ethics are always connected
and interrelated to society, and attached to societal culture, which are constantly influenced
by politics (Unesco, 1991; Witz, 1996).

A comparative study between the philosophical and theoretical basis of modern Western
moral education and the universal Islamic moral values and education is outlined to the
extent of gaining benefit and developing an enriched theoretical framework of moral and
character education that may increase the universal acceptability of the Western theoretical
framework of moral and character education. A range of teaching, learning and pedagogical
techniques are proposed with emphases on the specific domain of science education to
foster morals, values and ethics in students’ minds and develop various skills and attributes
necessary for success in the sciences. The proposed techniques and issues may help to
improve students’ moral and ethical understanding and reasoning, problem‐solving, and
decision‐making. Successful implementation of the proposed techniques and issues may
also help to reverse students’ demotivation and disengagement in sciences, which are
currently among the most pressing needs to address. Through the proposed changes students are able to grasp the social implications of their science studies, understand the business consequences and learn to control the environment; they can reflect on how science and technology considerations differ from personal and political values, and recognize the various limitations of science.

Morals and Ethics

Morality and ethics are part of a way of life and cannot be separated from all other aspects
of life experiences (Kang & Glassman, 2010). Moral education aims at promoting students’
moral development and character formation. The theoretical framework of moral education
is supported by moral philosophy, moral psychology and moral educational practices (Han,
2014). Beyond the scope of promoting rational pro‐social skills or virtues, moral education
of real human value should cultivate the meaningful and personally formative knowledge
that significantly transcends or avoids natural and/or social scientific understanding and
explanation (Carr, 2014). Moral education is about an inner change, which is a spiritual
matter and comes through the internalization of universal Islamic values (Halstead, 2007).
Ethics is the branch of philosophy which tries to probe the reasoning behind our moral life.
The critical examination and analysis through the concepts and principles of ethics help to
justify our moral choices and actions (Reiss, 1999). In real-life situations 'ethics' is frequently used as a more consensual word than 'morals', which is less favored. Many students and professionals cannot draw a sharp distinction between these two terms (McGavin, 2013). Recently, moral thinking and moral action were explored using a Deweyan framework, and it was concluded that moral thinking or reasoning exists as social capital, and is not a guide to moral action (Kang & Glassman, 2010). The key philosophical question for the study and
promotion of moral education relies on the epistemic status of moral reflection or
understanding and moral agency (Carr, 2014).

Brief Summary

Thus a rigorous synthesis of various philosophies, methods and goals of moral and character education based on solid empirical and theoretical research (Althof & Berkowitz, 2006) can enable us to conceptualize and articulate a solid theoretical framework that guides the optimal design of school programs to effectively foster morals, values, ethics and character education, and ultimately benefit society.

In the 21st century it is not surprising that many young students will face the ethical issues
raised by science that are too often lacking in their science education (Reiss, 1999). Values,
morality and ethics are part of our life and these cannot be separated from society
(Corrigan, Dillon & Gunstone, 2007; Kang & Glassman, 2010). Morals, ethics and values are
different branches of knowledge that have different theories and philosophies. Science
teachers are generally educated in science, and not in moral or ethical philosophy. It is
therefore unrealistic and unfair to expect them to teach ethics (Reiss, 1999) and morals as
separate but essential elements of science teaching. Again, teaching is fundamentally a
moral enterprise (Bullough Jr, 2011). Thus teachers have the responsibility to engage in
moral activities through their teaching profession. In science education, morals, values,
ethics and character education cannot be taught as a separate curriculum. But all these
essential elements should be entwined in all science curricula, and a range of different but appropriate teaching techniques is required to teach them (Anderson, 2000; Berkowitz, 1999; Unesco, 1991). Students are required to look both at the
consequences of any proposed course of action and at relevant intrinsic considerations
before reaching any moral/ethical conclusion (Reiss, 1999). Such integrated science
curricula can help students achieve a clear understanding of the moral and ethical
ramifications of science.

Students should be able to participate in a reasoned discussion about the ethical issues in science, which necessitates the incorporation of ethics into science teaching. Three components were suggested as keys to promoting effective discussions related to ethics and science (Chowning, 2005): content and lesson strategies, a decision-making model, and a familiarity with ethical perspectives. The strategies based on these three components may allow teachers to confidently address ethical issues in science. In this way teachers can help students develop an understanding of science as a social enterprise, and students can develop their skills to apply in the science
classroom. Other researchers (Frank et al., 2011) put forward their rationale to address
ethics within university curricula since multicultural societies are developing all around the
world without shared moral values. Thus an ethics course introduced into university curricula should convey knowledge and encourage a culture that fosters a developed mind through amended or reformed thinking. There is no agreement on the frontiers in morals or science. Over past centuries, the moral conclusions of humans have been found to be more stable than scientific conclusions. This is because moral responsibility demands a level of agency,
and people are responsible for their values as they are not for science. This comment
(Rolston, 1988) was supported by an important study (Bell & Lederman, 2003) conducted on
a range of professors of geographically diverse universities who had varied ranges of
expertise, and displayed their reasoning differently on the understanding about the nature
of science. It was revealed that the personal values of all academics were the principal
influence on their decision‐making processes. Other factors included morality or ethics, and
social issues.

Role‐Play and Discussions

Used as classroom exercises, role-plays and discussions can be effective in sharpening critical thinking and developing an appreciation of ethical aptitudes (Rosnow, 1990). Role-plays based on dual-use dilemmas motivate students' active engagement with ethical issues, and work as a catalyst for developing critical, analytical, argumentative and verbal skills. This activity should be done in an enjoyable and non-threatening way (Johnson, 2010). Various
situations involving ethical dilemmas can be given to students for discussion. Teachers can
participate in discussions and constantly monitor students’ reactions whether positive or
negative, and students' judgements. A set of example situations involving ethical dilemmas is reported (Rosnow, 1990) that can be useful to teachers, or they can draw examples from other valid sources. At the end of the discussions, students should be able to understand their own ethical assumptions and compare them with the acceptable norm. Importantly, students will be able to recognize any bias that can distort their ethical standpoint and to resolve any ethical ambiguities that may exist in their minds.

Historical Case Studies

Students generally respond very well to case studies (Kovac, 1996). Research shows that
scientific values can be introduced through historical case studies as a valuable tool for
teaching (Allchin, 1999).

However, the challenging task is how to identify the facts of a case, recognize the underlying
ethical dilemmas, and understand different perspectives involved (Chowning, 2005). A
recommended approach (Allchin, 1999) to teach scientific ethics is through a case method in
which students are introduced to ethical questions surrounding any realistic situation. In
this context, teachers need to understand the multi‐faceted relationship between science
and values, and appreciate the nature of science through reflexive exercises and case
studies. It is important to gain a historical view of science. The disasters that occurred in
Seveso and Bhopal may be up for discussion to link sciences and humanities; and instigate a
fruitful dialog between the faculty and the students (Frank et al., 2011), and thus can fulfil
the requirements for a successful case study implementation. A range of research (Zeidler, Sadler, Simmons & Howes, 2005) has supported the efficacy of using controversial
socioscientific case studies to foster critical thinking, and moral/ethical development. Each
socio‐scientific issue can provide an environment for engaging students in debate and
reflection that positively impact on their cognitive and moral development. In addressing
both sociological and psychological ramifications of curriculum and classroom practices, the
case‐based socio‐scientific issues may be applied as a pedagogical strategy.

Brief Summary

Many educators acknowledge the necessity for aligning science curriculum design with
cognitive and affective goals. Students want to see real‐life science applications and
practical implications such as experience in industrial settings and dealing with various
problem‐solving issues that can interest them in the sciences. Students can perceive their
science knowledge as useful and relevant when they consider scientific topics such as
medical, health, environment, energy, materials science and industry‐based matters
(Chowdhury, 2014) and value‐oriented and ethical issues related to science are presented to
them in a plausible and intelligible way. There is strong evidence that students like ethical
issues to be more widely addressed in science education than is often the case (Reiss, 1999).
Hence the presented teaching techniques, methods and important issues may significantly
impact on students’ critical thinking, values, morality, ethics, and character development.
And at the same time addressing ethical issues will provide the opportunity to learn applied
science and associated business consequences; help students build solid foundations in
science and enable further acquisition of scientific knowledge that considers culture and
context in making decisions, and relate their knowledge to other knowledge. Students gain
the capability to apply their scientific knowledge in understanding and controlling
environments. They are able to reflect on science, technology, and decisions, various
limitations of science, differences between science and technology, and how science and
technology considerations differ from personal and political values (Roberts, 1982). Overall,
these presented teaching techniques, methods and important issues will enhance student motivation and engagement, hence producing better-informed future citizens.
Q.4 Describe the essential stages of quantitative data analysis.

Quantitative Data: Definition


Quantitative data is defined as the value of data in the form of counts or numbers, where each data set has a unique numerical value associated with it. This data is any quantifiable
information that can be used for mathematical calculations and statistical analysis, such that
real-life decisions can be made based on these mathematical derivations. Quantitative data
is used to answer questions such as “How many?”, “How often?”, “How much?”. This data
can be verified and can also be conveniently evaluated using mathematical techniques. For
example, there are quantities corresponding to various parameters, for instance, “How
much did that laptop cost?” is a question which will collect quantitative data. There are
values associated with most measuring parameters such as pounds or kilograms for weight,
dollars for cost etc. Quantitative data makes measuring various parameters controllable due
to the ease of mathematical derivations they come with. Quantitative data is usually
collected for statistical analysis using surveys, polls or questionnaires sent across to a
specific section of a population. The retrieved results can be established across a
population.

Types of Quantitative Data with Examples

The most common types of quantitative data are as below:

• Counter: Count equated with entities. For example, the number of people who download
a particular application from the App Store.
• Measurement of physical objects: Calculating measurement of any physical thing. For
example, the HR executive carefully measures the size of each cubicle assigned to the
newly joined employees.
• Sensory calculation: Mechanism to naturally “sense” the measured parameters to create
a constant source of information. For example, a digital camera converts electromagnetic
information to a string of numerical data.
• Projection of data: Future projection of data can be done using algorithms and other
mathematical analysis tools. For example, a marketer will predict an increase in the sales
after launching a new product with thorough analysis.
• Quantification of qualitative entities: Assigning numbers to qualitative information. For
example, asking respondents of an online survey to share the likelihood of
recommendation on a scale of 0-10.

Quantitative Data: Collection Methods


As quantitative data is in the form of numbers, mathematical and statistical analysis of these
numbers can lead to establishing some conclusive results.

There are two main Quantitative Data Collection Methods:

Surveys: Traditionally, surveys were conducted using paper-based methods and have
gradually evolved into online mediums. Closed-ended questions form a major part of these
surveys as they are more effective in collecting quantitative data. The survey maker includes answer options which they think are the most appropriate for a particular question. Surveys
are integral in collecting feedback from an audience which is larger than the conventional
size. A critical factor about surveys is that the responses collected should be such that they
can be generalized to the entire population without significant discrepancies. On the basis
of the time involved in completing surveys, they are classified into the following –

• Longitudinal Studies: A type of observational research in which the market researcher conducts surveys from one specific time period to another, i.e., over a considerable course of time, is called a longitudinal survey. This survey is often implemented for trend analysis or studies where the primary objective is to collect and analyze a pattern in data.
• Cross-sectional Studies: A type of observational research in which the market researcher conducts surveys at a particular time period across the target sample is known as a cross-sectional survey. This survey type implements a questionnaire to understand a specific subject from the sample at a definite time period.
To administer a survey to collect quantitative data, the following principles are to be followed.
• Fundamental Levels of Measurement – Nominal, Ordinal, Interval and Ratio Scales: There are four measurement scales which are fundamental to creating a multiple-choice question in a survey for collecting quantitative data: nominal, ordinal, interval and ratio measurement scales. Without these fundamentals, no multiple-choice questions can be created.
• Use of Different Question Types: To collect quantitative data, closed-ended questions have to be used in a survey. They can be a mix of multiple question types, including multiple-choice questions like semantic differential scale questions, rating scale questions, etc., that can help collect data that can be analyzed and made sense of.
• Survey Distribution and Survey Data Collection: In the above, we have seen the process
of building a survey along with the survey design to collect quantitative data. Survey
distribution to collect data is the other important aspect of the survey process. There are
different ways of survey distribution. Some of the most commonly used methods are:

• Email: Sending a survey via email is the most commonly used and one of the most effective methods of survey distribution. You can use the QuestionPro email management feature to send out and collect survey responses.
• Buy respondents: Another effective way to distribute a survey and collect quantitative data is to use a purchased respondent sample. Since these respondents are knowledgeable and open to participating in research studies, the response rates are much higher.
• Embed survey in a website: Embedding a survey in a website yields a high number of responses, as the respondent is already in close proximity to the brand when the survey pops up.
• Social distribution: Using social media to distribute the survey aids in collecting a higher number of responses from people who are aware of the brand.
• QR code: QuestionPro QR codes store the URL for the survey. You can print/publish
this code in magazines, on signs, business cards, or on just about any object/medium.
• SMS survey: A quick and time effective way of conducting a survey to collect a high
number of responses is the SMS survey.
• QuestionPro app: The QuestionPro App allows you to quickly circulate surveys, and the responses can be collected both online and offline.
• API integration: You can use the API integration of the QuestionPro platform for
potential respondents to take your survey.

Q.5 Discuss main parts of a research report. Give at least two different sample formats
from two international universities to present different report formats.

Research Reports: Definition


Research reports are recorded data prepared by researchers or statisticians after analyzing
information gathered by conducting organized research, typically in the form
of surveys or qualitative methods.

Reports usually are spread across a vast horizon of topics but are focused on communicating
information about a particular topic and a very niche target market. The primary motive of
research reports is to convey integral details about a study for marketers to consider while
designing new strategies. Certain events, facts and other information based on incidents need to be relayed to the people in charge, and creating research reports is the most effective communication tool for this. Ideal research reports are extremely accurate in the offered
information with a clear objective and conclusion. There should be a clean and structured
format for these reports to be effective in relaying information. A research report is a
reliable source to recount details about a conducted research and is most often considered
to be a true testimony of all the work done to garner specificities of research.

The various sections of a research report are:

1. Summary
2. Background/Introduction
3. Implemented Methods
4. Results based on Analysis
5. Deliberation
6. Conclusion

Components of Research Reports


Research is imperative for launching a new product/service or a new feature. The markets
today are extremely volatile and competitive due to new entrants every day who may or
may not provide effective products. An organization needs to make the right decisions at
the right time to be relevant in such a market with updated products that suffice customer
demands.

The details of a research report may change with the purpose of research but the main
components of a report will remain constant. The research approach of the market
researcher also influences the style of writing reports. Here are seven main components of a
productive research report:

• Research Report Summary: The entire objective along with the overview of research are
to be included in a summary which is a couple of paragraphs in length. All the multiple
components of the research are explained in brief under the report summary. It should
be interesting enough to capture all the key elements of the report.
• Research Introduction: There is always a primary goal that the researcher is trying to
achieve through the report. In the introduction section, he/she can cover answers related to
this goal and establish a thesis that the rest of the report will strive to answer in detail.
This section should answer an integral question: "What is the current situation of the
goal?" After the research was conducted, did the organization achieve the goal
successfully, or is it still a work in progress? Provide such details in the introduction
part of the research report.
• Research Methodology: This is the most important section of the report, where all the
essential information lies. Readers can examine the data gathered on the topic, assess
the quality of the content provided, and other market researchers can verify the
approach taken. This section therefore needs to be highly informative, with each aspect of
the research discussed in detail. Information should be presented in a logical order
according to its priority and importance, and researchers should include references
wherever they have drawn on existing techniques.
• Research Results: This section gives a short description of the results, along with the
calculations conducted to achieve the goal. The detailed interpretation that follows data
analysis is usually carried out in the discussion part of the report.
• Research Discussion: The results are discussed in detail in this section, along with a
comparative analysis against related reports in the same domain. Any anomaly uncovered
during the research is also deliberated in the discussion section. While writing research
reports, the researcher has to connect the dots on how the results can be applied in the
real world.
• Research References and Conclusion: Conclude by summarizing all the research findings
and cite every author, article or other content piece from which references were taken.
Research proposal format of the Department for Continuing Education, University of Oxford:

The purpose of the research proposal is to demonstrate that the research you wish to
undertake is significant, necessary and feasible, that you will be able to make an original
contribution to the field, and that the project can be completed within the normal time
period. Some general guidelines and advice on structuring your proposal are provided
below. Research proposals should be no longer than 3,000 words (excluding the reference
list/bibliography).

• Title sheet

This should include your name, the degree programme to which you are applying
and your thesis proposal title.

• Topic statement

This should establish the general subject area you will be working in and how your
topic relates to it. Explain briefly why your topic is significant and what contribution
your research will make to the field.

• Research aims

These should set out the specific aims of your research and, if appropriate to your
discipline, the main research questions.

• Review of the literature


Provide a brief review of the significant literature and current research in your field
to place your own proposed research in context and to establish its potential
contribution to the field.

• Study design / theoretical orientation

Outline the theoretical approaches taken in your topic and indicate which approach
or approaches you propose to use in your research and why you plan to do so.

• Research methods

Briefly describe your proposed research methods, including the type of information
and sources to be used, the main research methods to be employed, any resources
needed and any ethical or safety issues identified.

• Tentative chapter outline

You may wish to include a tentative chapter outline if available at this stage.

• References/Bibliography

List all publications cited in your proposal using a suitable academic referencing
system. (Not included in the 3,000 word count.)

Research report writing sample of Victoria University

Below is a template/example of research report writing from Victoria University, Australia.

Title page

This includes details of the research topic, the supervisor's name and the supervisee's name.

Summary

This report discusses the changes that have occurred in the Australian workforce since the
end of World War II (1945-2000). A review of some of the available literature provides
insights into the changing role of women and migrants in the workforce, and the influence
of new technologies and changing levels of unemployment has also been considered.
The information presented in this report has been gathered from secondary sources, and
from Australian Bureau of Statistics’ data. The report has been prepared for submission as
Unit 4 of the Tertiary Studies Course at Victoria University.

Table of contents

In this section you list all of the essential and significant headings that make up the
main parts of your report.

1. Introduction

The profile of the Australian workforce has altered markedly since the end of World War II.
Australia has transformed from a nation of predominantly Anglo-Celtic culture and almost
full employment to one of rich cultural diversity with relatively high unemployment. This
report examines ways in which our workforce has changed, focusing on the following
categories: women's workforce participation rates, migrant workers' participation rates,
employment categories, unemployment rates and demographic profiles. This report will also
consider new influences affecting the workforce. This report is an assessable component of
the Preparation for Tertiary Studies course at Victoria University of Technology, Werribee
Campus.

1.1 Methodology

Information for this report was sourced from various secondary sources, all listed in the
Reference List. Data from publications by the Australian Bureau of Statistics also proved
valuable. This report is not a comprehensive review of the available literature, but
provides a broad overview of the topic.

1.2 Scope of report

Wherever the term 'workforce' on its own is used, it is in reference to the Australian
workforce. Where the information refers to a particular state, this will be noted. The
period under consideration is 1945 to 2000, although where available data does not cover
the entire period, this is stated. The report focuses on several key aspects of the Australian
workforce, and is not a comprehensive account of all changes that have occurred in the
workforce since World War II.

2. Findings

2.1 Women's workforce participation rate

The overall participation rate of women in the Australian workforce since the end of World
War II has increased markedly. The absence of male workers during the war 'brought into
the workforce considerable numbers of women who had not been employed before the war
broke out' (Ryan and Conlon 1989, p. 137). However, many women gave up their jobs when
the men returned, and their rates of pay compared to men were reduced in the post-war years
(Ryan and Conlon 1989, pp. 140-144). Edna Ryan and Anne Conlon provide a table which
shows that the proportion of women in the manufacturing industry peaked during the war,
declined until 1959, and then began to increase gradually.

2.2 migrate worker’s participant rate

The years since the end of the Second World War have seen an increase in immigration into
Australia and therefore an increase in the number of migrants in the workforce. Post-war
Australia saw the rapid national development of projects such as the Snowy Mountains
Hydro-Electric Scheme. This meant a great demand for labour, which the Australian
workforce could not fulfil at the time. Migrants were therefore encouraged to come to
Australia to fill such jobs, resulting in a period of high migrant employment (Carroll 1989,
p. 48).

2.3 Employment categories

The major types of employment dominating the workforce at the conclusion of World War
Two differ greatly from the categories of employment available in recent times. After 1945,
the Government encouraged manufacturing. This was initially to provide employment for
returned servicemen, then later to lower imports as a means to ease its balance of
payment difficulties. Rural industries also prospered at this time due to a short supply of
food and basic commodities in countries badly ravaged by the war. By 1950, 28 per cent of
Australians were employed in secondary industries and 17 per cent in primary industries
(Carroll 1989, p. 48). The proportion of Australians employed in these areas has since fallen.

2.4 Unemployment and demographic profiles

Australia has experienced a dramatic increase in the rate of unemployment since the end
of World War Two. At that time, and for approximately the next thirty years,
unemployment was virtually non-existent and work was readily available (Carroll 1989, p.
48). In 1970, the unemployment rate was 1.5 per cent of the labour force, and the
underemployment rate was less than 1 per cent (Norris and Wooden 1996, p. 8). Underemployment
is defined as part-time workers who would prefer to work more hours and full-time workers
who worked less than their usual hours for economic reasons.

3. Conclusion

The Australian workforce has altered greatly in the fifty-five years since the end of World
War II. Many of these changes have been very positive, such as the growth of women in the
workforce, the formation of many new employment categories and the introduction of
migration to cope with a time of great industrial growth, which has in turn enriched our
culture.

4. Recommendations

The information collected for this report provides a broad overview of key changes in the
Australian workforce. Further analysis would be possible if the relevant data for each year
from 1945 to 2000 were purchased from the Australian Bureau of Statistics. The reliance on
secondary sources has resulted in some patchy data; for example, it is not possible to
identify, for any given year, a breakdown of the Australian workforce by all of the
categories discussed in this report.
