Psychological Research Notes
Psychological research is the scientific study of the mind and behavior. It involves conducting
experiments, surveys, observations, and other methods to gather information about how people
think, feel, and act. The goal is to understand patterns of behavior, mental processes, and how
different factors affect them.
NATURE OF PSYCHOLOGICAL RESEARCH
1. Scientific Approach
Psychological research uses the scientific method, which means it follows a series of steps to
investigate a question or problem.
Example:
If a psychologist wants to know if listening to music helps people study better, they might form
a question like: "Does listening to music while studying improve focus?" Then they will test it
by setting up an experiment.
2. Careful and Organized
Psychologists carefully plan their research. They make sure their experiments are done in a
controlled and organized way so they can trust the results. This means controlling things that
might affect the outcome, like distractions or the time of day.
Example:
In a study about music and focus, the psychologist might have two groups: one group studies
in silence, and the other listens to music. Both groups are treated the same in all other ways
(same study material, same amount of time, etc.) to ensure a fair test.
3. Based on Real Data (Empirical)
Psychologists rely on real data, which means facts and information they can measure or
observe, rather than just guesses or opinions.
Example:
The psychologist would collect data like how many answers each person got correct on a test
after studying with or without music. This data is real, measurable information.
4. Objective (Unbiased)
Psychological research tries to be objective, meaning the results should not be influenced by
the researcher’s personal feelings or beliefs.
Example:
If the psychologist likes music, they shouldn’t let their personal feelings affect how they
interpret the results. If the data shows that music didn’t improve focus, they must report it
honestly, without changing it based on their own opinions.
5. Different Research Methods
Psychologists use different ways to gather information. Some methods include experiments,
surveys, interviews, or observations.
Example:
• Experiment: A psychologist could test how different types of music affect
concentration.
• Survey: They might ask a group of students how often they listen to music while
studying and how well they think it helps.
• Observation: They might watch people studying in a library to see if they focus better
with music.
6. Can Involve Numbers (Quantitative) and Descriptions (Qualitative)
Psychological research can include numbers (like how many people answered correctly on a
test) or descriptions (like how students feel about studying with music).
Example:
• Quantitative: The number of correct answers in a test (e.g., 15 out of 20).
• Qualitative: The feelings of students when they were asked how they felt about
listening to music while studying (e.g., "I feel more relaxed when I listen to music").
7. Exploration and Explanation
Psychological research can either explore a new idea or try to explain something we already
know.
Example:
• Exploratory: A researcher might want to explore if pets help reduce stress. They may
not know the answer yet, so they start by collecting information.
• Explanatory: A researcher may already know that exercise improves mood, and now
they want to explain why this happens (e.g., because it releases feel-good chemicals in
the brain).
8. Ethics
Psychologists follow ethical guidelines to make sure research is done in a way that respects
the participants and keeps them safe.
Example:
If the psychologist is studying stress, they must make sure that no one gets hurt by the stress
they might cause in the experiment. Participants are also told about the study beforehand, and
they can choose whether to take part or not.
ETHICS OF RESEARCH
Ethics in psychological research are guidelines that ensure the research is conducted in a
responsible, respectful, and fair way. These guidelines protect participants, maintain the
integrity of the research, and promote trust in the findings. Here are the key ethics of research
in psychology:
1. Informed Consent
• What it means: Participants must know what the study is about, what will happen
during the study, and any possible risks before they agree to take part.
• Example:
Before starting an experiment, a researcher explains to participants that they will be
asked to complete a questionnaire about their feelings. The participants then choose
whether they want to participate.
2. Confidentiality
• What it means: Researchers must keep all the information from participants private.
Personal details and responses should not be shared without permission.
• Example:
If a researcher is studying how people feel about stress, they keep each participant's
answers anonymous and don't share any names or identifying information.
3. Right to Withdraw
• What it means: Participants have the right to leave the study at any time without any
negative consequences.
• Example:
If a participant feels uncomfortable during an experiment, they can stop participating
whenever they want, and the researcher must respect that decision.
4. No Harm (Physical or Psychological)
• What it means: Research should not cause harm to participants, whether physical,
emotional, or mental.
• Example:
A psychologist studying stress must avoid making participants feel extreme anxiety or
distress. They might give breaks, offer counseling, or ensure the experience is not too
overwhelming.
5. Deception (Only When Necessary)
• What it means: Deception is sometimes used in research, but only if it's absolutely
necessary for the study and if the benefits outweigh the risks. Participants should be
debriefed (informed about the truth) afterward.
• Example:
In an experiment where participants are tested on how they react to surprise situations,
they may not be told everything beforehand. However, after the study, they are told
exactly why deception was used.
6. Debriefing
• What it means: After the study, researchers should explain the true purpose of the
research and any deception used. This helps participants understand the study and
ensures they are not left confused or misinformed.
• Example:
After completing a stress-related experiment, participants are told exactly what the
study was about, why the study was important, and any deception that was used, if
applicable.
7. Respect for Participants
• What it means: Researchers must treat participants with dignity and respect
throughout the research process.
• Example:
A researcher must not pressure or coerce anyone into participating and should be
mindful of any individual differences or needs, ensuring everyone is treated equally and
fairly.
Summary:
In psychological research, ethics are the moral rules that guide how research is conducted.
These rules ensure that research is safe, fair, and respectful to participants:
1. Informed Consent: Participants agree to take part with full understanding.
2. Confidentiality: Personal information is kept private.
3. Right to Withdraw: Participants can leave the study at any time.
4. No Harm: Research should not harm participants.
5. Deception (if necessary): Deception is used only when essential, and participants are
debriefed afterward.
6. Debriefing: Explaining the purpose of the study after it’s over.
7. Respect: Treating participants fairly and respectfully.
These ethics help ensure that psychological research is conducted responsibly and safely for
everyone involved.
METHODS OF DATA COLLECTION
In psychological research, data collection methods are the ways researchers gather
information to understand human behavior, thoughts, and feelings. There are several different
methods, each suited for different types of research questions. Here's an easy breakdown of the
most common methods of data collection:
Case study
A case study is when a researcher looks closely at one person, group, or situation to learn a lot
about it. It’s like studying something in great detail to understand it better.
How a Case Study Works:
1. Deep Investigation: A case study involves gathering a lot of information from different
sources, like interviews, observations, tests, or records.
2. Detailed Study: Instead of just looking at a lot of people, the researcher focuses closely
on one person or group to learn everything possible about them.
3. Long-Term: Sometimes, case studies can go on for a long time to understand changes
over weeks, months, or even years.
Why Use Case Studies?
• Unique Information: Case studies give us a lot of detail that might not be found in
other research methods. They help us learn about rare conditions, special behaviors, or
unique situations.
• Real-Life Understanding: By focusing on one person or group, researchers can
understand how a condition affects them in their real life, which can lead to better
treatment or support.
Example:
A psychologist is studying a young boy named Tom who has trouble making friends and often
feels anxious in social situations.
• Step 1: Gathering Information
The psychologist interviews Tom’s parents and teachers. They learn that Tom has been
shy since he was a child and tends to avoid group activities.
• Step 2: Observing Behavior
The psychologist watches Tom during recess at school and notices that he plays alone
and often looks worried when other children approach him.
• Step 3: Analyzing the Data
After reviewing all the information, the psychologist concludes that Tom’s social
anxiety makes it difficult for him to connect with peers, which causes him to avoid
social situations altogether.
• Step 4: Conclusion
The psychologist suggests ways Tom can improve his social skills, like gradually
joining group activities and practicing conversation techniques with his parents.
In this case study, the psychologist used detailed observations and interviews to understand
Tom’s social anxiety and recommended ways to help him.
Summary:
• Questionnaire: A set of written questions to gather information from people (e.g.,
asking about exercise habits).
• Observation: Watching people in a natural setting to see how they behave (e.g.,
watching kids play at recess).
• Experiment: A controlled test to see how changing one thing affects something else
(e.g., testing if music helps people focus).
These methods help researchers collect data to understand human behavior better.
Interview
An interview is a one-on-one conversation in which the researcher asks questions to understand a person's thoughts, feelings, or experiences. There are several types of interviews:
1. Structured Interview
• What it is: The interviewer asks a fixed set of prepared questions in the same order for every participant.
• When to use: When you need consistent answers that are easy to compare.
• Pros: Responses are easy to analyze and compare.
• Cons: Little flexibility for follow-up questions.
2. Unstructured Interview
• What it is: The interview is informal, with no fixed questions. The interviewer allows
the conversation to flow naturally.
• When to use: When you want detailed, personal answers and flexibility.
• Pros: Provides rich and detailed responses.
• Cons: Hard to analyze and compare responses.
3. Semi-Structured Interview
• What it is: A mix of structured and unstructured. The interviewer has a list of questions
but can ask follow-up questions based on responses.
• When to use: When you want flexibility but also need some consistency.
• Pros: Allows for deeper exploration while keeping things consistent.
• Cons: Some responses can still be hard to analyze.
5. Diagnostic Interview
• What it is: Used in clinical settings to assess mental health. The interviewer asks
specific questions to diagnose a condition.
• When to use: When you need to diagnose mental health issues.
• Pros: Helps in accurate diagnosis.
• Cons: Can be stressful for the participant.
6. Narrative Interview
• What it is: The participant tells their story or personal experience, and the interviewer
listens.
• When to use: When you want to understand someone's life story or experience.
• Pros: Gives deep, personal insights.
• Cons: Difficult to analyze because of the personal nature.
Summary:
• Structured: Fixed questions, easy to compare.
• Unstructured: Free-flowing conversation, deep insights.
• Semi-Structured: Some fixed questions, flexible.
• Focus Group: Group discussion, diverse views.
• Diagnostic: For diagnosing mental health.
• Narrative: Personal storytelling, rich details.
Example of an Interview:
Let’s say a researcher wants to learn about how teenagers feel about social media. They might
set up an interview with a teenager, asking questions like:
• "How often do you use social media?"
• "What do you like or dislike about it?"
• "How does social media make you feel?"
The researcher listens carefully to the answers and might ask follow-up questions based on
what the teenager says. This helps the researcher understand the teenager’s personal feelings
and experiences with social media.
2. Focus Group Discussion
A focus group is a method where a small group of people come together to discuss a specific
topic. A moderator (someone who guides the discussion) asks open-ended questions, and the
group talks about their opinions and experiences. The moderator helps keep the discussion on
track, and the researcher listens to learn from the group's views.
Example of a Focus Group Discussion:
Let’s say a company wants to improve its product, a new phone. They might set up a focus
group with 8–10 people who have used the phone. The moderator asks:
• "What do you think about the design of the phone?"
• "How easy is the phone to use?"
• "What features do you wish the phone had?"
The participants share their opinions and talk with each other. The researcher listens to their
ideas to understand what people like or don’t like about the phone and how the company can
improve it.
Summary:
• Interview: A one-on-one conversation where the researcher asks questions to
understand a person’s thoughts or experiences (e.g., asking a teenager about social
media).
• Focus Group Discussion: A group of people discuss a topic together, and the
researcher listens to understand different opinions (e.g., asking people about a new
product to get feedback).
Both methods allow researchers to gather in-depth information, but interviews focus on
individual answers, while focus groups allow for group discussions and sharing of ideas.
Secondary Data
Secondary data is information that someone else has already collected and published. It helps
researchers save time and money by using data that's already available. For example, if you
want to know how many people own pets in a city, you might use reports from animal
organizations or government surveys rather than doing your own research.
What is a Hypothesis?
A hypothesis is an educated guess or a prediction about what you think will happen in a study
or experiment. It’s based on what you already know and what you expect to find. A hypothesis
is not a fact—it's something you test to see if it's true or false.
How It Works:
• Step 1: You notice something and wonder why it happens or how it works.
• Step 2: You make a guess or prediction based on what you think the answer might be.
• Step 3: You test the guess through research or an experiment.
Example of a Hypothesis:
Imagine you want to find out if playing video games for an hour a day affects people's mood.
• Your observation: You notice that your friend seems happier after playing video
games.
• Your hypothesis (educated guess): "I think playing video games for an hour a day
makes people feel happier."
• Testing the hypothesis: You could ask a group of people to play video games for an
hour each day for a week, and then measure their mood before and after. If their mood
improves, your hypothesis might be correct. If not, it could be wrong.
Think of it like this:
Imagine you want to know if drinking more water makes people feel more energetic. Before
you do an experiment, you might have a guess about it. That guess is your hypothesis.
Example:
Let’s say you think that drinking water makes people feel more awake and energetic.
• Your guess (hypothesis): "Drinking more water will make people feel less tired."
Now, you do an experiment to test your hypothesis. You ask a group of people to drink more
water for a week, and then you check if they feel less tired. If they do, your guess was right! If
they don’t, your guess was wrong.
Summary:
A hypothesis is a prediction you make before starting a study or experiment. It's like saying,
"I think this will happen," and then testing to see if you were right. For example, "I think
playing video games makes people happier" is a hypothesis that you can test to find out if it's
true.
TYPES OF HYPOTHESES
1. Null Hypothesis (H₀)
The null hypothesis is a prediction that nothing significant is happening. It suggests that any
observed effect is due to chance or random factors rather than a real relationship.
The null hypothesis is saying nothing special is happening.
It’s like saying, “I don’t think this will make a difference.”
Example:
Suppose you’re studying whether eating fruits helps improve people's skin health.
• Null Hypothesis (H₀): "Eating fruits does not improve skin health."
• This means that eating fruits has no effect on skin health, and if you see any changes
in people's skin, it's just by random chance, not because of eating fruits.
2. Alternative Hypothesis (H₁ or Ha)
The alternative hypothesis is the opposite of the null hypothesis. It suggests that something
is happening—that there is a real effect or relationship. It’s what you’re trying to prove with
your research.
Put simply, the alternative hypothesis says something is happening.
It’s like saying, “I believe this will make a difference.”
Example:
If you’re studying the same topic—fruits and skin health—your alternative hypothesis would
be:
• Alternative Hypothesis (H₁): "Eating fruits does improve skin health."
• This suggests that eating fruits does have a positive impact on skin health.
3. Directional Hypothesis
A directional hypothesis goes a step further by predicting how something will happen or the
direction of the effect. It’s more specific about the nature of the relationship.
A directional hypothesis predicts not just that something will happen, but also what
direction it will go.
It’s like saying, “I believe this will improve (or make worse) something.”
Example:
Let’s say you're studying the effect of exercise on mood:
• Directional Hypothesis: "Exercising for 30 minutes every day will increase
happiness."
• Here, you're predicting that exercise will have a positive effect on mood. You’re clearly
saying that exercise improves happiness.
4. Non-Directional Hypothesis
A non-directional hypothesis only predicts that there will be an effect, but it does not specify
the direction (positive or negative). It just says something will happen, but not exactly what.
A non-directional hypothesis predicts that something will happen, but it doesn’t say how
it will happen.
It’s like saying, “I believe there’s a relationship, but I don’t know if it’s good or bad.”
Example:
Now, let’s consider studying the relationship between sleep and academic performance:
• Non-Directional Hypothesis: "The amount of sleep a person gets will affect their
academic performance."
• This doesn’t say whether more sleep will improve performance or if less sleep will
hurt performance—it just predicts that sleep affects academic performance in some
way.
5. Research Hypothesis
The research hypothesis is another name for the alternative hypothesis. It’s a statement that
the researcher believes is true and wants to test. It’s basically what you expect will happen.
It’s like saying, “This is what I believe will happen, and I’m going to test it.”
Example:
Let’s say you're researching how social media use affects teenagers' self-esteem.
• Research Hypothesis: "Using social media lowers teenagers' self-esteem."
• This is your belief or prediction about what will happen, and you will design your
research to test if this is true.
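The null-versus-alternative logic above can be sketched as a simple permutation test: assume the null hypothesis is true, shuffle the group labels many times, and count how often a difference as large as the observed one appears by chance alone. Below is a minimal Python sketch; the skin-health scores are invented purely for illustration.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel participants at random, as H0 says we may
        diff = abs(mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm  # p-value: how often chance alone matches the observed gap

# hypothetical skin-health scores: fruit eaters vs. a comparison group
fruit    = [72, 75, 78, 74, 77]
no_fruit = [65, 63, 68, 64, 66]
p = permutation_test(fruit, no_fruit)
```

A small p-value (conventionally below 0.05) would lead a researcher to reject the null hypothesis in favor of the alternative.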
TYPES OF RESEARCH
1. Qualitative Research
Qualitative research is about understanding experiences, feelings, and opinions. It focuses
on describing things in a detailed, non-numerical way. Researchers use words instead of
numbers to explore the "how" and "why" of things.
• Goal: To understand how people feel, think, or experience something.
• Data Type: Descriptive, in words or pictures.
• Methods Used: Interviews, focus groups, case studies, observations.
• Focus: Depth of understanding.
Example of Qualitative Research:
Let’s say you're studying how people feel about a new park in the city.
• You interview a group of people and ask, "How do you feel when you visit the park?"
• You get answers like: "I feel relaxed," "It’s peaceful," "I love the trees," etc.
• The researcher collects these personal stories to understand people's feelings about
the park.
2. Quantitative Research
Quantitative research is about measuring things using numbers. It focuses on counting or
measuring data to find patterns or relationships. The goal is to quantify information and
often test theories or hypotheses using numbers.
• Goal: To measure or count something and see how often or how much it happens.
• Data Type: Numerical (numbers).
• Methods Used: Surveys with multiple-choice questions, experiments, statistical
analysis.
• Focus: Measuring and generalizing results.
Example of Quantitative Research:
Let’s say you want to study how often people visit the new park in the city.
• You send out a survey with questions like: "How many times a week do you visit the
park?" and "How long do you stay each time?"
• You gather answers like: "5 visits per week," "2 hours per visit," etc.
• The researcher then counts the total number of visits to determine how popular the park
is, and uses the numbers to draw conclusions about the park’s usage.
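The counting step in the park example amounts to simple arithmetic on the survey responses; here is a minimal Python sketch with made-up answers.

```python
from statistics import mean

# hypothetical answers to "How many times a week do you visit the park?"
visits_per_week = [5, 2, 3, 7, 4]

total_visits   = sum(visits_per_week)    # 21 visits reported in total
average_visits = mean(visits_per_week)   # 4.2 visits per person per week
```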
STEPS OF PSYCHOLOGICAL RESEARCH
1. Choose a Topic
The first thing you need is a topic you want to research. This is the broad area you're interested
in exploring. In psychology, your topic should be something that can be studied scientifically—
this means you should be able to measure it in some way.
Example:
You’re curious if listening to music helps people focus better when studying. So, you choose
concentration and music as your topic.
2. Formulate a Hypothesis
A hypothesis is like an educated guess about what will happen in your study. It’s a prediction
based on what you know or think might be true.
• The hypothesis should be clear and testable. It should state what you expect the
relationship between variables to be. A variable is anything that can change in an
experiment (e.g., the type of music or the level of concentration).
Example:
You predict that listening to music while studying will increase concentration.
So, your hypothesis could be:
"Students who listen to music while studying will have higher test scores on a concentration
test than students who study in silence."
3. Design the Study
Once you have your hypothesis, you need to design the experiment. This means planning
exactly how you’re going to test your hypothesis.
• You’ll decide how to measure concentration (e.g., by using a concentration test, a
performance task, or self-report questionnaires).
• You’ll figure out who will participate, how many people you need, and how you will
divide them into different groups.
• You need to decide on the control variables (things you’ll keep the same for everyone),
like the amount of time spent studying or the type of material studied.
• You’ll also decide on your independent variable (the thing you change, like whether
they listen to music or not) and your dependent variable (the outcome you’re
measuring, like concentration levels).
Example:
• Participants: You choose 30 students.
• Groups: 15 students study with music, 15 students study in silence (control group).
• Materials: A concentration test (like solving math problems or memory tasks).
• Control Variables: All participants study for 30 minutes using the same study material.
4. Collect Data
This is the phase where you actually conduct the experiment. You get the participants, follow
the experiment plan, and collect the data.
• You’ll record the results of each participant, making sure to stay organized and accurate.
• Data could include things like the scores participants get on the concentration test or
their feedback about how they felt while studying.
Example:
• Participants in the music group listen to music while studying, and participants in the
control group study in silence.
• After 30 minutes, everyone takes the concentration test, and you record their scores.
5. Analyze the Data
Once the data is collected, it’s time to analyze it. This means looking for patterns in the data
to determine if the music group performed differently from the control group.
• You might use statistics to analyze the data. Statistical tests can tell you if any
differences are statistically significant (meaning they’re likely not just due to chance).
• You could also use graphs or charts to make the data easier to understand.
Example:
• After analyzing the scores, you might find that the music group scored an average of
85%, while the silent group scored 75%.
• You could run a statistical test (like a t-test) to see if this difference is significant (i.e.,
is it likely that music really had an effect on concentration?).
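In practice, researchers run the t-test with statistical software (for example, scipy.stats.ttest_ind in Python). The t statistic itself can be sketched in a few lines of standard-library Python; the concentration scores below are hypothetical, chosen to match the 85% vs. 75% averages in the example.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic: difference in group means
    # divided by the estimated standard error of that difference
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# hypothetical concentration-test scores (out of 100)
music_group  = [85, 88, 82, 90, 80]   # mean 85
silent_group = [75, 78, 72, 74, 76]   # mean 75
t = welch_t(music_group, silent_group)
```

The larger |t| is, the less likely the group difference is due to chance; the exact cutoff comes from a t-distribution table.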
6. Draw Conclusions
This step is about interpreting the data and deciding whether your hypothesis was correct.
• Conclusion: You need to say whether the results of your experiment support your
hypothesis or not. If your hypothesis is supported, great! If not, you may have to think
about why things turned out differently than expected.
Example:
• If the music group scores higher and the difference is statistically significant, you would
conclude that listening to music seems to improve concentration while studying.
• If there’s no significant difference between the two groups, you might conclude that
music doesn’t impact concentration as you thought.
7. Report Findings
The final step is to share your results with others. This could involve writing a research paper
or report, giving a presentation, or publishing your findings. The goal is to communicate what
you found, how you found it, and why it matters.
• Your report should include the introduction (why the topic is important), the method
(how the study was conducted), the results (what the data showed), and the discussion
(what your results mean).
• If you found something interesting, you might suggest areas for future research or ways
the results could be applied.
Example:
• You would write a report explaining the purpose of the study (to see if music improves
concentration), how you conducted the experiment (two groups, music vs. silence),
what your results were (the music group did better), and what that means (music could
be a useful tool for improving focus).
In Summary:
• Step 1: Choose a Topic – Pick a research question that interests you.
• Step 2: Formulate a Hypothesis – Make a testable prediction about the outcome.
• Step 3: Design the Study – Plan how you will test the hypothesis.
• Step 4: Collect Data – Conduct the experiment and record results.
• Step 5: Analyze the Data – Look for patterns and test for significance.
• Step 6: Draw Conclusions – Decide if your hypothesis was supported or not.
• Step 7: Report Findings – Share your results with others and suggest future research.
This process helps you gather solid, reliable information about human behavior or mental
processes.
DISTINGUISH BETWEEN AGE AND GRADE NORMS
1. Age Norms
Definition:
Age norms are standards or expectations that are based on the typical development or
performance of individuals in a certain age group. These norms reflect the average or typical
behaviors, skills, or abilities that most people of a certain age exhibit.
• Purpose:
Age norms are used to understand what is typical for children or adults at different
stages of life. Researchers use these norms to compare individual performance to the
average for their age.
• How it's Used:
In psychological research, age norms are used to assess if someone is developing at a
typical rate. For example, researchers may want to know if a child is performing at a
developmental level that’s typical for their age in areas like motor skills, language,
intelligence, or social behavior.
• Example:
Imagine a study testing memory ability in children. A researcher may use age norms to
see how a 7-year-old child compares to other 7-year-olds in memory tasks.
For instance, if the average 7-year-old can remember 8 words after a short delay, then
a 7-year-old who can also remember 8 words would be considered to be performing
within the age norm. But if the child remembers only 3 words, they might be
performing below the typical ability for their age.
• Why Age Norms Matter:
They help psychologists and educators know if a child is developing skills at a typical
pace compared to others their age. If a child is significantly ahead or behind age norms,
it might be an indicator that the child has a unique strength or challenge.
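Comparing a child to an age norm is often done with a z-score: the child's distance from the age-group average, measured in standard-deviation units. Here is a minimal sketch; the recall scores are invented for illustration.

```python
from statistics import mean, stdev

def z_score(score, norm_scores):
    # distance from the age-group average, in standard-deviation units
    return (score - mean(norm_scores)) / stdev(norm_scores)

# hypothetical word-recall scores from a sample of 7-year-olds (average: 8 words)
age_norms = [8, 7, 9, 8, 6, 8, 10, 7, 8, 9]

typical = z_score(8, age_norms)   # 0.0: right at the age norm
low     = z_score(3, age_norms)   # strongly negative: well below the age norm
```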
2. Grade Norms
Definition:
Grade norms are standards or expectations based on the performance of individuals in the same
school grade, rather than just the same age. This means that children who are in the same
grade (such as 1st grade, 3rd grade, etc.) are compared to each other, even if their ages vary
slightly.
• Purpose:
Grade norms are particularly useful in school settings. Since children in the same grade
may have a wide range of ages (e.g., a 6-year-old and an 8-year-old may both be in 1st
grade), grade norms focus on what is expected for children of the same educational
level. This helps to evaluate a student's academic performance relative to their peers in
the same grade.
• How it's Used:
In educational research, grade norms help teachers, schools, and psychologists
understand if a student’s performance in subjects like reading, math, or writing is typical
for their grade. Grade norms can also show if a child is excelling or struggling compared
to others in the same academic environment.
• Example:
Imagine a researcher is testing reading skills of 2nd graders. In this case, grade norms
would focus on what is expected from students who are in 2nd grade, regardless of
whether they are younger or older.
Let’s say the average reading score for 2nd graders is 85%. A 7-year-old child who is
in 2nd grade, and scores 85%, is performing at a grade-appropriate level. If a different
7-year-old in 2nd grade scores 70%, their performance would be considered below
grade norms for 2nd graders, even though their age might be typical for that grade.
• Why Grade Norms Matter:
Grade norms help evaluate how well students are performing in school in relation to
their peers. If a student is doing much better or much worse than their classmates, grade
norms help identify if they may need additional support or if they are ready for more
advanced work.
Age Norms vs. Grade Norms:
• Based on: age norms use age (e.g., 6 years old, 7 years old); grade norms use school grade (e.g., 1st grade, 2nd grade).
• Purpose: age norms compare development or performance with peers of the same age; grade norms compare academic performance with peers of the same grade.
• Age Range: age norms cover a narrower age group, usually a year or two apart; grade norms cover a wider age range, because children in the same grade may have different ages.
SAMPLING METHODS
Probability Sampling
Probability sampling is a method where every member of the population has a known chance of being selected, because selection is random.
1. Simple Random Sampling
What it is:
Every person in the population has an equal chance of being selected.
Example:
Drawing names randomly from a hat to choose participants.
2. Systematic Sampling
What it is:
In this method, you select every nth person from a list. The first person is selected randomly,
and then you pick every “nth” person from the list.
How it works:
You choose a starting point randomly, and then you select every 3rd, 5th, 10th, etc., person on
the list.
Example:
Let’s say you want to pick 5 students from a class of 30. First, you randomly pick a student,
and then you select every 6th student after that (e.g., pick the 2nd, 8th, 14th, 20th, and 26th
students).
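The every-nth-person rule is easy to express in code. A minimal Python sketch, using the class-of-30 setup from the example:

```python
import random

def systematic_sample(population, k):
    # sampling interval: population size divided by sample size
    n = len(population) // k
    start = random.randrange(n)       # random starting point in the first interval
    return population[start::n][:k]   # then every nth person after that

students = list(range(1, 31))         # a class of 30 students
chosen = systematic_sample(students, 5)
# e.g. students 2, 8, 14, 20, 26 if the random start happens to be student 2
```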
3. Stratified Sampling
What it is:
This method divides the population into subgroups (called strata) based on a certain
characteristic (like age, gender, or grade). Then, you randomly select participants from each
subgroup.
How it works:
First, you divide the population into different groups, then randomly choose from each group
to make sure all groups are represented.
Example:
If you are studying student satisfaction in a school with 100 students (50 boys and 50 girls),
you would divide them into two groups: boys and girls. Then, you randomly select an equal
number of boys and girls to participate in your survey, ensuring that both groups are
represented.
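The divide-then-randomly-select procedure can be sketched in Python (a minimal sketch; the school of 50 boys and 50 girls and the choice of 10 per group are invented for illustration):

```python
import random

def stratified_sample(strata, per_stratum):
    """Randomly pick the same number of participants from each subgroup (stratum)."""
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, per_stratum))
    return sample

# 100 students: 50 boys and 50 girls; randomly select 10 from each subgroup.
population = {
    "boys": [f"boy_{i}" for i in range(50)],
    "girls": [f"girl_{i}" for i in range(50)],
}
survey_sample = stratified_sample(population, 10)
print(len(survey_sample))  # 20 participants, 10 from each stratum
```

Because selection happens inside each stratum, both groups are guaranteed to be represented.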
4. Cluster Sampling
What it is:
In cluster sampling, you divide the population into groups or clusters, and then randomly select
some of these clusters. After that, you collect data from everyone within the chosen clusters.
How it works:
Instead of selecting individuals randomly, you select entire groups (clusters) randomly and
gather data from all members of the selected clusters.
Example:
Imagine you want to survey students in different schools across a city. Instead of randomly
picking individual students from all schools, you randomly choose a few schools (clusters) and
then survey all the students in those selected schools.
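The whole-cluster selection can be sketched in Python (the school names and student IDs are invented for the example):

```python
import random

def cluster_sample(clusters, num_clusters):
    """Randomly select whole clusters, then take every member of each chosen cluster."""
    chosen = random.sample(list(clusters), num_clusters)
    return [member for name in chosen for member in clusters[name]]

# Schools across a city: survey all students in 2 randomly chosen schools.
schools = {
    "North High": ["N1", "N2", "N3"],
    "South High": ["S1", "S2"],
    "East High": ["E1", "E2", "E3", "E4"],
}
sample = cluster_sample(schools, 2)
print(sample)
```

Note the contrast with stratified sampling: there you sample from every group; here you sample the groups themselves and then take everyone inside them.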
Type of Sampling — How it Works — Example:
• Simple Random Sampling: Every person has an equal chance of being selected. Example: drawing names randomly from a hat.
• Systematic Sampling: Start with a random person, then pick every nth person. Example: selecting every 6th student from a list of 30.
Non-Probability Sampling
is a method where participants are not selected randomly. Instead, the selection is based on the researcher’s choice or convenience. This means that not everyone has an equal chance of being selected, which can sometimes lead to bias. However, non-probability sampling is quicker, cheaper, and easier to conduct than probability sampling. Here are the main types of non-probability sampling with simple examples:
1. Convenience Sampling
• What it is: The researcher selects participants who are easiest to access or who are
nearby. This method saves time and effort but may not give a representative sample.
• Example:
A teacher wants to survey students about their online learning experience. Instead of
selecting students randomly from the entire school, they just ask the students in their
own class. This is convenient, but it may not represent all students in the school.
2. Judgmental (Purposive) Sampling
• What it is: The researcher selects participants based on their judgment or because they
have specific characteristics that are important for the study.
• Example:
A researcher wants to study the experience of elderly people who use smartphones.
They will purposefully choose older adults who already use smartphones for the study,
because these participants are relevant to the research question.
3. Snowball Sampling
Snowball Sampling is a non-probability sampling technique that is used when it is difficult
to find or reach specific people. In this method, existing participants refer the researcher to
other potential participants. It’s called “snowball” sampling because as more people are
recruited, the group of participants keeps growing, much like a snowball rolling down a hill
and getting bigger.
How it Works:
1. The researcher starts with one participant who meets the criteria for the study.
2. After collecting data from the first participant, the researcher asks them to recommend
others who also fit the criteria.
3. These new participants are then asked to refer more people, and the process continues,
creating a "snowball effect."
Example of Snowball Sampling:
Let’s say a researcher wants to study people who have experienced a rare mental health
disorder, and they know it’s difficult to find these people because the condition is not
common.
1. Step 1: The researcher starts with one person who has this rare mental health condition
and interviews them to learn about their experiences.
2. Step 2: After the interview, the researcher asks the first participant if they know anyone
else who also has this condition and would be willing to participate in the study.
3. Step 3: The first participant gives the names of other individuals with the same
condition. The researcher then interviews these new participants.
4. Step 4: After interviewing the second person, the researcher asks them for more
referrals, and the process repeats.
Through this method, the researcher is able to gather a larger sample of people with the rare
mental health condition even though they may be hard to find. The sample "snowballs"
because each participant helps recruit more participants.
Why Use Snowball Sampling?
• Hard-to-Reach Populations: It’s especially useful when studying hidden or hard-to-
reach groups, such as people with rare diseases, illegal drug users, or individuals from
specific social groups.
• Trust and Rapport: People who share similar experiences might be more willing to
participate if someone they know has already participated, creating a sense of trust and
comfort.
Example in Real Life:
• Researching Homelessness:
Imagine a researcher studying homelessness in a city. It’s hard to find homeless people
because they may not be in one place, or they may be distrustful of researchers. The
researcher starts by interviewing one homeless individual, and that person might refer
the researcher to others they know who are also homeless. This process continues,
helping the researcher find more participants.
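The referral process in Steps 1–4 can be sketched as a simple walk over a hypothetical referral network (all names and the network itself are invented; real snowball sampling depends on who participants actually refer):

```python
from collections import deque

def snowball_sample(referrals, seed, max_size):
    """Start from one participant and follow referrals until enough people are recruited."""
    recruited, queue, seen = [], deque([seed]), {seed}
    while queue and len(recruited) < max_size:
        person = queue.popleft()
        recruited.append(person)            # interview this participant
        for friend in referrals.get(person, []):
            if friend not in seen:          # add only people not yet contacted
                seen.add(friend)
                queue.append(friend)
    return recruited

# Hypothetical referral network: each person lists who they can refer.
network = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(snowball_sample(network, "A", 4))  # ['A', 'B', 'C', 'D']
```

Starting from the single seed "A", the sample grows through referrals, which is exactly the snowball effect described above.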
4. Quota Sampling
• What it is: The researcher ensures that certain groups (based on characteristics like age,
gender, etc.) are represented in the sample. Once the required number of participants
from each group is selected, the process stops.
• Example:
A researcher wants to make sure that a survey about political opinions includes both
men and women. They decide to select 50 men and 50 women for the study. The
researcher stops selecting participants once the quotas (50 men and 50 women) are
filled.
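The stop-when-quotas-are-filled rule can be sketched in Python (a hypothetical example; the arrival order and the small quota of 3 per group are invented for illustration):

```python
def quota_sample(arrivals, quotas):
    """Take participants as they arrive until each group's quota is filled."""
    filled = {group: [] for group in quotas}
    for person, group in arrivals:
        if len(filled[group]) < quotas[group]:
            filled[group].append(person)
        if all(len(filled[g]) == quotas[g] for g in quotas):
            break  # stop selecting as soon as every quota is met
    return filled

# Political-opinion survey: stop once 3 men and 3 women have been selected.
arrivals = [("P1", "men"), ("P2", "women"), ("P3", "men"), ("P4", "men"),
            ("P5", "men"), ("P6", "women"), ("P7", "women"), ("P8", "women")]
result = quota_sample(arrivals, {"men": 3, "women": 3})
print(result)  # {'men': ['P1', 'P3', 'P4'], 'women': ['P2', 'P6', 'P7']}
```

Notice that P5 is skipped because the men's quota is already full, and P8 is never reached because both quotas were filled at P7.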
• Convenience Sampling: Selects participants who are easiest to access. Example: surveying students in your own class because they are easy to reach.
These methods are quick and easy, but because participants are not randomly selected, the
results might not be as reliable or generalizable to the whole population.
2. Construct
• What it is: A construct is a more specific version of a concept that has been carefully
defined in a way that makes it possible to measure or observe. It’s a concept that has
been turned into something operational or measurable.
• Example:
If we use "happiness" as a concept, a construct of happiness could be how often
someone smiles, or how they rate their mood on a scale from 1 to 10. Now, the concept
(happiness) is turned into something measurable.
• Key Point: Constructs are defined so they can be measured or tested in research. They
make abstract concepts easier to study.
Simple Comparison:
• Concept: Cannot be directly measured.
• Construct: Can be measured in specific ways.
Summary:
• Concept = A broad idea or topic (like happiness or intelligence).
• Construct = A specific, measurable version of a concept (like smiling frequency to
measure happiness or IQ score to measure intelligence).
In research, we start with concepts and turn them into constructs to study and measure them
effectively.
Defining an operational definition
What is an Operational Definition?
An operational definition is a clear, specific description of how a concept or construct will
be measured or observed in a research study. It explains exactly what the researcher means
by a certain term and how they will measure it.
In simple words, it’s like giving clear instructions on how to turn a broad idea (like happiness,
intelligence, or stress) into something measurable that can be tested in research.
Example of Operational Definition:
• Concept: "Happiness"
• Operational Definition: "Happiness will be measured by asking participants to rate
their mood on a scale from 1 to 10, where 1 means 'very unhappy' and 10 means 'very
happy'."
Here, "happiness" is a broad idea (a concept), and the operational definition specifies exactly
how to measure it (by using a mood rating scale).
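As an illustration, the mood-scale operational definition above could be turned into a concrete measurement procedure (a hypothetical sketch, not a standardized instrument):

```python
def measure_happiness(mood_ratings):
    """Operational definition: happiness = mean self-rated mood on a 1-10 scale,
    where 1 means 'very unhappy' and 10 means 'very happy'."""
    if not all(1 <= r <= 10 for r in mood_ratings):
        raise ValueError("ratings must be on the 1-10 scale")
    return sum(mood_ratings) / len(mood_ratings)

# Four participants' self-reported mood ratings.
print(measure_happiness([7, 8, 6, 9]))  # 7.5
```

The point is that the vague concept "happiness" now has exact measurement instructions: collect 1-10 mood ratings and average them.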
Summary:
An operational definition helps researchers define abstract concepts in a clear, specific, and
measurable way. It is crucial because it ensures clarity, makes research consistent and
replicable, and turns vague ideas into measurable data, which is essential for scientific
studies.
Pseudoscience:
• What it is: Pseudoscience looks like science, but it isn’t based on real facts or
evidence. It's more about beliefs or ideas that are not tested in a proper way.
• How it works: Pseudoscience doesn’t use real experiments or evidence. It may claim
to know the truth, but it doesn’t have solid proof, and its ideas don’t change even when
new facts are discovered.
• Example:
Think about astrology. Astrology says that the stars and planets can control your life
and personality. But there’s no real evidence or proof that this is true. It’s just a belief,
and it doesn’t change based on real testing or facts.
Key Differences:
• Science: Ideas are tested and proven; it changes when new facts are found.
• Pseudoscience: Ideas are not tested properly; it doesn’t change, even if facts prove it wrong.
In short:
• Science uses facts and testing to find truth and is open to change when new facts come
up.
• Pseudoscience looks like science but doesn’t have real proof or experiments to back
up its claims. It stays the same, even when facts show it might be wrong.
Consequences of Science:
1. New discoveries that improve life (medicine, technology).
2. Better decisions based on facts and evidence.
3. Progress and improvement over time.
4. Trustworthy and reliable for society.
Consequences of Pseudoscience:
1. Wasted time and money on false ideas.
2. Health risks from unproven treatments.
3. False beliefs leading to confusion.
4. Slows progress and keeps people from discovering the truth.
2. Validity:
• What it is: Validity refers to how well a test actually measures what it is supposed to
measure. If a test is valid, it means that it measures exactly what it claims to measure,
not something else.
• Why it matters: A valid test accurately reflects the true concept it is intended to
measure. If a test is not valid, then even if it gives consistent results (reliable), those
results are not meaningful because the test isn't measuring the right thing.
• Example: Let’s say you are taking a math test. The test should measure your math
skills (like problem-solving and calculations). If the test includes many history
questions, even if you consistently do well, the test is not valid because it is not
measuring math skills; it is measuring your knowledge of history.
o Valid example: A test designed to measure intelligence should focus on
questions that actually measure problem-solving ability, reasoning, and
memory, not just general knowledge or luck.
• Conclusion: Validity is about whether the test or measurement measures what it is
supposed to measure.
3. Norms:
• What it is: Norms refer to the average performance or standard of a large group of
people who have taken the same test. These norms are used to compare an individual’s
score with the scores of others.
• Why it matters: Norms help you understand how well you did in comparison to other
people. Without norms, it's hard to know whether your score is good or bad.
• Example: Imagine you take a math test in school. Your score is 85 out of 100. To know
if this is a good score, you would compare it to the norms—which are the average
scores of all the students who took the same test.
o If the average score (norm) is 60 out of 100, then your score of 85 is above
average.
o If the average score is 90, your score of 85 is below average.
Norms help you compare your performance to others who took the same test.
• Conclusion: Norms provide the average score or standard used to compare an
individual's results with others.
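The comparison described above can be sketched in Python (a hypothetical helper, not a standardized scoring procedure):

```python
def compare_to_norm(score, norm_average):
    """Interpret an individual score relative to the group's average (the norm)."""
    if score > norm_average:
        return "above average"
    if score < norm_average:
        return "below average"
    return "average"

# The same math score of 85 means different things under different norms.
print(compare_to_norm(85, 60))  # above average
print(compare_to_norm(85, 90))  # below average
```

This makes the key point concrete: a score by itself is meaningless; it is the norm that gives it meaning.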
Summary:
• Reliability — What it is: consistency of results when the test is repeated. Why it matters: ensures that the test gives the same results every time. Example: a weighing scale giving the same weight every time.
• Validity — What it is: the test measures what it is supposed to measure. Why it matters: makes sure that the test is truly measuring what it claims to. Example: a math test measuring math skills, not history knowledge.
• Norms — What it is: the average scores of a group used for comparison. Why it matters: helps compare your score to others and see if it’s good or bad. Example: comparing your math test score with the average score of the class.
In short:
• Reliability: Consistency of results.
• Validity: Accuracy of what is being measured.
• Norms: Average scores used to compare individual performance.
These three concepts help ensure that the tests and measurements used in research or
education are useful, accurate, and fair.
Standardization of a Test:
Standardization is the process of making sure a test is fair and consistent for everyone who
takes it. This means all test-takers have the same instructions, conditions, and scoring methods.
Key Points about Standardization:
• Same Conditions: Every person who takes the test should have the same experience,
such as the same time limit, environment, and instructions.
• Same Scoring: The test should be scored in a clear, consistent way, so that everyone is
judged fairly.
• Test Norms: The results should be compared to the average scores of a large group of
people to ensure fairness.
Example:
Think about a school exam. If everyone takes the test in the same time frame, with the same
questions, and the same grading system, then it is standardized. This ensures that the test is
fair and that people are judged equally.
Principles of Good Research:
Mnemonic: "Ready Researchers Go Ethical & Safe!"
• Ready = Replicability (Research should be repeatable)
• Researchers = Reliability (Consistency in results)
• Go = Good research (Accurate, clear, and fair)
• Ethical = Ethical considerations (Respect and fairness for all participants)
• Safe = Systematic approach (Organized steps)
Fun Example:
Think of a team of explorers:
• They’re ready to test the experiment over and over in different places to check if it
works again (replicability).
• Their research is reliable, so they get the same results no matter where they go.
• They follow good practices to ensure the experiment is clear and accurate.
• They are ethical, treating everyone and everything with care.
• They have a safe, organized plan, so no one gets lost during the experiment!
Principles of Good Research:
Good research follows certain principles to ensure it is accurate, fair, and useful. These
principles guide how to design and conduct research in a reliable way.
a. Objectivity:
• What it means: Research should be unbiased and based on facts, not personal opinions
or feelings.
• Example: If a researcher is studying how diet affects health, they should only look at
the facts, not let personal opinions about certain diets influence the results.
b. Replicability:
• What it means: Good research should be repeatable. This means that if other
researchers do the same study, they should get the same or similar results.
• Example: If one researcher studies the effects of a new medicine and gets results, other
researchers should be able to follow the same steps and get similar results.
c. Validity:
• What it means: The research must measure what it is supposed to measure. It should
be clear and accurate in its approach.
• Example: If the research is about the effects of stress on health, it should measure stress
correctly, using reliable methods like surveys or heart rate measurements, not guessing
or making assumptions.
d. Ethical Considerations:
• What it means: Research should be ethical and fair, treating people and animals with
respect. It should not harm anyone.
• Example: If researchers are studying human behavior, they must get consent from
participants and ensure their privacy is respected.
e. Systematic Approach:
• What it means: Research should be organized and follow a clear, step-by-step process.
This helps gather information in an orderly way.
• Example: In a study on how exercise affects sleep, researchers should follow a specific
plan: choose participants, give them exercise routines, track their sleep, and analyze the
results.