Summary: Methodenleer (Research Methods)

Psychology Block I - Tilburg University


METHODENLEER

Book: Research Methods in Psychology


SUMMARY OF CHAPTER 1
Chapter 1: Psychology is a way of thinking
Psychologists = Empiricists
- Basing one’s conclusions on systematic observations
- Use evidence from the senses or from instruments that assist the senses

Research consumers = Read about research so they can apply it


- Need to read with curiosity, to tell whether the research is high-quality
- Could be crucial to your career
 Evidence-based treatments (therapies supported by research)
Research producers = Observe or analyse data, write up results and present
them
- Psychologists engage in both roles
- Share commitment to empiricism
 Answer questions with observations and communicate what they learn

5 ways psychologists approach their work:


1. Act as empiricists in their investigations
2. Test and revise theories through research
3. Follow norms in the scientific community that prioritize objectivity
4. Take an empirical approach to both applied and basic research
5. Make their work public

Theory-Data cycle = Scientists collect data to test, change or update theories.


 Asking specific questions and making predictions

Why do animals form such strong attachments?


Cupboard theory of mother-infant attachment:
- The mother is valuable because she is the source of food (the "cupboard")
Contact comfort theory (Harry Harlow)
- Babies are attached to their mothers because of the comfort of their warm
fuzzy fur.
Conclusion: Harlow's experiments with monkeys supported the contact comfort theory

Theory = Set of statements that describes general principles about how variables
relate to one another.
Hypothesis = Prediction
Data = Set of observations; the data may or may not match the hypothesis

A study does not prove a theory


A study’s data support or are (in)consistent with a theory

Weight of evidence = Collection of studies of the same theory

Falsifiability = A theory should lead to hypotheses that could fail to support the
theory.

Scientific norms (according to Robert Merton)


1. Universalism: Scientific claims are evaluated according to their merit,
independent of the researcher's reputation (i.e., the researcher's background or
previous achievements are not taken into account).
2. Communality: Scientific findings belong to the community.
3. Disinterestedness: Scientists strive to discover the truth, not swayed by
profit etc.
4. Organized skepticism: Scientists question everything, including their own
theories

Types of research:
Applied research:
- Done with a practical problem in mind.
- Researchers carry out their work in a real-world context
Basic research:
- Done to enhance the general body of knowledge
- The knowledge may be applied in real-world issues later on
Translational research:
- Use of lessons from basic research to develop and test applications for
treatment and intervention.
- From basic to applied research

Peer reviewers = Experts on the specific subject of the article


- Are kept anonymous
- Supposed to ensure that published articles contain important, well-done studies
Journalism = A secondhand report about the research, written by journalists

Mozart effect = An example of journalists misrepresenting research findings
(the claim that students who listened to Mozart before taking a test scored higher)

How can you prevent being misled by a journalist’s coverage of science?


1. Find the original source
2. Maintain a skeptical mindset when it comes to popular sources

SUMMARY OF CHAPTER 2
Chapter 2: Sources of information: Why research is best and how to find it?
Why experience is not a good source of information:
1. No comparison data
- In England, surgeons believed a radical mastectomy was the only good
treatment for breast cancer; they didn't compare its results to the results
of other treatments.
- Conclusion (once comparison data existed): other treatments were just as good as a radical mastectomy.
2. Experience is confounded (alternative explanations)
- You can't determine which factor(s) led to improvement.

Research > Experience:


Bushman's experiment:
- Made a group of students angry by harshly criticizing their essays.
- Divided them into 3 groups: one group sat in silence, one group punched a bag,
and one group punched a bag with the judge's face on it.
 The judge was a confederate: an actor who is directed by the
researcher to play a certain role in a research study.
- Conclusion: The group that got the chance to "punch the judge" still felt very
angry, so venting did not reduce anger.

Results of research are probabilistic: its findings do not explain all cases all the
time; they explain a certain proportion of cases.
Intuition: Using our hunches about what seems natural or logical.
 Intuition can be biased
5 examples of biased reasoning:
1. Being swayed by a good story
- Accepting a conclusion just because it makes sense
2. Being persuaded by what comes easily to mind (Availability heuristic)
- E.g. assuming that there are a lot of people with pink hair, just because
they stand out.
3. Present/present bias: We often fail to look for absences, because it is easy
to notice what is present.
4. Confirmation bias: The tendency to look only at information that agrees
with what we want to believe. (Hear what you want to hear)
5. Bias blind spot: The tendency to believe that we ourselves aren’t biased,
but others are.

Scientific sources:
1. Journal articles
- Written for an audience of other psychological scientists and psychology
students
- Empirical articles: Report results of an empirical research study
- Review journal articles: Summarize and integrate all published studies that
have been done in one research area
 May use a quantitative technique called meta-analysis: combining the results of
many studies into a single number that summarizes the magnitude, or
effect size, of a relationship (a simplified sketch follows this list).
2. Books and edited books
- Edited book: Different chapters written by different scientists
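
As an illustration of what a meta-analysis does with effect sizes, here is a simplified fixed-effect sketch in Python (my own example, not from the book): each study's correlation r is converted to Fisher's z, weighted by its sample size, averaged, and converted back to r.

```python
import math

def combine_correlations(results):
    """results: list of (r, n) pairs from studies of the same association."""
    # Convert each r to Fisher's z and weight it by n - 3 (larger studies count more).
    weighted = [(0.5 * math.log((1 + r) / (1 - r)), n - 3) for r, n in results]
    mean_z = sum(z * w for z, w in weighted) / sum(w for _, w in weighted)
    # Transform the average z back to a correlation: the summary effect size.
    return (math.exp(2 * mean_z) - 1) / (math.exp(2 * mean_z) + 1)

# Three hypothetical studies of the same relationship
print(round(combine_correlations([(0.30, 120), (0.25, 80), (0.40, 200)]), 2))
```

Real meta-analyses add refinements (confidence intervals, checks for publication bias), but the core idea is this kind of weighted averaging of effect sizes.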

Turning your question into a good database search:


1. Figure out the best search term for your question
2. Once you find a suitable article, do another search
3. Adjust your search using 'OR' and 'AND' (example below)

Good sources:
PsycINFO & Google Scholar
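
For example (a hypothetical search, not from the book), a PsycINFO or Google Scholar query about screen use and sleep might look like ("screen time" OR "smartphone use") AND (sleep OR insomnia): the OR group collects synonyms for one concept, and AND requires that both concepts appear in the results.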

Content of an empirical journal article:


1. Abstract: Summary of the article
- Describes hypotheses, method and major results.
2. Introduction: Explains the topic of the study
3. Method: Explains how the researchers conducted their study
- Participants, materials, procedure and apparatus.
4. Results: Results, including tests, tables and figures.
5. Discussion: Discuss the study’s importance
6. References: All the sources the authors cited

Read with a purpose, ask yourself:


What is the argument?
What is the evidence to support the argument?
How good is the study behind the story?
Is the story accurate?
Read critically, not cynically
Disinformation: The deliberate creation and sharing of information known to be
false.
Motives of disinformation:
- Propaganda, Provocation, Profit, Parody, Passion, Politics

SUMMARY OF CHAPTER 3
Chapter 3: Three claims, Four validities: Interrogation tools for consumers of
research
Variable: Something that varies
- Must have at least two levels or values
Constant: Something that could potentially vary but that has only one level in the
study in question.

Measured variable: Levels are simply observed and recorded


 E.g. height or IQ
Manipulated variable: Controlled, usually by assigning study participants to the
different levels of that variable
 E.g. giving participants 10/20/30 mg. of medication

Some variables can't be manipulated, because:


- They can only be measured (age or IQ)
- It would be unethical to do so

Construct/Conceptual variable: The name of the concept being studied, at the theoretical level.
Operational variable: The same variable as it is actually measured or manipulated in a study.
Operationalize: To turn a construct into a measured or manipulated variable.

Claim: An argument someone is trying to make. There are three types of claims:


Frequency claims: Describe a particular rate or degree of a single variable.
- Claims that mention the percentage of a variable, number of people or a
certain group’s level on a variable.
- Focus only on one variable
Association claim: One level of a variable is likely to be associated with a
particular level of another variable.
- The variables are sometimes said to correlate/covary: when one variable changes, the
other tends to change as well.
- Use language like: link, associate, correlate, predict, tie to and be at risk
for
- Correlational study: Study in which the variables are measured and the
relationship between them is tested.

Positive association: High goes with high and low goes with low
- Can be represented with scatterplots
Negative/Inverse association: High goes with low and low goes with high
Zero association: There is no association at all
 Positive and negative associations can help make predictions

Causal claim: One of the variables is responsible for changing the other.
- Use language like: cause, enhance, affect, decrease and change

From language of association to language of causality:


1. Establish that the two variables are correlated (covariance).
2. Show that the causal variable came first and the outcome variable
came later (temporal precedence).
3. Establish that no other explanations exist for the relationship (internal validity).

Validity: The appropriateness of a conclusion or decision


 We do not simply say a claim is 'valid'; we specify which of the four
validities applies.

The four big validities:


Construct validity: How well a conceptual variable is operationalized
- Researchers must establish that each variable has been measured reliably
and that different levels of a variable accurately correspond to true
differences.
External validity: How well the results of a study generalize to people or contexts
besides those in the original study.
- Generalizability: How did the researchers choose the participants, and how
well do they represent the intended population?

Statistical (conclusion) validity: The extent to which a study's statistical
conclusions (the results obtained from a sample) are precise, reasonable and
replicable.
- How well are the conclusions supported by the statistical analyses?
- Point estimate, usually a percentage
- The precision of that estimate is captured by the confidence interval (a range
designed to include the true population value a high proportion of the
time) or the margin of error of the estimate (which improves with larger samples);
see the sketch below.
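
To make the point estimate and margin-of-error idea concrete, here is a minimal Python sketch (my own illustration, assuming a simple survey percentage and the usual normal-approximation formula):

```python
import math

def proportion_estimate(successes, n, z=1.96):
    """Point estimate and 95% confidence interval for a proportion."""
    p = successes / n                          # point estimate, e.g. 0.45 = 45%
    margin = z * math.sqrt(p * (1 - p) / n)    # margin of error shrinks as n grows
    return p, (p - margin, p + margin)

# Hypothetical poll: 450 of 1,000 respondents agree -> about 45% +/- 3 percentage points
print(proportion_estimate(450, 1000))
```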

Internal validity: The extent to which we can rule out the possibility that other
variables are responsible for the change in the outcome variable.
- Only relevant for causal claims
- Was it really A that led to B, or was there a third variable?

Interrogating association claims:


Construct validity of association claims: Both variables should be well-measured.
External validity of association claims: Ask whether it can generalize to other
populations, contexts, times or places.
Statistical validity of association claims: How strong and precise the estimated
association is; it also considers other estimates of the same association.
 Strength: How strong is the estimated association?
 Precision: How precise is the estimate (e.g., its confidence interval)?

Three criteria for establishing causation:


Covariance: As A changes, B changes as well
Temporal precedence: The study’s method ensures that A comes first in time
Internal validity: No plausible alternative explanations for the change in B

Experiment: A study in which researchers manipulate the variable they
think is the cause and measure the variable they think is the effect.
Manipulated variable = Independent variable
Measured variable = Dependent variable

Random assignment: Assigning participants to the experimental groups at random,
so that the groups do not differ systematically from each other at the start.
Interrogating causal claims:
Construct validity of causal claims: How well was each variable manipulated or measured?
External validity of causal claims: Can the results generalize to other people
(e.g., children from other countries or of other ages) and contexts?
 Not always possible to achieve
Statistical validity of causal claims: Ask about the precision of the estimate

SUMMARY OF CHAPTER 4
Chapter 4: Ethical guidelines for psychology research
Historical examples of unethical research:
Tuskegee Syphilis Study
- Men with a deadly illness (syphilis) were observed until death.
- No beneficial treatment was given, only a dangerous spinal tap procedure.

Unethical choices, divided into three categories:


1. Not treated respectfully
- The researchers lied to them about why they were participating
2. Harmed
- They had to endure painful tests and didn’t get treatment
3. A disadvantaged social group was targeted
- They were all African American

Milgram’s Obedience Study


- Participants were required to force the learner's arm onto an electric plate.
- The shocks went up in voltage until the learner no longer responded.
- Participants had to go on because the experimenter told them to.
 Immediately afterward, the participants were debriefed: carefully informed
about the study's hypotheses.

Ethical principles:
Nuremberg Code: Influenced ethical research laws (drawn up as a result of WWII)
Declaration of Helsinki: Guides ethics in medical research
Belmont Report: Defines ethical guidelines researchers should follow
Three main principles:
1. Respect for persons
Two provisions:
- Individuals should be treated as autonomous agents
- Every participant is entitled to the precaution of informed consent (Each
person learns about the project, considers its risks and benefits and
decides whether to participate)
 Coercion: Implicit/Explicit suggestion that those who do not participate
will suffer a negative consequence.
- Some people have less autonomy, so they are entitled to special protection
when it comes to informed consent.

2. Beneficence
- Researchers carefully assess the risks and benefits of the study they plan
to conduct.
- They must consider how the community might benefit or be harmed.
Anonymous study: Researchers don’t collect any potentially identifying
information.
Confidential study: Researchers do collect some identifying information, but
prevent it from being disclosed.

3. Justice
- Researchers consider the extent to which the participants involved in a
study are representative of the kinds of people who would also benefit
from its results.

APA: Outlines 5 general principles for guiding individual aspects of ethical


behaviour
 Intended to protect research participants and students or clients
1. Beneficence: Treat people in ways that benefit them.
2. Fidelity and responsibility: Establish relationships of trust
3. Integrity: Strive to be accurate, truthful and honest
4. Justice: Strive to treat all groups fairly
5. Respect: Recognize that people are autonomous agents

APA has 10 ethical standards for psychologists


Standard 8 specifically addresses psychologists in their role as researchers
Institutional Review Boards (Standard 8.01):
- Committee responsible for interpreting ethical principles and ensuring that
research is conducted ethically.
- IRB panels include five or more people
 One must be a scientist
 One must have academic interests outside the sciences
 One must be a community member who has no ties to the institution
- Researchers must submit a detailed application, which must be reviewed
by the IRB.

Informed Consent (Standard 8.02):


- Researchers' obligation to explain the study to potential participants and
give them a chance to decide whether to participate.
- Not necessary in certain circumstances
 E.g. anonymous questionnaires, educational settings, etc.

Deception (Standard 8.07)


- The withholding of some details of a study from participants (deception
through omission) or the act of actively lying to them (deception through
commission)
- Deceiving research participants by lying to them is sometimes
necessary in order to obtain meaningful results.

Debriefing (Standard 8.08)


- Talking session after the study, in which researchers describe the nature of
the deception and explain why it was necessary.

Research misconduct:
Data fabrication (Standard 8.10): Researchers invent data that fit their
hypotheses.
Data falsification: Researchers influence a study’s results.
 Diederik Stapel (Tilburg University) was fired because he fabricated data in dozens of his
studies.
Openness and transparency:
- Two goals of psychological science violated by research misconduct
 Data has to be open
 Researchers have to report their process transparently

Plagiarism (Standard 8.11)


- Representing the ideas or words of others as one’s own
- A violation of ethics, because it's unfair for a researcher to take credit for
another person's intellectual property.
- To avoid it: cite your sources
- Researchers should also not self-plagiarize: repeat sentences from their own
previous research

Animal research (Standard 8.09)


Legal protection for laboratory animals
- Psychologists who use animals must care for them humanely
- Must use as few animals as possible
- Must be sure the research is valuable enough to justify using animal
subjects
AWA (Animal Welfare Act)
- Outlines standards and guidelines for the treatment of animals
- Mandates that research institutions have a local board called the
Institutional Animal Care and Use Committee
 Must contain a veterinarian
 A practicing scientist who is familiar with the goals and procedures of
animal research
 A member of the local community who is unconnected with the
institution
Animal Care Guidelines and the Three Rs
- Replacement: Researchers should find alternatives to animals when
possible
- Refinement: Researchers must modify experimental procedures to
minimize or eliminate animal distress
- Reduction: Researchers should adopt experimental designs and procedures
that require the fewest animal subjects possible

Animal researchers defend their use of animal subjects with 3 primary


arguments:
1. Animal research has resulted in numerous benefits to humans and animals
2. Animal researchers are sensitive to animal welfare
3. Researchers have successfully reduced the number of animals they need
to use because of new procedures.

SUMMARY OF CHAPTER 5
Chapter 5: Identifying Good Measurement
Conceptual definition of each variable: researcher’s definition of the variable in
question at a theoretical level.
Operational definition of each variable: researcher’s specific decision about how
to measure or manipulate the conceptual variable.

Operationalizing happiness:
1. Diener's scale
- 5 questions about well-being and a good life
- 7-point scale: 1 corresponds to 'strongly disagree' and 7 to 'strongly agree'
2. Gallup's Ladder of Life
- A single rating between 0 and 10 of how your life is going (see the sketch below)
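
As a small illustration of turning such self-report items into a score, here is a sketch (my own, not from the book) that assumes five 1-7 ratings, as in Diener's scale, and simply sums them:

```python
def life_satisfaction_score(ratings):
    """Sum of five 1-7 ratings; a higher total means higher reported satisfaction."""
    assert len(ratings) == 5 and all(1 <= r <= 7 for r in ratings)
    return sum(ratings)

# Hypothetical respondent who mostly agrees with the five statements
print(life_satisfaction_score([6, 5, 7, 6, 5]))  # -> 29 (possible range 5-35)
```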

Studying conceptual variables:


1. State a conceptual definition of the construct
2. Create an operational definition

3 Common types of measures


1. Self-report measures
- Operationalizes a variable by recording people’s answers to questions
about themselves in a questionnaire or interview.
- In research on children, these might be replaced with parent or teacher
reports.
2. Observational measures
- Operationalizes a variable by recording observable behaviors or physical
traces of behaviors.
- May record physical traces of behaviour.
3. Physiological measures
- Operationalizes a variable by recording biological data (such as brain
activity).
 Brain activity can be captured with, for example, EEG or fMRI.

The levels of operational variables can be coded using different scales of
measurement.
Categorical vs. quantitative variables:
- Categorical: values are categories (e.g., male/female)
- Quantitative: values are meaningful numbers (e.g., height and weight)

3 Types of quantitative variables


Ordinal scale:
- When the numerals of a quantitative variable represent a ranked order
(volgorde)
 Ranking how fast students worked doesn’t show how much faster the
exam was turned in.
Interval scale:
- When the numerals of a quantitative variable meet two conditions:
1. The numerals represent equal intervals between levels
2. There is no true zero (The 0 score does not literally mean nothing)
 For example an IQ-test
Ratio scale:
- When the numerals of a quantitative variable have equal intervals and the
value of 0 truly means ‘none’ of the variable being measured.
 How many people answer correctly on a test.

2 aspects of construct validity


Reliability: How consistent the results of a measure are
Validity: Whether the operationalization is measuring what it is supposed to
measure

3 types of reliability
Test-retest reliability: Study participant will get pretty much the same score each
time they are measured with it.
- Most relevant when researchers are measuring constructs that are
theoretically stable.
Interrater reliability: Consistent scores are obtained no matter who measures the
variable.
- Most relevant for observational measures
 Two different observers will come up with consistent findings
Internal reliability: Study participant gives a consistent pattern of answers, no
matter how the question is phrased.
- Applies to measures that combine multiple items.

Correlation coefficient: Single number, called r, to indicate how close the points
on a scatterplot are to a line drawn through them.
- Strong when the points are close to the line; r is closer to -1 or 1
- Weak when the dots are spread out; r is closer to zero
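
A minimal Python sketch of computing r for paired scores (my own illustration; numpy's corrcoef implements the Pearson correlation):

```python
import numpy as np

# Hypothetical paired scores: hours of sleep and mood rating for six people
sleep = np.array([5, 6, 6, 7, 8, 9])
mood  = np.array([3, 4, 5, 5, 6, 7])

r = np.corrcoef(sleep, mood)[0, 1]  # Pearson correlation coefficient
print(round(r, 2))                  # close to +1, so a strong positive association
```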

Cronbach’s alpha: A correlation-based statistic that measures a scale’s internal


reliability.
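
The standard formula is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score). A sketch in Python (my own illustration; rows are respondents, columns are items of the same scale):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with one row per respondent and one column per scale item."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each separate item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from four respondents on three related items
print(round(cronbach_alpha([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3]]), 2))
```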

Construct validity: Important when a construct is not directly observable


 Collecting a variety of data gives us evidence

Face validity: Whether a measure is subjectively considered to be a plausible
operationalization of the conceptual variable in question.
Content validity: A measure must capture all parts of a defined construct
 E.g., if the definition of intelligence contains seven distinct components, then to have
adequate content validity, any operationalization of intelligence should
include questions to assess each of the seven components.
Criterion validity: Evaluates whether the measure under consideration is
associated with a concrete behavioral outcome that it should be associated with.
- Important for self-report measures, because the correlation can indicate how
well people's self-reports predict their actual behavior.
- In other words, criterion validity checks whether scores on the measure line up
with the observable behavior they should be related to.
Known-group paradigm:
- Researchers see whether scores on the measure can discriminate among
two or more groups whose behaviour is already known.

Convergent validity: An empirical test of the extent to which a self-report


measure correlates with other measures of a theoretically similar construct.
Discriminant validity: An empirical test of the extent to which a self-report
measure does not correlate strongly with measures of theoretically dissimilar
constructs.

A measure cannot be more valid than it is reliable


Reliability is necessary, but not sufficient for validity.
 A measure can be reliable but not valid, but it cannot be valid if it is
unreliable.

SUMMARY OF CHAPTER 6
Chapter 6: Surveys and Observations: Describing what people do
Survey/Poll: A method of posing questions to people online, in personal interviews
or in written questionnaires.
Several formats of survey questions:
Open-ended questions: Allow respondents to answer the way they like.
- Provide spontaneous, rich information.
- Responses must be coded and categorized, which is time-consuming.
Forced-choice questions: Respondents give their opinion by picking the best of
two or more options.
- Used in political polls or to measure personality.
- Yes/No questions are also forced-choice questions.
Likert(-type) scale: A statement to which people respond on a scale:
- strongly agree, agree, neither agree nor disagree, disagree, or strongly
disagree.
Semantic differential format: Rating a target object with a numeric scale that is
anchored with adjectives.
- E.g., five-star internet rating sites

The way questions are worded and the order in which they appear also
matter:
 Each question should be clear and straightforward.
 By asking a question both ways, researchers can check the items'
internal consistency.

Leading questions: The wording leads people to a particular response.


Double-barreled question: Asks two questions in one.
Negatively worded questions: Negative phrasing, which causes confusion.

Different question order can influence people’s answers


Solution: Preparing different versions of a survey

Self-report pros:
- It can lead to meaningful answers about one's own experiences that can't be
observed
Self-report cons:
Response sets: Types of shortcuts people take when answering survey questions
 Fence sitting: Playing it safe by answering in the middle of the scale
 Solution: Taking away the neutral option.
Acquiescence: When people say ‘yes’ or ‘strongly agree’ to every item instead of
thinking carefully about each one.

Reverse-wording items: Changing the wording of some items to mean their


opposite.
- Might slow people down so they answer more carefully.
- Sometimes it results in negatively worded items, which are more difficult
to answer.
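
A tiny sketch of the usual re-coding step for reverse-worded items (my own illustration): on a 1-5 agreement scale, a response r becomes 6 - r (more generally, high + low - r), so that a high score means the same thing on every item before they are combined.

```python
def reverse_score(response, low=1, high=5):
    """Re-code a reverse-worded item so that high scores point in the same direction."""
    return high + low - response

print([reverse_score(r) for r in [1, 2, 3, 4, 5]])  # -> [5, 4, 3, 2, 1]
```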

Socially desirable responding/Faking good: Respondents give answers that make


them look better than they really are.
- Often because they are shy or worried about giving an unpopular opinion.
Faking bad: The opposite

The Implicit Association Test: Asks people to respond quickly to positive and
negative words on the right and left of a computer screen.
- Intermixed with the words are instances of different social groups.
Self-reporting memories of events:
- Vividness and confidence are unrelated to how accurate the memories
actually are.
- People’s feelings of confidence in their memories do not inform us about
their accuracy.

Observational research: When a researcher watches people or animals and records
how they behave or what they are doing.
- Can be basis for frequency claims
- Can be used to operationalize variables in association claims and causal
claims.

How observational methods have been used to answer research questions in


psychology:
1: Mehl et al.
- Study participants wore a small recording device to measure how many
words they spoke per day.
- The difference between men and women was only about 3%.
2: Franchak et al.
- Investigated where babies and caregivers look.
- Babies spent much more time looking at toys than at the parent, whereas
caregivers looked equally at the toys and their baby.
3: Campos et al
- Observed the emotional tone of parents and the topics of conversation
during dinner.
- The emotional tone was around 4.2 (neutral), and kids expressed distaste at the
food.

Benefits of behavioral observation:


- Observations can tell a more accurate story than self-reports.
- A good way to operationalize a variable.

Construct validity can be threatened by 3 problems:


1: Observer bias: When observers see what they expect to see
- Observers rate behaviour according to their own expectations or
hypotheses.
2: Observer effects: When participants confirm observer expectations
3: Reactivity: When participants react to being watched

How to prevent observer bias and effects:


- Develop clear instructions (codebooks), so observers can make reliable
judgments.
- Masked/Blind design: Observers are unaware of the purpose of the study
and the conditions to which participants have been assigned.
How to prevent reactivity:
1: Blend in
- Make unobtrusive observations, make yourself less noticeable.
2: Wait it out
- Wait until the people you're observing get used to it and forget they're being
watched.
3: Measure the behavior’s results
- Measure the traces a particular behavior leaves behind.
SUMMARY OF CHAPTER 7
Chapter 7: Sampling: Estimating the frequency of behaviors and beliefs

Population: The entire set of people or products in which you are interested. (E.g.
a bag of chips)
Sample: Smaller set, taken from that population. (One chip)
 If you tasted every chip in the bag, you would be conducting a census.

To create a good sample, we need to determine the population we're interested in.
Just because a sample comes from a population, does not mean it generalizes to
that population.

Biased/Unrepresentative sample: Some members of the population of interest


have a much higher probability than other members of being included in the
sample.
Unbiased/Representative sample: All members of the population have an equal
chance.
A researcher’s sample might contain too many of the most unusual people.

A sample could be biased in 2 ways:


1: Sampling only those who are easy to contact (convenience sampling)
- People who participate in online research for payment
- When researchers are unable to contact an important group of people
2: Sampling only those who volunteer (self-selection)
- For example, people who rate items on internet review sites.

Probability/Random sampling: Every member of the population of interest has an


equal and known chance of being selected for the sample.
- Excellent external validity and can generalize to the population of interest.

Forms of random sampling:


1: Simple random sampling
- For example, using a website that generates a list of random numbers; the
randomizer determines which individuals from the population end up in the sample.
2: Systematic sampling
- A probability sampling technique in which the researcher uses a randomly
chosen number N and counts off every Nth member of the population.
 These two can be difficult in practice
3: Cluster sampling
- Clusters of participants within a population of interest are randomly
selected and then all individuals in each selected cluster are used.
4: Multistage sampling
- A random sample of clusters and then a random sample of people within
those clusters are chosen.
5: Stratified random sampling
- Researcher purposefully selects particular demographic categories and
then randomly selects individuals within each of the categories.
 All these methods can be combined (see the sketch below)
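
A minimal Python sketch of three of these techniques (my own illustration, assuming a simple list of people with a known group label; the group sizes and sample sizes are made up):

```python
import random

population = [f"person_{i}" for i in range(100)]  # hypothetical sampling frame
group_of = {p: ("groupA" if i < 70 else "groupB") for i, p in enumerate(population)}

# 1. Simple random sampling: every member has an equal chance of selection
simple = random.sample(population, 10)

# 2. Systematic sampling: random starting point, then every Nth member
n = 10
start = random.randrange(n)
systematic = population[start::n]

# 5. Stratified random sampling: random selection within each demographic category
strata = {}
for person, group in group_of.items():
    strata.setdefault(group, []).append(person)
stratified = [p for members in strata.values() for p in random.sample(members, 5)]

print(len(simple), len(systematic), len(stratified))  # 10 10 10
```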

Stratified random sampling vs. cluster sampling


- Strata are meaningful categories, whereas clusters are more arbitrary.
- Final sample sizes of the strata reflect their proportion in the population,
whereas clusters are not selected with such proportions in mind.

Oversampling: Researcher intentionally overrepresents one or more groups.

Random: Occurring without any order or pattern


Random sampling: Researchers create a sample using some random method
 Enhances external validity
Random assignment: The use of a random method to assign participants into
different experimental groups.
 Enhances internal validity
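
For contrast, a matching sketch of random assignment (again my own illustration): a sample that has already been recruited, however it was obtained, is split into experimental groups at random.

```python
import random

sample = [f"participant_{i}" for i in range(20)]  # the recruited sample
random.shuffle(sample)                            # put participants in a random order
group_1, group_2 = sample[:10], sample[10:]       # two equal experimental groups
print(len(group_1), len(group_2))                 # 10 10
```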

Nonprobability sampling techniques: When samples are difficult to obtain


Convenience sampling: Sampling only those who are easy to contact
Purposive sampling: A biased sampling technique in which only certain kinds of
people are included in a sample.
Snowball sampling: Participants are asked to recommend a few acquaintances
Quota sampling: Researcher identifies subsets of the population of interest and
then sets a target number for each category.

When it comes to the external validity of the sample, it's how, not how many.

Nonprobability sampling: Involves non-random sampling and results in a biased
sample.

SUMMARY OF CHAPTER 8
Chapter 8: Bivariate correlational research
Association claim: Describes a relationship between two measured variables
Association verbs: linked, associated with, etc. (not 'causes' or 'makes happen')
Bivariate correlation/association: An association that involves exactly two
variables.

Correlation is determined by using a scatterplot to obtain r.
r has two qualities:
Direction: Whether the association is positive, negative or zero
Strength: How closely related the two variables are
 The more closely related, the closer r will be to 1 or -1
 A negative r means that more of variable A goes with less of variable B, and
vice versa.

Categorical variable: Its values fall in either one category or another.
Quantitative variable: Its values are meaningful numbers.

A study with all measured variables = correlational study.
If one of the variables is manipulated = experiment, which can most likely support
a causal claim.

Bar graph: Used instead of a scatterplot when one variable is categorical.

4 validities in the context of association claims:
Construct validity: How well was each variable measured?
- Ask about the measures: do they have good reliability, do they measure
what they're intended to measure, etc.

Statistical validity: How well do the data support the conclusion?
- How strong is the relationship between the two variables? (Effect size)
 A larger effect size is often considered more important
 But when a tiny effect size is aggregated over many people or situations, it
can still have an important impact
- How precise is the estimate?
 Statistically significant: when p < 0.05, i.e., it is unlikely that the result
came from the null-hypothesis population
- Has it been replicated?
- Could outliers (extreme scores) be affecting the association?
 Outliers can be problematic because they may exert disproportionate
influence
- Is there restriction of range? (If there is not a full range of scores on one of
the variables in the association, the correlation can appear smaller
than it really is)
- Is the association curvilinear? (The relationship between the two variables is not a
straight line)

What do researchers do when they suspect restriction of range?
- A study with the full range of scores could obtain the true correlation between the two variables.
- Alternatively, use a statistical technique: correction for restriction of range.
 Estimates the full set of scores based on what we know.
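
One widely used version of this correction is Thorndike's Case 2 formula; a hedged sketch (my own, not from the book), assuming you know the standard deviation of the predictor in the restricted sample and in the unrestricted population:

```python
import math

def correct_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Estimate the full-range correlation from a range-restricted correlation."""
    k = sd_unrestricted / sd_restricted  # how much wider the full range of scores is
    return (r_restricted * k) / math.sqrt(1 - r_restricted**2 + (r_restricted**2) * k**2)

# Hypothetical: r = .30 in a restricted sample whose SD is half the population SD
print(round(correct_restriction(0.30, 10, 5), 2))  # the corrected r is larger (~.53)
```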

To establish causation, a study must satisfy 3 criteria (chapter 3):


1. Covariance of cause and effect
2. Temporal precedence (if not established: the directionality problem)
 We don't know which variable came first
3. Internal validity (if not established: the third-variable problem)
 When we can come up with an alternative explanation for the association
between the two variables

Spurious association: The bivariate correlation is there, but only because of some
third variable.

External validity: To whom can the association be generalized?

Moderator: When the relationship between two variables changes depending on
the level of another variable.
