CH8: Recruitment and Selection Recruitment: Desirable Characteristics of Recruiters
Recruitment
The process of encouraging potentially qualified applicants to seek employment with a
particular company
• Affected by the organization’s reputation and organizational values
Efficiency of selection system is limited by effectiveness of recruitment
Many sources can be utilized:
• Newspaper classifieds, Internet (including aesthetic and usability factors of the
organization’s website), newsletters, campus career centers, employee referrals
Transition to Selection
• Involves making applicants aware of
• Next steps in hiring process
• Selection methods used and instructions
• Expectations and requirements
Nature of Predictors
Test – Systematic procedure for observing behavior and describing it with the aid of
numerical scales
• Tests = predictors = assessments
Content
• Sign: A predisposition thought to relate to performance (e.g., personality)
• Sample: Observing behavior thought to relate to performance
• Criterion: Actual measure of prior performance
Form
• Speed vs. power: How many versus what level
• Paper / pencil vs. performance: Test in writing or in behavior
• Objective vs. essay: Much like multiple-choice vs. essay course exam questions
• Oral vs. written vs. computer: How data are obtained
• Individual vs. Group: administered to one examinee at a time vs. to many at once
• Aptitude vs. Proficiency (future potential vs. current level)
Measurement
• Measurement – Assignment of numbers to objects or events in such a way as to represent
specified attributes of the objects
• Attribute – Dimension along which individuals can be measured and along which they vary
• Measurement Error – unsystematic influences that cause observed scores to deviate from the
attribute we intend to measure (see the note after this list)
• Because of measurement error, we must carefully consider two important measurement
concerns:
o Reliability
o Validity
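One standard way to formalize measurement error, not spelled out in the notes above but consistent with them, is the classical test theory decomposition: every observed score is a true score plus random error, and reliability is the share of observed variance that is true-score variance.

$$ X = T + E, \qquad \text{reliability} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)} $$

Under this view, the larger the error component E, the lower the reliability of the measure.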
Reliability
The consistency or stability of a measure
• It is imperative that a predictor be measured reliably
• Unsystematic measurement error renders a measure unreliable
• We cannot predict attitudes, performance, or behaviors without reliable
measurement – reliability places an upper limit on validity (see the formula below)
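The claim that reliability limits validity can be made concrete with the standard attenuation formula (an addition here, not part of the original notes): the observed predictor–criterion correlation is the true correlation shrunk by the unreliability of both measures,

$$ r_{xy}^{\text{obs}} = r_{xy}^{\text{true}} \sqrt{r_{xx}\, r_{yy}} \;\le\; \sqrt{r_{xx}\, r_{yy}}, $$

where $r_{xx}$ and $r_{yy}$ are the reliabilities of the predictor and the criterion. An unreliable predictor therefore caps how valid it can ever appear to be.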
Test-Retest Reliability
• Test-Retest – reflects consistency of a test over time
• Stability coefficient – the correlation between the two administrations
• Administer the test at time 1 and time 2 and check whether individuals keep a similar rank
order at both administrations (a minimal computation sketch follows this list)
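A minimal sketch of how a stability coefficient could be computed; the applicant scores below are made up for illustration:

```python
import numpy as np

# Hypothetical scores for the same five applicants tested at two points in time.
time1 = np.array([82, 74, 91, 65, 78])
time2 = np.array([85, 70, 93, 62, 80])

# The stability coefficient is the correlation between the two administrations;
# values near 1.0 mean applicants keep roughly the same rank order over time.
stability = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest (stability) coefficient: {stability:.2f}")
```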
Inter-Rater Reliability
Inter-Rater – extent to which multiple raters or judges agree on ratings made about a
person, thing, or behavior
• Examine the correlation between the ratings of two different judges rating the same
person (see the sketch below)
• Helps protect against interpersonal biases
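The notes describe correlating two judges' ratings; the sketch below (with hypothetical ratings and judge names) extends the same idea to the average correlation across every pair of three judges:

```python
import numpy as np
from itertools import combinations

# Hypothetical 1-5 ratings given by three judges to the same six candidates.
ratings = {
    "judge_A": np.array([4, 3, 5, 2, 4, 3]),
    "judge_B": np.array([5, 3, 4, 2, 4, 2]),
    "judge_C": np.array([4, 2, 5, 3, 5, 3]),
}

# One simple inter-rater reliability index: the mean correlation across
# every pair of judges rating the same people.
pairwise = [np.corrcoef(ratings[a], ratings[b])[0, 1]
            for a, b in combinations(ratings, 2)]
print(f"Mean inter-rater correlation: {np.mean(pairwise):.2f}")
```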
Validity
Construct Validity – extent to which a test measures the underlying construct it was
intended to measure
• Construct – an abstract quality that is not directly observable and is difficult to measure
• Examples: self-esteem, intelligence, cognitive ability
Validity – Overview
• Two types of evidence used to demonstrate Construct Validity
• Content Validity – degree to which a test covers a representative sample of the quality being
assessed
o Not established in a quantitative sense
• Criterion-Related Validity – degree to which a test is a good predictor of attitudes,
behavior, or performance
• Face Validity – degree to which a test appears valid at face value (looking at it)
o Not established in a quantitative sense
Approaches to Criterion-Related Validity
• Predictive Validity – extent to which scores obtained at one point in time predict criteria at
some later time.
o GREs, GPAs, research experience, etc. predicting success in graduate school
• Concurrent Validity – extent to which a test predicts a criterion that is measured at same
time as test
o Want to see if newly developed selection tests predict performance of current
employees
Predictive Designs
• Gather predictor data on all of the applicants.
• Some of the applicants are hired to fill the open positions – based on predictors that
are not part of our selection battery.
o If we hired only high scorers on the new predictors, we would not be able to examine
whether low scorers are unsuccessful on the job.
• After months on the job, we gather performance data, which serve as the criteria.
• A validity coefficient, which indicates the strength of the relationship, is computed between
the predictor scores and the criterion scores.
Concurrent Designs
• Data on both predictors and criteria are collected from incumbent employees at the same
time.
• A validity coefficient, which indicates the strength of the relationship, is computed between
the predictor scores and the criterion scores (a computation sketch for either design follows).
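In either design, the validity coefficient is simply the correlation between predictor scores and criterion scores. A minimal sketch with made-up data:

```python
import numpy as np

# Hypothetical predictor scores (collected at hire in a predictive design, or from
# incumbents in a concurrent design) paired with performance ratings as the criterion.
predictor = np.array([55, 62, 70, 48, 66, 59, 73, 51])
criterion = np.array([3.1, 3.4, 4.2, 2.8, 3.9, 3.3, 4.5, 3.0])

# The validity coefficient indicates the strength of the predictor-criterion relationship.
validity = np.corrcoef(predictor, criterion)[0, 1]
print(f"Validity coefficient: {validity:.2f}")
```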
Components of Construct Validity
• Convergent Validity – Measure is related to other measures of similar constructs
• Divergent Validity – Measure is not related to measures of dissimilar constructs
• These are demonstrated by using concurrent and/or predictive validity designs (see the sketch below)
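A sketch of what convergent and divergent evidence might look like in practice, using made-up scores on a new self-esteem measure, an established self-esteem scale (similar construct), and a typing-speed test (dissimilar construct):

```python
import numpy as np

# Hypothetical scores for the same eight people on three measures.
new_measure   = np.array([30, 25, 40, 35, 28, 38, 22, 33])
similar_scale = np.array([32, 27, 41, 33, 30, 37, 24, 31])  # established self-esteem scale
typing_speed  = np.array([61, 75, 58, 70, 66, 72, 64, 69])  # unrelated skill

# Convergent evidence: high correlation with the measure of the similar construct.
# Divergent evidence: low correlation with the measure of the dissimilar construct.
convergent = np.corrcoef(new_measure, similar_scale)[0, 1]
divergent  = np.corrcoef(new_measure, typing_speed)[0, 1]
print(f"Convergent r = {convergent:.2f}, divergent r = {divergent:.2f}")
```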
Types of Predictors
Types of Interviews
• Structured – Series of job analysis-based questions which are asked of all job candidates in
the same way and scored on the same scale
o Increases the reliability of the process and allows a fairer comparison of applicants
• Unstructured – Constructed without thought to consistency in questioning
o Not as useful as Structured because of lack of consistency
o May provide more detailed information by leading the discussion in directions that were
not planned in advance
• Purpose: provide information about the applicant to the interviewer and about the job and
organization to the applicant, and create a positive attitude toward the organization
Structured Interviews
• Questions based on job analysis
• Same questions asked of each candidate
• Detailed anchored rating scales used to numerically score each response
• Detailed notes taken, focusing on interviewees’ behaviors
• Situational - Assess applicant’s ability to project his / her behaviors to future situations.
Assumes the person’s goals/intentions will predict future behavior (validity averages .35)
• Experience-based - Assess past behaviors that are linked to prospective job. Assumes past
performance will predict future performance (validity averages .28)
• Train interviewers
Final selection
• HR compiles the files of the candidates above the threshold – the finalists (either ranked or not)
• Manager of the department with the job opening makes final decision
• Rejecting applicants: be honest, be polite
o Maintain the person’s self-concept
o Maintain goodwill towards the organization
o Let the person know, in unambiguous terms, that they have been rejected