Learning
By Mark E. Bouton
University of Vermont
Learning Objectives
Distinguish between classical (Pavlovian) conditioning and instrumental (operant)
conditioning.
Understand some important facts about each that tell us how they work.
Understand how they work separately and together to influence human behavior in the
world outside the laboratory.
List the four aspects of observational learning according to Social Learning Theory.
We now believe that the learning process Pavlov first described, classical conditioning, is engaged, for example, when humans associate
a drug they’ve taken with the environment in which they’ve taken it; when they associate a
stimulus (e.g., a symbol for vacation, like a big beach towel) with an emotional event (like a
burst of happiness); and when they associate the flavor of a food with getting food poisoning.
Although classical conditioning may seem “old” or “too simple” a theory, it is still widely
studied today for at least two reasons: First, it is a straightforward test of associative learning that
can be used to study other, more complex behaviors. Second, because classical conditioning is
always occurring in our lives, its effects on behavior have important implications for
understanding normal and disordered behavior in humans.
In a general way, classical conditioning occurs whenever neutral stimuli are associated with
psychologically significant events. With food poisoning, for example, although having fish for
dinner may not normally be something to be concerned about (i.e., a "neutral stimulus"), if it
causes you to get sick, you will now likely associate that neutral stimulus (the fish) with the
psychologically significant event of getting sick. These paired events are often described using
terms that can be applied to any situation.
The dog food in Pavlov’s experiment is called the unconditioned stimulus (US) because it
elicits an unconditioned response (UR). That is, without any kind of “training” or “teaching,”
the stimulus produces a natural or instinctual reaction. In Pavlov’s case, the food (US)
automatically makes the dog drool (UR). Other examples of unconditioned stimuli include loud
noises (US) that startle us (UR), or a hot shower (US) that produces pleasure (UR).
Another example you are probably very familiar with involves your alarm clock. If you’re like
most people, waking up early usually makes you unhappy. In this case, waking up early (US)
produces a natural sensation of grumpiness (UR). Rather than waking up early on your own,
though, you likely have an alarm clock that plays a tone to wake you. Before setting your alarm
to that particular tone, let’s imagine you had neutral feelings about it (i.e., the tone had no prior
meaning for you). However, now that you use it to wake up every morning, you psychologically
“pair” that tone (CS) with your feelings of grumpiness in the morning (UR). After enough
pairings, this tone (CS) will automatically produce your natural response of grumpiness (CR).
Thus, this linkage between the unconditioned stimulus (US; waking up early) and the
conditioned stimulus (CS; the tone) is so strong that the unconditioned response (UR; being
grumpy) will become a conditioned response (CR; e.g., hearing the tone at any point in the day
—whether waking up or walking down the street—will make you grumpy). Modern studies of
classical conditioning use a very wide range of CSs and USs and measure a wide range of
conditioned responses.
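To make the idea of repeated pairings concrete, here is a minimal sketch in Python (not part of the original module) of how the strength of a CS-US association might grow with each pairing. The simple learning-rate update and the parameter values are illustrative assumptions, not a specific model from the text.

```python
# Illustrative sketch (assumption, not from the module): associative strength
# between a CS (e.g., the alarm tone) and a US (waking up early) growing
# across repeated pairings. Learning rate and asymptote are made-up values.

learning_rate = 0.3   # how quickly the association is acquired
asymptote = 1.0       # maximum strength the US can support
strength = 0.0        # no association before the first pairing

for pairing in range(1, 11):
    # Each CS-US pairing closes part of the gap between current and maximum strength.
    strength += learning_rate * (asymptote - strength)
    print(f"Pairing {pairing:2d}: associative strength = {strength:.2f}")
```

After only a handful of pairings the strength approaches its maximum, which is one way to picture why the tone alone eventually comes to elicit the conditioned response.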
Receiving a reward can condition you toward certain behaviors. For example, when you were a
child, your mother may have offered you this deal: "Don't make a fuss when we're in the
supermarket and you'll get a treat on the way out." [Image: Oliver
Hammond, https://goo.gl/xFKiZL, CC BY-NC-SA 2.0, https://goo.gl/Toc0ZF]
Although classical conditioning is a powerful explanation for how we learn many different
things, there is a second form of conditioning that also helps explain how we learn. First studied
by Edward Thorndike, and later extended by B. F. Skinner, this second type of conditioning is
known as instrumental or operant conditioning. Operant conditioning occurs when
a behavior (as opposed to a stimulus) is associated with the occurrence of a significant event. In
the best-known example, a rat in a laboratory learns to press a lever in a cage (called a “Skinner
box”) to receive food. Because the rat has no “natural” association between pressing a lever and
getting food, the rat has to learn this connection. At first, the rat may simply explore its cage,
climbing on top of things, burrowing under things, in search of food. Eventually while poking
around its cage, the rat accidentally presses the lever, and a food pellet drops in. This voluntary
behavior is called an operant behavior, because it “operates” on the environment (i.e., it is an
action that the animal itself makes).
Now, once the rat recognizes that it receives a piece of food every time it presses the lever, the
behavior of lever-pressing becomes reinforced. That is, the food pellets serve as reinforcers
because they strengthen the rat’s desire to engage with the environment in this particular manner.
In a parallel example, imagine that you’re playing a street-racing video game. As you drive
through one city course multiple times, you try a number of different streets to get to the finish
line. On one of these trials, you discover a shortcut that dramatically improves your overall time.
You have learned this new path through operant conditioning. That is, by engaging with your
environment (operant responses), you performed a sequence of behaviors that was positively
reinforced (i.e., you found the shortest distance to the finish line). And now that you’ve learned
how to drive this course, you will perform that same sequence of driving behaviors (just as the
rat presses on the lever) to receive your reward of a faster finish.
Operant conditioning research studies how the effects of a behavior influence the probability that
it will occur again. For example, the effects of the rat’s lever-pressing behavior (i.e., receiving a
food pellet) influence the probability that it will keep pressing the lever. According to
Thorndike’s law of effect, when a behavior has a positive (satisfying) effect or consequence, it is
likely to be repeated in the future. However, when a behavior has a negative (painful/annoying)
consequence, it is less likely to be repeated in the future. Effects that increase behaviors are
referred to as reinforcers, and effects that decrease them are referred to as punishers.
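The law of effect can be pictured as a simple probability update. The following sketch (a Python illustration, not drawn from the text) nudges the likelihood of a behavior up after a reinforcer and down after a punisher; the function name, step size, and starting probability are all assumptions made for the example.

```python
# Illustrative sketch (assumption, not from the module): the law of effect as
# a probability update. Reinforced behaviors become more likely; punished
# behaviors become less likely. Numbers are arbitrary.

def update_probability(p, consequence, step=0.1):
    """Adjust the probability of repeating a behavior given its consequence."""
    if consequence == "reinforcer":
        return min(1.0, p + step * (1.0 - p))  # satisfying effect: more likely
    if consequence == "punisher":
        return max(0.0, p - step * p)          # annoying effect: less likely
    return p                                   # neutral outcome: no change

p_lever_press = 0.05  # the rat rarely presses the lever at first
for _ in range(20):   # twenty reinforced presses
    p_lever_press = update_probability(p_lever_press, "reinforcer")
print(f"Probability of pressing after 20 reinforced trials: {p_lever_press:.2f}")
```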
An everyday example that helps to illustrate operant conditioning is striving for a good grade in
class—which could be considered a reward for students (i.e., it produces a positive emotional
response). In order to get that reward (similar to the rat learning to press the lever), the student
needs to modify their behavior. For example, the student may learn that speaking up in class
earns participation points (a reinforcer), so the student speaks up repeatedly. However, the
student also learns not to speak up about just anything; talking about topics
unrelated to school actually costs points. Therefore, through these freely chosen
behaviors, the student learns which behaviors are reinforced and which are punished.
An important distinction of operant conditioning is that it provides a method for studying how
consequences influence “voluntary” behavior. The rat’s decision to press the lever is voluntary,
in the sense that the rat is free to make and repeat that response whenever it wants. Classical
conditioning, on the other hand, is just the opposite—depending instead on “involuntary”
behavior (e.g., the dog doesn’t choose to drool; it just does). So, whereas the rat must actively
participate and perform some kind of behavior to attain its reward, the dog in Pavlov’s
experiment is a passive participant. One of the lessons of operant conditioning research, then, is
that voluntary behavior is strongly influenced by its consequences.
[Image courtesy
of Bernard W. Balleine]
The illustration above summarizes the basic elements of classical and instrumental conditioning.
The two types of learning differ in many ways. However, modern thinkers often emphasize the
fact that they differ—as illustrated here—in what is learned. In classical conditioning, the animal
behaves as if it has learned to associate a stimulus with a significant event. In operant
conditioning, the animal behaves as if it has learned to associate a behavior with a significant
event. Another difference is that the response in the classical situation (e.g., salivation)
is elicited by a stimulus that comes before it, whereas the response in the operant case is not
elicited by any particular stimulus. Instead, operant responses are said to be emitted. The word
“emitted” further conveys the idea that operant behaviors are essentially voluntary in nature.
Understanding classical and operant conditioning provides psychologists with many tools for
understanding learning and behavior in the world outside the lab. This is in part because the two
types of learning occur continuously throughout our lives. It has been said that “much like the
laws of gravity, the laws of learning are always in effect” (Spreat & Spreat, 1982).
Classical conditioning is also involved in other aspects of eating. Flavors associated with certain
nutrients (such as sugar or fat) can become preferred without arousing any awareness of the
pairing. For example, protein is a US that your body automatically craves more of once you start
to consume it (UR): since proteins are highly concentrated in meat, the flavor of meat becomes a
CS (a cue that proteins are on the way), which perpetuates the cycle of craving for yet more
meat (this automatic bodily reaction now a CR).
In a similar way, flavors associated with stomach pain or illness become avoided and disliked.
For example, a person who gets sick after drinking too much tequila may acquire a profound
dislike of the taste and odor of tequila—a phenomenon called taste aversion conditioning. The
fact that flavors are often associated with so many consequences of eating is important for
animals (including rats and humans) that are frequently exposed to new foods. And it is clinically
relevant. For example, drugs used in chemotherapy often make cancer patients sick. As a
consequence, patients often acquire aversions to foods eaten just before treatment, or even
aversions to such things as the waiting room of the chemotherapy clinic itself (see Bernstein,
1991; Scalera & Bavieri, 2009).
Another interesting effect of classical conditioning can occur when we ingest drugs. That is,
when a drug is taken, it can be associated with the cues that are present at the same time (e.g.,
rooms, odors, drug paraphernalia). In this regard, if someone associates a particular smell with
the sensation induced by the drug, whenever that person smells the same odor afterward, it may
cue responses (physical and/or emotional) related to taking the drug itself. But drug cues have an
even more interesting property: They elicit responses that often “compensate” for the upcoming
effect of the drug (see Siegel, 1989). For example, morphine itself suppresses pain; however, if
someone is used to taking morphine, a cue that signals the “drug is coming soon” can actually
make the person more sensitive to pain. Because the person knows a pain suppressant will soon
be administered, the body becomes more sensitive, anticipating that “the drug will soon take care
of it.” Remarkably, such conditioned compensatory responses in turn decrease the impact of
the drug on the body—because the body has become more sensitive to pain.
This conditioned compensatory response has many implications. For instance, a drug user will be
most “tolerant” to the drug in the presence of cues that have been associated with it (because
such cues elicit compensatory responses). As a result, overdose is usually not due to an increase
in dosage, but to taking the drug in a new place without the familiar cues—which would have
otherwise allowed the user to tolerate the drug (see Siegel, Hinson, Krank, & McCully, 1982).
Conditioned compensatory responses (which include heightened pain sensitivity and decreased
body temperature, among others) might also cause discomfort, thus motivating the drug user to
continue usage of the drug to reduce them. This is one of several ways classical conditioning
might be a factor in drug addiction and dependence.
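A little arithmetic helps make the tolerance and overdose logic concrete. The sketch below (a Python illustration, not from the text) treats the net effect of a dose as the drug's effect minus the conditioned compensatory response elicited by the setting's cues; the numbers and setting labels are invented for the example.

```python
# Illustrative sketch (assumption, not from the module): why drug tolerance is
# strongest in a familiar setting. The net effect is the drug's impact minus
# the conditioned compensatory response elicited by the setting's cues.

drug_effect = 10.0  # arbitrary units of the drug's impact on the body
compensation = {"familiar room": 7.0, "new place": 0.0}  # cue-driven response

for setting, comp in compensation.items():
    net_effect = drug_effect - comp
    print(f"{setting}: net effect = {net_effect:.1f}")
# The same dose hits harder in the new place, which is how a "usual" dose
# can become an overdose when the familiar cues are absent.
```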
A final effect of classical cues is that they motivate ongoing operant behavior (see Balleine,
2005). For example, if a rat has learned via operant conditioning that pressing a lever will give it
a drug, in the presence of cues that signal the “drug is coming soon” (like the sound of the lever
squeaking), the rat will work harder to press the lever than if those cues weren’t present (i.e.,
there is no squeaking lever sound). Similarly, in the presence of food-associated cues (e.g.,
smells), a rat (or an overeater) will work harder for food. And finally, even in the presence of
negative cues (like something that signals fear), a rat, a human, or any other organism will work
harder to avoid those situations that might lead to trauma. Classical CSs thus have many effects
that can contribute to significant behavioral phenomena.
[Image courtesy
of Bernard W. Balleine]
In blocking, a CS that already predicts a US prevents ("blocks") learning about a new CS that is
added alongside it, because the old CS leaves no prediction error for the new one to explain.
Blocking and other related effects indicate that the learning process tends to take in the most
valid predictors of significant events and ignore the less useful ones. This is common in the real
world. For example, imagine that your supermarket puts big star-shaped stickers on products that
are on sale. Quickly, you learn that items with the big star-shaped stickers are cheaper. However,
imagine you go into a similar supermarket that not only uses these stickers, but also uses bright
orange price tags to denote a discount. Because of blocking (i.e., you already know that the star-
shaped stickers indicate a discount), you don’t have to learn the color system, too. The star-
shaped stickers tell you everything you need to know (i.e., there's no prediction error for the
discount), and thus the color system is irrelevant.
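The idea that learning is driven by prediction error can be sketched with a small simulation in the spirit of the Rescorla-Wagner model cited below (Rescorla & Wagner, 1972). This is an illustrative Python example, not the authors' own implementation; the learning rate and number of trials are arbitrary assumptions.

```python
# Illustrative sketch (assumption, not from the module): blocking in a
# prediction-error model in the spirit of Rescorla & Wagner (1972).
# A cue that already predicts the outcome leaves no error for a new cue.

learning_rate = 0.3
discount = 1.0                          # the outcome both cues could predict
strength = {"star": 0.0, "orange": 0.0}

# Phase 1: the star sticker alone is paired with the discount.
for _ in range(20):
    error = discount - strength["star"]
    strength["star"] += learning_rate * error

# Phase 2: star sticker and orange tag appear together with the same discount.
for _ in range(20):
    prediction = strength["star"] + strength["orange"]
    error = discount - prediction        # near zero: the star already predicts it
    strength["star"] += learning_rate * error
    strength["orange"] += learning_rate * error

print(strength)  # "orange" stays close to 0.0, i.e., learning about it was blocked
```

Because the star sticker already predicts the discount by the end of Phase 1, the prediction error in Phase 2 is essentially zero, so the orange tag acquires almost no associative strength.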
Classical conditioning is strongest if the CS and US are intense or salient. It is also best if the CS
and US are relatively new and the organism hasn’t been frequently exposed to them before. And
it is especially strong if the organism’s biology has prepared it to associate a particular CS and
US. For example, rats and humans are naturally inclined to associate an illness with a flavor,
rather than with a light or tone. Because foods are most commonly experienced by taste, if there
is a particular food that makes us ill, associating the flavor (rather than the appearance—which
may be similar to other foods) with the illness will more greatly ensure we avoid that food in the
future, and thus avoid getting sick. This sorting tendency, which is set up by evolution, is
called preparedness.
There are many factors that affect the strength of classical conditioning, and these have been the
subject of much research and theory (see Rescorla & Wagner, 1972; Pearce & Bouton, 2001).
Behavioral neuroscientists have also used classical conditioning to investigate many of the basic
brain processes that are involved in learning (see Fanselow & Poulos, 2005; Thompson &
Steinmetz, 2009).
Observational Learning
Not all forms of learning are accounted for entirely by classical and operant conditioning.
Imagine a child walking up to a group of children playing a game on the playground. The game
looks fun, but it is new and unfamiliar. Rather than joining the game immediately, the child opts
to sit back and watch the other children play a round or two. Observing the others, the child takes
note of the ways in which they behave while playing the game. By watching the behavior of the
other kids, the child can figure out the rules of the game and even some strategies for doing well
at the game. This is called observational learning.
Children observing a social model (an experienced chess player) to learn the rules and strategies
of the game of chess. [Image: David R. Tribble, https://goo.gl/nWsgxI, CC BY-SA 3.0,
https://goo.gl/uhHola]
Observational learning is a component of Albert Bandura’s Social Learning Theory (Bandura,
1977), which posits that individuals can learn novel responses via observation of key others’
behaviors. Observational learning does not necessarily require reinforcement, but instead hinges
on the presence of others, referred to as social models. Social models are typically of higher
status or authority compared to the observer, examples of which include parents, teachers, and
police officers. In the example above, the children who already know how to play the game
could be thought of as being authorities—and are therefore social models—even though they are
the same age as the observer. By observing how the social models behave, an individual is able
to learn how to act in a certain situation. Other examples of observational learning might include
a child learning to place her napkin in her lap by watching her parents at the dinner table, or a
customer learning where to find the ketchup and mustard after observing other customers at a hot
dog stand.
Conclusion
We have covered three primary explanations for how we learn to behave and interact with the
world around us. Considering your own experiences, how well do these theories apply to you?
Maybe when reflecting on your personal sense of fashion, you realize that you tend to select
clothes others have complimented you on (operant conditioning). Or maybe, thinking back on a
new restaurant you tried recently, you realize you chose it because its commercials play happy
music (classical conditioning). Or maybe you are now always on time with your assignments,
because you saw how others were punished when they were late (observational learning).
Regardless of the activity, behavior, or response, there’s a good chance your “decision” to do it
can be explained based on one of the theories presented in this module.
Outside Resources
Article: Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American
Psychologist, 43, 151–160.
Book: Bouton, M. E. (2007). Learning and behavior: A contemporary synthesis. Sunderland,
MA: Sinauer Associates.
Book: Bouton, M. E. (2009). Learning theory. In B. J. Sadock, V. A. Sadock, & P. Ruiz
(Eds.), Kaplan & Sadock’s comprehensive textbook of psychiatry (9th ed., Vol. 1, pp. 647–658).
New York, NY: Lippincott Williams & Wilkins.
Book: Domjan, M. (2010). The principles of learning and behavior (6th ed.). Belmont, CA:
Wadsworth.