3.2. Behavioral Learning Theories Lecture

Lecture notes

Chapter 4: Behavioral Learning Theories

Give me a child, and I will shape him into anything.

-B.F. Skinner

Introduction

Some of the most popular learning theories are lodged under behaviorism, which is
primarily concerned with influencing change in one's behavior. Hence, in this chapter, you
will discover the fundamental tenets of behaviorism and how these concepts are applied to
facilitating a learner-centered classroom. It is therefore important for you to immerse yourself
in the principles so that you can readily use them in facing the real world of teaching. In this
chapter, you are expected to:

• analyze learning theories under behaviorism;

• discuss the different phenomena of learning under behaviorism; and

• cite applications of behaviorism theories to teaching.

Lesson 1: Pavlov's Classical Conditioning

Pavlovian Conditioning

In the parlance of psychology, behaviorism is concerned with behavioral changes
and the role of the environment in these changes. Behaviorists claim that nurture is crucial in
the process of acquiring knowledge (Dastpak et al., 2017). One known behaviorist is John B.
Watson (1982), who writes that the ultimate goal of behaviorism is to derive laws to explain
the relationships existing among antecedent conditions (stimuli), behavior (responses), and
following conditions (rewards, punishments, or neutral effects). The theory of behaviorism
may be dichotomized into associationism and reinforcement.

The name Ivan Pavlov (1849-1936) rings a bell within the context of the association
theory in behaviorism. Pavlov was a physiologist who, out of serendipity, discovered
classical conditioning, also known as the association theory. In 1904, he won the Nobel
Prize for his outstanding studies on the physiology of digestion, and he spent the rest of his
life studying the reflexes of dogs, which led him to that discovery.

Still recognized as an essential part of contemporary psychological knowledge,
classical conditioning has become the basis for many early learning theories. In his
discovery, Pavlov found that not only the sight of food triggers the salivation of the
dog; any other stimulus may produce the same effect if paired with the food (Le Francois,
2000). In another version, the salivation of the dog is influenced by associating the steps of
the attendant with the food (Schunk, 2012).

Pavlovian Conditioning in a Nutshell

The theory of Pavlovian conditioning involves a set of multilayered procedures.
Initially, the food is called the unconditioned stimulus (UCS). In psychology, any
environmental event that affects the organism is called a stimulus. The food is an
unconditioned stimulus because it leads to an unconditioned response (UCR) without any
learning taking place. The immediate salivation of the dog is referred to as the UCR. The
UCS and UCR are considered unlearned stimulus-response units termed reflexes.

Conditioning the dog requires recurrent presentation of a neutral stimulus paired with
the UCS. For instance, a buzzer was sounded repeatedly and caused no salivation at
all. However, when the buzzer was paired with the food, the dog salivated. Later, upon
merely hearing the buzzer, the dog salivated (see Figure 14). The buzzer is now called a
conditioned stimulus (CS) that elicits the salivation of the dog, now termed a conditioned
response (CR).

When applied in the classroom, the use of a pointer or stick to whip unruly learners
in class may affect other pupils. They may associate the stick with whipping, thus triggering
fear. Later, merely hearing or seeing a stick in class may elicit fear among them. This is why
expert educators in the country suggest that classroom teachers avoid using sticks
as pointers. Instead, they are advised to use their open palm to point to words on the
chalkboard.

Other Phenomena in Classical Conditioning

According to Bouton and Moody (2004), when the CS is repeatedly unreinforced, that
is, presented without the UCS, the CR will eventually diminish in intensity and effect. This
phenomenon is called extinction. Even after extinction, with the passage of time, the CR
may still be restored (Robins, 1990). This recovery-after-extinction phenomenon is termed
spontaneous recovery. This means that extinction does not completely involve unlearning
of the pairings (Redish et al., 2018).
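For intuition, the acquisition-then-extinction pattern described above can be sketched as a toy associative-strength model (a Rescorla-Wagner-style update). The function name, learning rate, and trial counts below are illustrative assumptions for demonstration, not part of Pavlov's original account:

```python
# Toy model of conditioning: the associative strength V of the CS grows
# toward a maximum (lam) on paired CS+UCS trials and decays toward 0 on
# unreinforced trials (extinction). Purely illustrative.

def update(v, reinforced, alpha=0.3, lam=1.0):
    """One trial: move strength toward lam (CS paired with UCS)
    or toward 0 (CS presented alone)."""
    target = lam if reinforced else 0.0
    return v + alpha * (target - v)

v = 0.0
# Acquisition: buzzer (CS) repeatedly paired with food (UCS).
for _ in range(10):
    v = update(v, reinforced=True)
acquired = v        # near 1.0: the buzzer alone now elicits salivation

# Extinction: buzzer presented repeatedly without food.
for _ in range(10):
    v = update(v, reinforced=False)
extinguished = v    # back near 0.0: the CR has diminished

print(round(acquired, 3), round(extinguished, 3))  # prints 0.972 0.027
```

Note that in this sketch extinction drives the strength toward zero but never exactly to zero, which loosely mirrors the point above that extinction is not complete unlearning.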

Generalization is another phenomenon in Pavlov's classical conditioning (Figure
15). When the dog salivates upon merely hearing the buzzer, it is likely to produce a similar
CR when it hears a faster or a slower beat of the buzzer or any device with quite a similar
sound. Harris (2006), however, pointed out that the more the new stimulus differs from the
CS, the less generalization surfaces.

When the dog recognizes that the sound of the buzzer is different from other stimuli
(i.e., the sound of a bell), thus salivating only upon hearing the buzzer, discrimination
occurs. This is a phenomenon in which the subject reacts differently to other stimuli. This
means that it can distinguish the CS very strongly.

Watsonian Conditioning

During the dawn of the 20th century, a psychologist greatly influenced by Pavlov
rose and aimed to revolutionize the status of American psychology. He was John Broadus
Watson (1878-1958). According to Watson, if Pavlov could successfully prove the
association between stimulus and response, people could likewise be conditioned to
associate certain feelings, behaviors, instances, and even symbols. He theorized that
unlearning and relearning can occur. He also posited that humans are born with emotional
responses such as love, fear, and rage.

Perhaps the most popular conditioning experiment he did was "Little Albert." Here,
Watson tried to prove that emotions can be learned. Initially, Albert played with a white rat,
eliciting no fear upon seeing it. After some time, Watson and his partner, Rosalie Rayner,
accompanied the appearance of the white rat with a banging sound, so Albert was
conditioned to fear the rat. Later, Watson and Rayner presented other objects resembling
the rat. They found that Albert also feared these objects even without the rat's presence.
This experiment became the anchor of Watson's belief that learning happens by
association.

Lesson 2: Thorndike's Connectionism

Within the first half of the 20th century in the United States, Edward L. Thorndike
(1874-1949) was prominent because of his laws of learning, primarily under the umbrella of
associationism or connectionism (Mayer, 2003). This theory is mainly concerned with the
connection between stimulus and response (S-R). According to Karadut (2012), Thorndike
is one of the few psychologists who focused on education. In proving his findings, Thorndike
used an experimental approach in measuring a student's academic achievement. Thorndike
believed that forming associations or connections between sensory experiences and neural
impulses results in the prime type of learning. The neural impulses, called responses, are
behaviorally manifested. He believed that learning often occurs by trial and error (selecting
and connecting).

Laws of Learning

Thorndike's basic ideas rest in the laws of exercise and effect. Firstly, the Law of
Exercise is divided into two parts: the law of use and the law of disuse. The law of use
means that the frequent recurrence of a response to a stimulus strengthens their connection.
Meanwhile, the law of disuse means that when a response is not made to a stimulus, the
connection's strength is weakened or even forgotten.

Drills are vital to acquire and sustain learning. In the very words of Thorndike (1913),
bonds between stimuli and responses are strengthened through being exercised frequently,
recently, and "vigorously." Learners usually learn faster when they often apply a certain skill
(e.g., spelling new terms) and tend to forget when such a response does not recur over
some time (Karadut, 2012). This explains why pianists, for example, repeatedly practice their
pieces before their performances. By practicing (law of use), they ensure that they will play
correctly. If they do not exercise playing their pieces (law of disuse), they may encounter
difficulty in smoothly accomplishing their performances.

Thorndike later revised the Law of Exercise. He conceded that mere practice does
not bring improvement in learning. Practicing, according to Thorndike, is not sufficient.
Hence, constant practice must be followed by some reward or satisfaction to the learner. In
short, the pupil must be motivated to learn.

The Law of Effect, meanwhile, emphasizes that if a response is followed by a
"satisfying" state of affairs, the S-R connection is strengthened; if a response is followed by
an "annoying" state of affairs, the S-R connection is weakened. Thus, Thorndike posited that
satisfiers and annoyers are critical to learning. This explains why teachers give favorable
comments to students who show pleasant behavior in class. When such ego-boosting
comments satisfy the learners, the chance that they will repeat the behavior increases.

The third law of learning also has something to do with boosting human motivation.
The law of readiness states that if one is prepared to act, to do so is rewarding, and not to do
so is punishing. In short, before learning commences, one must be physically, emotionally,
mentally, and psychologically prepared. This law is illustrated when a learner knows the
answer to a particular question and thus raises his or her hand. Calling on him or her to
recite is rewarding. However, calling on a student who does not know the answer may be
annoying on his or her part, thus weakening the bond of stimulus and response. The law of
readiness is also used in sequencing topics. When students are ready to learn a particular
action (in terms of developmental level or prior skill acquisition), behaviors that foster this
learning are rewarding. Meanwhile, when students are not ready to learn or do not possess
prerequisite skills, attempting to learn is punishing and even becomes a waste of time.

Other Laws of Learning


Thorndike also observed that the first thing learned has the strongest S-R bond and
is almost inerasable. He calls this the Law of Primacy. It implies that learning a concept
or skill again is more difficult than learning it the first time. This explains why teachers
correct students who have misconceptions in a new lesson. The application part in
a lesson plan or daily lesson log is strategically situated before generalizing a concept so
that teachers can detect the misunderstandings of the students in a certain lesson. When a
misconception is not corrected the first time, it may lead to habit formation. In English
Language Teaching, a recurring mistake among learners is called fossilization (Demirezen &
Topal, 2015). Relearning the correct concept later will be confusing to the students or even
time-consuming. Hence, the first (prime) learning experience should be as functional, as
precise, and as positive as possible so that it paves the way for the more comfortable
learning experiences to follow.

As much as possible, teachers should provide activities that are highly relevant to
the learners. This teaching principle is primarily rooted in Thorndike's Law of Intensity.
Thorndike believed that exciting, immediate, or even dramatic learning within the real context
of the students would tremendously facilitate learning. Hence, the Law of Intensity implies
that exposing the students to real-world applications of the skills and concepts makes them
most likely to remember the experience. The current K to 12 curriculum of the country
immerses senior high school students in a short-term real-world application called "on-the-
job training" or OJT. They receive a foretaste of how the skills and concepts they learn in
class are applied in the real workplace. In that sense, the learning experience becomes more
intense and will most likely be remembered.

The concepts or skills most recently learned are least forgotten. This is the gist of the
Law of Recency. Thus, the longer learners are isolated in time from a new concept, the more
difficult it is for them to remember it. For instance, in a foreign language class (e.g., French),
it is easier to recall and recite what was learned minutes ago than what was taught the
previous month. This implies that teachers should facilitate learning by providing the
learners with a clear connection between the previous and the current learning experience.
Letting the students mention or apply the formerly learned skill or concept in the new
learning experience may refresh their memory, thus lowering the probability of forgetting.

Thorndike also mentioned that learners tend to show an almost identical response to
an entirely different stimulus if, over recurring instances, that stimulus is changed only
slightly from the previously known one. Thorndike coined this the Principle of
Associative Shifting. For example, to teach pupils to add three-digit numbers, teachers let
them master adding one-digit numbers first. As they solve problems with increasing numbers
of digits, pupils will tend to associate the response with the previously paired S-R.

Transfer occurs when the contexts of learning have identical elements and call
for similar responses. Thorndike called this generalization (Thorndike, 1913). This implies
that skills should not be taught in one isolated topic only; other related subjects or topics
should also provide opportunities for the students to apply them. In a Social Studies
class, it is not enough to teach the students to read maps; it is better if they are also
taught to calculate miles from inches. Later, that skill is reinforced when they create their
own maps and map problems to solve.
Lesson 3: Skinner's Operant Conditioning

One of the most popular behavioral theorists of all time is B.F. (Burrhus Frederic)
Skinner (1904-1990). He postulated operant conditioning. Classical conditioning refers
to the association of stimuli, whereas operant conditioning actively involves the subject's
participation. The subject in operant conditioning has a choice to respond. In other words,
operant conditioning is the type of learning whereby learning occurs as a consequence of
the learner's behavior.

B.F. Skinner made this conclusion after experimenting on animals with his
Skinner box, a device that modified the animal's behavior. In his experiment, he put a rat in
a box with a lever, a bowl, and a closed chamber. If the lever was pushed, the chamber
opened and dispensed food. Unaware of this mechanism, the rat accidentally pushed the
lever, and food was dispensed. The rat learned that continually pushing the lever could
open the food dispenser to the bowl. Skinner termed the food in such an experiment the
reward.

Reinforcement

Skinner's operant conditioning is dichotomized into reinforcement and punishment. Each
category is further divided into positive and negative. Reinforcement is defined as something
that strengthens the behavior and is sometimes called the response strengthener (Schultz,
2006). Positive reinforcement is defined as the addition of a pleasant stimulus. This is
exactly what is illustrated in the Skinner box. The dispensed food became a positive
reinforcement that caused the rat to continually push the lever (behavior).

Positive reinforcement has many classroom applications. Preschool teachers stamp
three big stars on the hands of their pupils who behaved well throughout the class,
achieved the highest score, or were friendly during class time. To maximize the
use of positive reinforcement, however, teachers should make it clear to their students
why they are stamping the three stars and what the three big stars mean. In that way, the
pupils will be motivated to repeat their pleasant behavior so that they can eventually gain
the reward: the stamp.

By building operant conditioning techniques into lesson plans, it is easily possible to
teach children useful skills as well as good behaviors. By using symbols like smiley faces,
"Good Work" stamps, stickers, and even simple ticks when a child does something correctly,
you are encouraging them to repeat such satisfying work further down the line.

Meanwhile, negative reinforcement is taking something away from a situation to
subsequently increase the occurrence of the response. In other words, it is taking away an
unpleasant consequence to cause the behavior to happen again. Stimuli that often
function as negative reinforcers include loud noises, criticisms, annoying people, and low
grades, because actions that remove them tend to be reinforcing. For instance, Teacher X
wants her Grade 3 class to master the multiplication table, so she gives the pupils a problem
set on multiplication. After a set is solved, they recite the multiplication table from
multiples of 5 to 10. If they master the multiplication table, the problem set is withdrawn, thus
strengthening the behavior: perfectly reciting the multiplication table.

Schedules of Reinforcement

According to Skinner (1938), as cited by Zeiler (1977), schedules refer to when
reinforcement is applied. Table 1 summarizes the reinforcement schedules according to
Skinner.

Table 1. Reinforcement schedules according to Skinner

Continuous Schedule. Reinforcement is given every time the animal gives the desired
response. Classroom application: students receive feedback after each response concerning
the accuracy of their work.

Intermittent Schedule. Reinforcement is given irregularly as the animal gives the desired
response. Classroom application: students are not called on every time they raise their
hands, not praised after working each problem, and not always told they are behaving
appropriately.

Fixed interval. The time interval is constant from one reinforcement to the next. Classroom
application: appreciating a student's answer is done for the first response made after 5
minutes.

Variable interval. The time interval varies from occasion to occasion around some average
value. Classroom application: the first correct response after 5 minutes is reinforced, but the
time interval varies (e.g., 2, 3, 7, or 8 minutes).

Ratio Schedule. Reinforcement is given depending on the number of correct responses or
the rate of responding. Classroom application: the teacher gives praise to a student after the
fifth correct answer.

Fixed ratio. Every nth correct response is reinforced, where n is constant. Classroom
application: every 10th correct response receives reinforcement.

Variable ratio. Every nth correct response is reinforced, but the value varies around an
average number n. Classroom application: a teacher may give free time periodically around
an average number of completed assignments.
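The ratio schedules in Table 1 can be expressed as short decision rules. The sketch below is for intuition only; the function names and parameter values are illustrative assumptions, not Skinner's own formulation:

```python
import random

# Decision rules for the two ratio schedules in Table 1.

def fixed_ratio(n, response_count):
    """Reinforce every nth correct response, where n is constant."""
    return response_count % n == 0

def variable_ratio(mean_n, rng=random):
    """Reinforce unpredictably, on average once per mean_n responses."""
    return rng.random() < 1.0 / mean_n

# Fixed ratio with n = 10: out of 30 correct responses, the 10th, 20th,
# and 30th receive reinforcement.
reinforced = [r for r in range(1, 31) if fixed_ratio(10, r)]
print(reinforced)  # [10, 20, 30]
```

The fixed-ratio rule is fully predictable, which is why, on a variable-ratio schedule, responding tends to be steadier: the learner cannot tell which response will be reinforced.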

Punishment

Operant conditioning also includes punishment, whose main aim is to weaken the
response. However, punishment does not necessarily eliminate the behavior; when the
threat of punishment is removed, the punished response may recur (Merrett & Wheldall,
1984). Skinner believed that positive punishment is the addition of an unpleasant stimulus
to decrease the behavior. For instance, Max, a grade 6 pupil, had been neglecting his Math
assignments. He completely hated washing the dishes. To decrease his behavior of
neglecting his assignments, his parents assigned him to wash the dishes after dinner. After
some time, Max eventually became more diligent in completing his assignments in Math.
The addition (positive) of the work Max hates (punishment) decreases the likelihood of the
behavior (neglecting the assignments) occurring.

Negative punishment, meanwhile, is the removal of a rewarding stimulus to decrease
the behavior. For example, Jennie, a grade 3 pupil, is always noisy during group activities.
Her teacher calls her attention and warns her that she cannot participate in the subsequent
fun activity if she continues to behave noisily. Joining a fun activity is a pleasant stimulus.
Withdrawing it (negative) is believed to reduce the noisy behavior (punishment).

Table 2. Relationship of reinforcement and punishment

Positive (adding). Reinforcement: adding something to increase the behavior. Punishment:
adding something to decrease the behavior.

Negative (subtracting). Reinforcement: subtracting something to increase the behavior.
Punishment: subtracting something to decrease the behavior.

Note: Reinforcement increases the behavior; punishment decreases it.
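The four cells of Table 2 reduce to two independent choices: whether a stimulus is added or subtracted, and whether the behavior is meant to increase or decrease. A tiny helper makes the mapping explicit (the function name and string labels are illustrative assumptions):

```python
# Maps the two choices behind Table 2 onto the four quadrant names.

def classify(stimulus_change, behavior_effect):
    """stimulus_change: 'add' or 'subtract';
    behavior_effect: 'increase' or 'decrease'."""
    kind = "reinforcement" if behavior_effect == "increase" else "punishment"
    sign = "positive" if stimulus_change == "add" else "negative"
    return f"{sign} {kind}"

print(classify("add", "increase"))       # positive reinforcement
print(classify("subtract", "increase"))  # negative reinforcement
print(classify("add", "decrease"))       # positive punishment
print(classify("subtract", "decrease"))  # negative punishment
```

This underlines that "positive" and "negative" describe the stimulus operation (adding vs. subtracting), not whether the outcome is pleasant.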

Alternatives to Punishment

Punishment is often applied in schools to address disruptions. Maag (2001)
enumerated some common punishments like loss of privileges, removal from the
classroom, in- and out-of-school suspensions, and expulsion. Nonetheless, there are
several alternatives to punishment (see Figure 18). The primary advantage of these
alternatives over punishment is that they show the student how to behave adaptively.

• Change the discriminative stimuli: move a misbehaving student away from other
misbehaving students.

• Allow the unwanted behavior to continue: have a student who stands when he or she
should be sitting continue to stand.

• Extinguish the unwanted behavior: ignore minor misbehavior so that it is not reinforced
by teacher attention.

• Condition an incompatible behavior: reinforce learning progress, which occurs only
when a student is not misbehaving.

Lesson 4: Neo-Behaviorism

As behaviorism developed, one more sub-branch emerged to fill the gap between
behaviorism and cognitive learning beliefs. It is called neo-behaviorism. Notable
psychologists who contributed much to the development of neo-behaviorism include
Edward Tolman and Albert Bandura. The neo-behaviorists were more self-consciously
trying to formalize the laws of behavior. They believed that introducing mediating variables
into the established stimulus-response theory contributes much to learning.

Tolman's Purposive Behaviorism

Purposive learning encapsulates Edward Tolman's theory. He insisted that all
behavior is directed by a purpose. Hence, all behaviors are focused on achieving
some goal through cognition, an intervening variable. For Tolman, a behavior is never
merely the result of mindless S-R connections. He further believed that "mental processes
are to be identified in terms of the behaviors to which they lead." In other words, his
intervening variables are tied to observable behaviors.

In his experiment, two groups of rats were put in mazes for 17 days. The first group
of rats was fed (rewarded) every time they found their way out. The second group of rats
was non-reinforced: the rats did not receive any food from days 1 to 10, even when they
reached the end point. It was observed that during those first 10 days, the rats nonetheless
developed a cognitive map of the maze. Hence, when food was introduced from day 11
onward, they were motivated to perform and found the end point as fast as, or faster than,
the first group because they were hungry.

From this experiment, Tolman concluded that an organism performs a behavior
because it has a purpose or a goal. It also led to the birth of latent learning, a form of
learning that occurs without any visible reinforcement of the behavior or associations that
are learned. Latent learning shows itself only when the organism sees a reason to perform
it. For instance, a 4-year-old boy observed his father using the TV remote control. When
left alone with the opportunity to turn on the TV using the remote control, he could easily
demonstrate the learning.

Another distinctive feature of the purposive behaviorism is the coining of the term
"cognitive map." According to Tolman, it is a mental illustration of the layout of the
environment. It is believed that everything in our cognitive map influences our interaction
with the environment. Hence, making our cognitive map more detailed and comprehensive
helps facilitate our learning.
Tolman's Other Salient Principles

1. Behavior is always purposive. By this, he meant that all behavior is directed toward
accomplishing a specific goal. In its purest sense, a demonstration of learning is the outcome
of possessing a purpose to show it.

2. Behavior is cognitive. The expectations that underlie and guide behavior are cognitions.
This means that an organism is mindful of the connections between specific actions and
certain outcomes (cognitive map). Such a mental map is developed by expanding
experiences, coupled with stimuli and rewards. Notably, Tolman considered a cognition
an abstraction or a theoretical invention. He believed that cognitions should only be
inferred from behavior, not through introspection.

3. Reinforcement establishes and confirms expectancies. Tolman also underscored the
role of reinforcement in learning. As previously stated, learning, according to Tolman, deals
with connections between stimuli and expectancies or perceptions, representations, needs,
and other intervening variables. Because expectancies develop in situations in which
reinforcement is possible, the role of reinforcement is primarily one of confirming
expectancies. The more often an expectancy is established, the more likely it is that the
stimuli (signs) associated with it will become linked with the relevant significate (expectancy).

Bandura's Social Learning Theory

Under the social learning theory, learning occurs within the social context and by
observing and copying others' behavior or imitation (Akers & Jensen, 2006). Albert Bandura
is the proponent of this theory, where modeling is a crucial component. Modeling refers to a
change in one's behavior by observing models (Rosenthal & Bandura, 1978). Historically,
modeling was equated with imitation, but modeling is a more inclusive concept (Mussen,
1983).

Bandura's theory is also called the social-cognitive theory because of the influence
of cognition in his theory. He is one of the few behaviorists who believed that humans
process information through cognition. The term self-efficacy has bridged social learning
theory and cognitive psychology. Self-efficacy is defined as one's evaluation of his or her
own ability to accomplish or perform an action in a particular context. Those with high self-
efficacy see themselves as capable, or useful, in dealing with the world and with other
people.

The following are the fundamental principles of social learning theory:

1. One may learn without changing his or her behavior. This is in contrast to what other
behaviorists discussed earlier; for them, a change in behavior is always an indication of
learning.

2. Learning takes place by imitating a model. That model possesses characteristics (i.e.,
intelligence, physical aura, popularity, or talent) that a learner finds attractive and desirable.
Admiration plays an essential role in imitating a particular behavior of the model. This
explains why speech teachers recite a crucial sound first, then guide the learners until they
can recite the sound correctly by themselves.
3. An observing person will always react to the one being imitated depending on
whether the model is rewarded or punished. If the model receives rewards, the imitator
copies the behavior, and if the former is punished, the latter will most likely avoid copying the
behavior.

4. Acquiring and performing behavior are different. Bandura made a demarcation line
between performing and acquiring a behavior. One can acquire the behavior by observing
someone but may opt not to perform it until the context requires so.

5. Interaction is vital for successful social learning. Social learning may occur
successfully when learners interact with their co-learners and models (Mourlam, 2013).
Learning in isolation may dampen self-efficacy. This means that copying behavior involves
the guiding of one person's behavior by another person, such as when an art instructor gives
guidance and corrective feedback to an art student who is attempting to draw a picture. With
copying behavior, the final "copied" response is reinforced and thereby strengthened.

6. Learning is self-regulated. Bandura noted that self-regulation occurs when individuals
observe, assess, and judge their behavior against their own standards, and subsequently
reward or punish themselves.

7. Learning may be acquired vicariously. Vicarious learning is acquired from observing
the consequences of others' behavior. For instance, when a model is given praise and
rewards, the observer will likely repeat the copied behavior because he or she feels the
same satisfaction, too.

8. Learning may be reinforced by the model or by others. Compliments coming from the
model may strengthen the occurrence of the behavior. Similarly, when a person is praised by
his or her peers because of a change in behavior, he or she may show an increase in that
behavior.

Components of Successful Modeling

1. Attention. To meaningfully perceive relevant behaviors, one should pay attention. At any
given moment, one can attend to many activities. The characteristics of the model and the
observer influence one's attention to models. This explains why teachers make use of bright
colors or large fonts in their instructional aids for modeling to capture the attention of the
learners.

2. Retention. Paying attention to something should result in retention, which requires
cognitively organizing, rehearsing, coding, and transforming modeled information for storage
in memory. Rehearsal, a mental review of information, also serves a vital role in retention.
Sometimes, the observer retains the information through association and cognitive patterns.
In a dance class, for instance, an observer counts 1-2-3-4-5 with corresponding steps to
store the dance steps in his or her memory. Rehearsal without coding and coding without
rehearsal are less effective.

3. Production. To strengthen learning through observation, one needs to translate the visual
and symbolic conceptions into observable behavior. Subsequent production of this behavior
indicates an increase in learning. Bandura noted that observers refine their skills with
practice, corrective feedback, and reteaching. Sometimes, problems in producing modeled
behaviors arise not only because information is inadequately coded but also because
learners experience difficulty translating coded information in memory into overt action. For
example, a child may have a basic understanding of how to tie shoelaces but not be able to
translate that knowledge into behavior. Teachers who suspect that students are having
trouble demonstrating what they have learned may need to test students in different ways.

4. Motivation. Motivation influences observational learning. Individuals perform actions
they believe will result in rewarding outcomes and avoid acting in ways they think will be
responded to negatively (Schunk, 1987). Persons also act based on their values, performing
activities they value and avoiding those they find unsatisfying, regardless of the
consequences to themselves or others. Motivation is a critical process of observational
learning that teachers promote in various ways, including making learning interesting,
relating material to student interests, having students set goals and monitor goal progress,
providing feedback indicating increasing competence, and stressing the value of learning.
