
Introduction

Training constitutes a basic concept in human resource development. It is concerned with developing a particular skill to a desired standard by instruction and practice. Training is a highly useful tool that can bring an employee into a position where they can do their job correctly, effectively, and conscientiously. Training is the act of increasing the knowledge and skill of an employee for doing a particular job.

Definition of Training:
Dale S. Beach defines training as "the organized procedure by which people learn knowledge and/or skill for a definite purpose." Training refers to the teaching and learning activities carried on for the primary purpose of helping members of an organization acquire and apply the knowledge, skills, abilities, and attitudes needed for a particular job and organization.
According to Edwin Flippo, training is the act of increasing the skills of an employee for doing a particular job.

Need for Training:

Every organization should provide training to all employees irrespective of their qualifications and skills. Specifically, the need for training arises for the following reasons:
1. Environmental changes:
Mechanization, computerization, and automation have resulted in many changes that require a trained staff possessing up-to-date skills. The organization should train its employees to equip them with the latest technology and knowledge.
2. Organizational complexity:
With modern inventions, technological upgrades, and diversification, most organizations have become very complex. This has aggravated the problems of coordination, so, in order to cope with these complexities, training has become mandatory.
3. Human relations:
Every management has to maintain very good human relations, and this has made training one of the basic conditions for dealing with human problems.
4. To match employee specifications with job requirements and organizational needs:
An employee's specifications may not exactly suit the requirements of the job and the organization, irrespective of past experience and skills. There is always a gap between an employee's present specifications and the organization's requirements, and training is required to fill this gap.
5. Change in the job assignment:
Training is also necessary when an existing employee is promoted to a higher level or transferred to another department. Training is also required to equip long-serving employees with new techniques and technologies.

Importance of Training:
Training of employees and managers is absolutely essential in this changing environment. It is an important activity of HRD which helps in improving the competency of employees. Training gives many benefits to employees, such as improvement in efficiency and effectiveness, development of self-confidence, and assistance in self-management.
The stability and progress of an organization always depend on the training imparted to its employees. Training becomes mandatory at each step of expansion and diversification. Only training can improve quality and reduce wastage to a minimum. Training and development are also essential for adapting to a changing environment.

Types of Training:
Various types of training can be given to employees, such as induction training, refresher training, on-the-job training, vestibule training, and training for promotions. Some of the commonly used training programs are listed below:
1. Induction training:
Also known as orientation training, this is given to new recruits in order to familiarize them with the internal environment of an organization. It helps employees understand the procedures, code of conduct, and policies existing in that organization.
2. Job instruction training:
This training provides an overview of the job, and experienced trainers demonstrate the entire job. Additional training is offered to employees after evaluating their performance, if necessary.
3. Vestibule training:
This is training on the actual work to be done by an employee, but conducted away from the workplace.
4. Refresher training:
This type of training is offered in order to incorporate the latest developments in a particular field. It is imparted to upgrade the skills of employees and can also be used when promoting an employee.
5. Apprenticeship training:
An apprentice is a worker who spends a prescribed period of time working under a supervisor.

Evaluation Meaning
Evaluation is an integral part of most instructional design (ID) models. Evaluation tools and methodologies help determine the effectiveness of instructional interventions. Despite its importance, there is evidence that evaluations of training programs are often inconsistent or missing (Carnevale & Schulz, 1990; Holcomb, 1993; McMahon & Carter, 1990; Rossi et al., 1979). Possible explanations for inadequate evaluations include: insufficient budget allocated; insufficient time allocated; lack of expertise; blind trust in training solutions; or lack of methods and tools (see, for example, McEvoy & Buller, 1990).
Part of the explanation may be that the task of evaluation is complex in itself.
Evaluating training interventions with regard to learning, transfer, and
organizational impact involves a number of complexity factors. These complexity
factors are associated with the dynamic and ongoing interactions of the various
dimensions and attributes of organizational and training goals, trainees, training
situations, and instructional technologies.
Evaluation goals involve multiple purposes at different levels. These purposes
include evaluation of student learning, evaluation of instructional materials,
transfer of training, return on investment, and so on. Attaining these multiple
purposes may require the collaboration of different people in different parts of an
organization. Furthermore, not all goals may be well-defined and some may
change.
The sections below discuss different approaches to the evaluation of training, indicating how the complexity factors associated with evaluation are addressed, and suggest how technology can be used to support this process. In the following section, different approaches to evaluation and associated models are discussed. Next, recent studies concerning evaluation practice are presented. In the final section, opportunities for automated evaluation systems are discussed. The article concludes with recommendations for further research.
Evaluation involves the assessment of the effectiveness of training programs. This assessment is done by collecting data on whether the participants were satisfied with the deliverables of the training program, whether they learned something from the training, and whether they are able to apply those skills at their workplace. There are different tools for the assessment of a training program, depending upon the kind of training conducted.

Since organisations spend a large amount of money on training, it is important for them to understand its usefulness. For example, if a certain technical training was conducted, the organisation would be interested in knowing whether the new skills are being put to use at the workplace, or in other words, whether the effectiveness of the worker is enhanced. Similarly, in the case of behavioural training, the training would be evaluated on whether there is a change in the behaviour, attitude, and learning ability of the participants.

Benefits of Training Evaluation

Evaluation acts as a check to ensure that the training is able to fill the competency gaps within the organisation in a cost-effective way. This is especially important given that organisations are trying to cut costs and expand globally. Some of the benefits of training evaluation are as under:
Evaluation ensures accountability - Training evaluation ensures that training programs address the identified competency gaps and that the deliverables are not compromised upon.
Check the cost - Evaluation ensures that the training programs are effective in improving work quality, employee behaviour, and attitude, and in developing new skills within the employee, all within a certain budget. Since companies globally are trying to cut costs without compromising on quality, evaluation aims at achieving the same with training.
Feedback to the trainer / training - Evaluation also acts as feedback to the trainer or facilitator and on the entire training process. Since evaluation assesses individuals at the level of their work, it becomes easier to understand the gaps in the training and the changes required in the training methodology.
Not many organisations believe in the process of evaluation, or at least do not have an evaluation system in place. Many organisations conduct training programs year after year only as a matter of faith, and not many have a firm evaluation mechanism in place. Only a few organisations, such as IBM and Motorola, have been found to have a firm evaluation mechanism in place.

The Way Forward

There are many methods and tools available for evaluating the effectiveness of training programs. Their usability depends on the kind of training program under evaluation. Generally, most organisations use the Kirkpatrick model for training evaluation, which evaluates training at four levels - reactions, learning, behaviour, and results.
After it was found that training costs organisations a lot of money and that none of these levels measured the return on investment of training, a fifth level, called ROI, was added to the training evaluation model (an extension most closely associated with Phillips).
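As a hedged illustration of how this fifth level is typically computed, the sketch below expresses ROI as net program benefits divided by program costs, as a percentage. The dollar figures are invented for illustration only, not taken from this article:

```python
# Illustrative sketch of the fifth-level ROI calculation.
# The dollar figures below are hypothetical, not from the article.

def training_roi(monetary_benefits: float, program_costs: float) -> float:
    """ROI as a percentage: net benefits divided by fully loaded program costs."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Example: a program costing $50,000 that yields $120,000 in measured
# benefits (e.g., reduced rework, higher sales) gives an ROI of 140%.
print(training_roi(120_000, 50_000))  # 140.0
```

The hard part in practice is not the arithmetic but isolating and monetizing the benefits attributable to training, which is why so few evaluations reach this level.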
Most evaluations confine themselves to reaction data; only a few collect learning data, fewer still measure and analyse the change in behaviour, and very few take it to the level of increase in business results. The evaluation tools, including the Kirkpatrick model, will be discussed in detail in other articles.
The process of examining a training program is called training evaluation. Training evaluation checks whether training has had the desired effect and ensures that candidates are able to apply their learning in their respective workplaces and regular work routines.
Techniques of Evaluation
The various methods of training evaluation are:
Observation
Questionnaire
Interview
Self-diaries
Self-recording of specific incidents

Types of evaluation
Evaluating the training (which includes monitoring) addresses how one determines whether the goals or objectives were met and what impact the training had on actual performance on the job.
Generally, there are four kinds of standard training evaluation:
Formative
Process
Outcome
Impact
Formative evaluation provides ongoing feedback to the curriculum designers and
developers to ensure that what is being created really meets the needs of the
intended audience.
Process evaluation provides information about what occurs during training. This
includes giving and receiving verbal feedback.
Outcome evaluation determines whether or not the desired results (e.g., what participants are doing) of applying new skills were achieved in the short term.
Impact evaluation determines how the results of the training affect the strategic goals.
Evaluation Methods
Evaluation methods can be either qualitative (e.g., interviews, case studies, focus groups) or quantitative (e.g., surveys, experiments).
Training evaluation usually includes a combination of these methods, reframing our thinking about evaluation in that measurements are aimed at different levels of a system.
Formative Evaluation
Formative evaluation may be defined as any combination of measurements obtained and judgments made before or during the implementation of materials, methods, or programs to control, assure, or improve the quality of program performance or delivery.
It answers such questions as: Are the goals and objectives suitable for the intended audience? Are the methods and materials appropriate to the event? Can the event be easily replicated?
Formative evaluation furnishes information for program developers and implementers.
It helps determine program planning and implementation activities in terms of (1) target population, (2) program organization, and (3) program location and timing.
It provides short-loop feedback about the quality and implementation of program activities and thus becomes critical to establishing, stabilizing, and upgrading programs.
Process Evaluation
Process evaluation answers the question, "What did you do?" It focuses on procedures and actions being used to produce results.
It monitors the quality of an event or project by various means. Traditionally, working as an onlooker, the evaluator describes this process and measures the results in oral and written reports.
Process evaluation is the most common type of training evaluation. It takes place during training delivery and at the end of the event.
Outcome Evaluation
Outcome evaluation answers the question, "What happened to the knowledge, attitudes, and behaviors of the intended population?" A training project typically produces both outcomes and impacts.
Outcome evaluation is a long-term undertaking. It answers the question, "What did the participants do?" Because outcomes refer to changes in behavior, outcome evaluation data are intended to measure what training participants were able to do at the end of training and what they actually did back on the job as a result of the training.
Impact Evaluation
Impact evaluation takes even longer than outcome evaluation, and you may never know for sure that your project helped bring about the change. Impacts occur through an accumulation of outcomes.

Five Stages of Training Evaluation

Our Training Evaluation Model sets the framework for developing instruments. It accommodates individual training programs based on the type of training, the appropriate evaluation method, and the best way to implement the evaluation. This model has five stages, illustrated in the Training Evaluation Model diagram. Each stage corresponds to specific data categories:
Describe the outputs. Outputs are descriptive data about the training programs and participants, including demographic data.
Pre-training assessment. This step uncovers the participants' past experience as well as current competencies, learning needs, and expected application of learning.
Post-assessment (reactions). This addresses participants' reactions to the training experience, for example, their learning environment, format and instructor methods, and general satisfaction.
Post-assessment (learning). This piece is a self-assessment of knowledge or skills gained and the participants' expected application of learning.
Follow-up. This process may include several methods to assess the outcomes and effect of training programs over time.
Our model draws from principles in Donald Kirkpatrick's four-level model, in which evaluation questions fall into the following categories:
Reaction: How was the training overall? What did participants like and dislike?
Learning: What knowledge and abilities did participants learn at the training?
Behavior: How have participants applied the skills they learned?
Results: What was the effect on the agency or organization?
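To make the four question categories concrete, here is a minimal sketch that represents each level together with its guiding question, paraphrased from the list above; the data structure itself is an illustrative assumption, not part of Kirkpatrick's model:

```python
# Minimal sketch: Kirkpatrick's four levels and the guiding question asked
# at each, paraphrased from the text above. The dictionary structure is an
# illustrative assumption, not part of the model itself.

KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How was the training overall? What did participants like and dislike?"),
    2: ("Learning", "What knowledge and abilities did participants learn at the training?"),
    3: ("Behavior", "How have participants applied the skills they learned?"),
    4: ("Results", "What was the effect on the agency or organization?"),
}

for level, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {question}")
```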

Approaches to Evaluation of Training


Commonly used approaches to educational evaluation have their roots in systematic approaches to the design of training. They are typified by the instructional system development (ISD) methodologies, which emerged in the USA in the 1950s and 1960s and are represented in the works of Gagné and Briggs (1974), Goldstein (1993), and Mager (1962). Evaluation is traditionally represented as the final stage in a systematic approach, with the purpose being to improve interventions (formative evaluation) or make a judgment about worth and effectiveness (summative evaluation) (Gustafson & Branch, 1997). More recent ISD models incorporate evaluation throughout the process (see, for example, Tennyson, 1999).
Six general approaches to educational evaluation can be identified (Bramley, 1991; Worthen & Sanders, 1987), as follows:
Goal-based evaluation
Goal-free evaluation
Responsive evaluation
Systems evaluation
Professional review
Quasi-legal
Goal-based and systems-based approaches are predominantly used in the evaluation of training (Phillips, 1991). Various frameworks for the evaluation of training programs have been proposed under the influence of these two approaches. The most influential framework has come from Kirkpatrick (Carnevale & Schulz, 1990; Dixon, 1996; Gordon, 1991; Phillips, 1991, 1997). Kirkpatrick's work generated a great deal of subsequent work (Bramley, 1996; Hamblin, 1974; Warr et al., 1978). Kirkpatrick's model (1959) follows the goal-based evaluation approach and is based on four simple questions that translate into four levels of evaluation. These four levels are widely known as reaction, learning, behavior, and results. On the other hand, under the systems approach, the most influential models include: the Context, Input, Process, Product (CIPP) model (Worthen & Sanders, 1987); the Training Validation System (TVS) approach (Fitz-Enz, 1994); and the Input, Process, Output, Outcome (IPO) model (Bushnell, 1990).
Table 1 presents a comparison of several systems-based models (CIPP, IPO, and TVS) with a goal-based model (Kirkpatrick's). Goal-based models (such as Kirkpatrick's four levels) may help practitioners think about the purposes of evaluation, ranging from purely technical to covertly political purposes. However, these models do not define the steps necessary to achieve these purposes and do not address the ways to utilize results to improve training. The difficulty for practitioners following such models lies in selecting and implementing appropriate evaluation methods (quantitative, qualitative, or mixed). Because of their apparent simplicity, "trainers jump feet first into using [such] model[s] without taking the time to assess their needs and resources or to determine how they'll apply the model and the results" (Bernthal, 1995, p. 41). Naturally, many organizations do not use the entire model, and training ends up being evaluated only at the reaction, or at best, at the learning level. As the level of evaluation goes up, the complexities involved increase. This may explain why only levels 1 and 2 are used.

Kirkpatrick (1959):
1. Reaction: to gather data on participants' reactions at the end of a training program.
2. Learning: to assess whether the learning objectives for the program are met.
3. Behavior: to assess whether job performance changes as a result of training.
4. Results: to assess costs vs. benefits of training programs, i.e., organizational impact in terms of reduced costs, improved quality of work, increased quantity of work, etc.

CIPP Model (1987):
1. Context: obtaining information about the situation to decide on educational needs and to establish program objectives.
2. Input: identifying educational strategies most likely to achieve the desired result.
3. Process: assessing the implementation of the educational program.
4. Product: gathering information regarding the results of the educational intervention to interpret its worth and merit.

IPO Model (1990):
1. Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training, etc.
2. Process: embraces planning, design, development, and delivery of training programs.
3. Output: gathering data resulting from the training interventions.
4. Outcomes: longer-term results associated with improvement in the corporation's bottom line, its profitability, competitiveness, etc.

TVS (1994):
1. Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance.
2. Intervention: identifying the reason for the existence of the gap between the present and desirable performance to find out if training is the solution to the problem.
3. Impact: evaluating the difference between the pre- and post-training data.
4. Value: measuring differences in quality, productivity, service, or sales, all of which can be expressed in terms of dollars.

Table 1. Goal-based and systems-based approaches to evaluation
On the other hand, systems-based models (e.g., CIPP, IPO, and TVS) seem to be more useful in terms of thinking about the overall context and situation, but they may not provide sufficient granularity. Systems-based models may not represent the dynamic interactions between the design and the evaluation of training. Few of these models provide detailed descriptions of the processes involved in each step. None provide tools for evaluation. Furthermore, these models do not address the collaborative process of evaluation, that is, the different roles and responsibilities that people may play during an evaluation process.

Current Practices in Evaluation of Training


Evaluation becomes more important when one considers that while American industries, for example, annually spend up to $100 billion on training and development, "not more than 10 per cent of these expenditures actually result in transfer to the job" (Baldwin & Ford, 1988, p. 63). This can be explained by reports indicating that not all training programs are consistently evaluated (Carnevale & Schulz, 1990). The American Society for Training and Development (ASTD) found that 45 percent of surveyed organizations only gauged trainees' reactions to courses (Bassi & van Buren, 1999). Overall, 93% of training courses are evaluated at Level One, 52% at Level Two, 31% at Level Three, and 28% at Level Four. These data clearly represent a bias in evaluation toward simple and superficial analysis.
This situation does not seem to be very different in Europe, as evident in two European Commission projects that have recently collected data exploring evaluation practices in Europe. The first is the Promoting Added Value through Evaluation (PAVE) project, which was funded under the European Commission's Leonardo da Vinci program in 1999 (Donoghue, 1999). The study examined a sample of organizations (small, medium, and large) which had signaled some commitment to training and evaluation by embarking on the UK's Investors in People (IiP) standard (Sadler-Smith et al., 1999). Analysis of the responses to surveys by these organizations suggested that formative and summative evaluations were not widely used. On the other hand, immediate and context (needs analysis) evaluations were more widely used. In the majority of cases, the responsibility for evaluation was that of managers, and the most frequently used methods were informal feedback and questionnaires. The majority of respondents claimed to assess the impact of training on employee performance (the learning level). Less than one-third of the respondents claimed to assess the impact of training on the organization (the results level). Operational reasons for evaluating training were cited more frequently than strategic ones. However, information derived from evaluations was used mostly for feedback to individuals, less to revise the training process, and rarely for return-on-investment decisions. Also, there were some statistically significant effects of organizational size on evaluation practice. Small firms are constrained in the extent to which they can evaluate their training by the internal resources of the firm, and managers are probably responsible for all aspects of training (Sadler-Smith et al., 1999).
The second study was conducted under the Advanced Design Approaches for Personalized Training-Interactive Tools (ADAPTIT) project. ADAPTIT is a European project within the Information Society Technologies programme that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardisation principles (Eseryel & Spector, 2000). In an effort to explore current approaches to instructional design, a series of surveys was conducted in a variety of sectors including transport, education, business, and industry in Europe. The participants were asked about activities that take place during the evaluation process, including the interim products produced, such as a list of revisions or an evaluation plan. In general, systematic and planned evaluation was not found in practice, nor was the distinction between formative and summative evaluation. Formative evaluation does not seem to take place explicitly, while summative evaluation is not fully carried out. The most common evaluation activity seems to be the evaluation of student performance (i.e., assessment), and there is not enough evidence that evaluation results of any type are used to revise the training design (Eseryel et al., 2001). It is important to note here that the majority of the participants expressed a need for evaluation software to support their practice.

Using Computers to Automate the Evaluation Process


For evaluations to have a substantive and pervasive impact on the development of training programs, internal resources and personnel such as training designers, trainers, training managers, and chief personnel will need to become increasingly involved as program evaluators. While using external evaluation specialists has validity advantages, time and budget constraints make this option highly impractical in most cases. Thus, the mentality that evaluation is strictly the province of experts often results in there being no evaluation at all. These considerations make a case for the convenience and cost-effectiveness of internal evaluations. However, the obvious concern is whether the internal team possesses the expertise required to conduct the evaluation, and if so, how the bias of internal evaluators can be minimized. Therefore, just as automated expert systems are being developed to guide the design of instructional programs (Spector et al., 1993), so might such systems be created for instructional evaluations. The lack of expertise of training designers in evaluation, pressures for increased productivity, and the need to standardize the evaluation process to ensure the effectiveness of training products are some of the elements that may motivate organizations to support evaluation with technology. Such systems might also help minimize the potential bias of internal evaluators.
Ross & Morrison (1997) suggest two categories of functions that automated
evaluation systems appear likely to incorporate. The first is automation of the
planning process via expert guidance; the second is the automation of the data
collection process.
For automated planning through expert guidance, an operational or procedural model can be used during the planning stages to assist the evaluator in planning an appropriate evaluation. The expert program will solicit key information from the evaluator and offer recommendations regarding possible strategies. Input information categories for the expert system include:
Purpose of evaluation (formative or summative)
Type of evaluation objectives (cognitive, affective, behavioral, impact)
Level of evaluation (reaction, learning, behavior, organizational impact)
Type of instructional objectives (declarative knowledge, procedural learning, attitudes)
Type of instructional delivery (classroom-based, technology-based, mixed)
Size and type of participant groups (individual, small group, whole group)
Based on this input, an expert system can provide guidance on possible evaluation
design orientations, appropriate collection methods, data analysis techniques,
reporting formats, and dissemination strategies. Such expert guidance can be in the
form of flexible general strategies and guidelines (weak advising approach). Given
the complexities associated with the nature of evaluation, a weak advising
approach such as this is more appropriate than a strong approach that would
replace the human decision maker in the process. Indeed, weak advising systems
that supplement rather than replace human expertise have generally been more
successful when complex procedures and processes are involved (Spector et al.,
1993).
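A minimal sketch of such a weak advising system is shown below: it maps the input categories listed above to suggested (not prescribed) methods and leaves the decision to the human evaluator. The rule base and category values are hypothetical placeholders invented for illustration; this is not the ADAPTIT design or the system Ross & Morrison describe:

```python
# Hypothetical sketch of a "weak advising" evaluation planner: it maps the
# input categories described in the text to suggested (not prescribed)
# data-collection methods. The rules are illustrative assumptions, not a
# published rule base.

from dataclasses import dataclass

@dataclass
class EvaluationContext:
    purpose: str     # "formative" or "summative"
    level: str       # "reaction", "learning", "behavior", "impact"
    delivery: str    # "classroom", "technology", "mixed"
    group_size: str  # "individual", "small", "whole"

def advise(ctx: EvaluationContext) -> list:
    """Return suggested data-collection methods for the evaluator to review."""
    suggestions = []
    if ctx.level == "reaction":
        suggestions.append("end-of-course questionnaire")
    elif ctx.level == "learning":
        suggestions.append("pre/post knowledge test")
    elif ctx.level == "behavior":
        suggestions.append("on-the-job observation and manager interviews")
    elif ctx.level == "impact":
        suggestions.append("business-results review (costs, quality, sales)")
    if ctx.purpose == "formative":
        suggestions.append("short-loop feedback during delivery")
    if ctx.delivery == "technology":
        suggestions.append("automated interaction logging")
    return suggestions

print(advise(EvaluationContext("formative", "learning", "technology", "small")))
# ['pre/post knowledge test', 'short-loop feedback during delivery',
#  'automated interaction logging']
```

Note that the function returns suggestions rather than executing a plan; that restraint is precisely what distinguishes a weak advising approach from a strong one that would replace the human decision maker.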
Such a system may also embed automated data collection functions for increased efficiency. The functionality of automated data collection systems may involve intelligent test scoring of procedural and declarative knowledge, automation of individual profile interpretations, and intelligent advice during the process of learning (Bunderson et al., 1989). These applications can provide an increased ability to diagnose the strengths and weaknesses of the training program in producing the desired outcomes. Especially for the purposes of formative evaluation, this means that the training program can be dynamically and continuously improved as it is being designed.
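As an illustrative sketch of this data-collection side (the objective names, scores, and the 0.2 gain threshold below are all invented for illustration, not drawn from the systems cited), automated scoring of pre- and post-assessments per learning objective supports exactly this kind of continuous diagnosis:

```python
# Illustrative sketch: automated scoring of pre- and post-training tests per
# learning objective, flagging objectives where the program underperforms.
# Objective names, scores, and the 0.2 gain threshold are invented.

pre_scores  = {"obj_safety": 0.45, "obj_reporting": 0.60, "obj_tools": 0.50}
post_scores = {"obj_safety": 0.85, "obj_reporting": 0.65, "obj_tools": 0.90}

for objective, pre in pre_scores.items():
    gain = post_scores[objective] - pre
    status = "OK" if gain >= 0.2 else "REVISE MATERIALS"
    print(f"{objective}: gain {gain:+.2f} -> {status}")
# obj_safety: gain +0.40 -> OK
# obj_reporting: gain +0.05 -> REVISE MATERIALS
# obj_tools: gain +0.40 -> OK
```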
Automated evaluation planning and automated data collection systems embedded in a generic instructional design tool may be an efficient and integrated solution for training organizations. In such a system it will also be possible to provide advice on revising the training materials based on the evaluation feedback, since evaluation data, individual performance data, and revision items can be tagged to the learning objects in a training program. The ADAPTIT instructional design tool is one of the systems that provide such an integrated solution for training organizations (Eseryel et al., 2001).

Conclusion
The different approaches to the evaluation of training discussed herein indicate that the activities involved in the evaluation of training are complex and not always well-structured. Since evaluation activities in training situations involve multiple goals associated with multiple levels, evaluation should perhaps be viewed as a collaborative activity between training designers, training managers, trainers, floor managers, and possibly others.
There is a need for a unifying model for evaluation theory, research, and practice that will account for the collaborative nature of, and complexities involved in, the evaluation of training. None of the available models for training evaluation seem to account for these two aspects of evaluation. Existing models fall short in comprehensiveness, and they fail to provide tools that guide organizations in their evaluation systems and procedures. Not surprisingly, organizations are experiencing problems with respect to developing consistent evaluation approaches. Only a small percentage of organizations succeed in establishing a sound evaluation process that feeds back into the training design process. Evaluation activities are often limited to reaction sheets and student testing without proper revision of training materials based on evaluation results. Perhaps a lack of experience in evaluation is one of the reasons for not consistently evaluating. In this case, the organization may consider hiring an external evaluator, but that will be costly and time consuming. Considering the need for the use of internal resources and personnel in organizations, expert system technology can be useful in providing expert support and guidance and in increasing the power and efficiency of evaluation. Such expert systems can be used by external evaluators as well.
Strong, completely automated systems offer apparent advantages, but their development and dissemination lag behind their conceptualization. Future research needs to focus on the barriers to the evaluation of training, how training is being evaluated and integrated with the training design, how the collaborative process of evaluation is being managed, and how these processes may be assisted. This will be helpful in guiding efforts toward both a unifying theory of evaluation and the development of automated evaluation systems.
