Definition of Training:
Dale S. Beach defines training as the organized procedure by which people learn
knowledge and/or skill for a definite purpose. Training refers to the teaching and
learning activities carried on for the primary purpose of helping members of an
organization acquire and apply the knowledge, skills, abilities, and attitudes needed
for a particular job and organization.
According to Edwin Flippo, training is the act of increasing the skills of an employee
for doing a particular job.
In order to cope with these complexities, training has become mandatory.
3. Human relations:
Every management has to maintain good human relations, and this has made
training one of the basic means of dealing with human problems.
4. To match employee specifications with the job requirements and
organizational needs:
An employee's specifications may not exactly match the requirements of the job and
the organization, irrespective of past experience and skills. There is always a gap
between an employee's present specifications and the organization's requirements.
Training is required to fill this gap.
5. Change in the job assignment:
Training is also necessary when an existing employee is promoted to a higher level
or transferred to another department. Training is also required to equip old
employees with new techniques and technologies.
Importance of Training:
Training of employees and managers is absolutely essential in this changing
environment. It is an important activity of HRD which helps in improving the
competency of employees. Training gives many benefits to employees, such as
improvement in efficiency and effectiveness, development of self-confidence, and
assistance in self-management.
The stability and progress of an organization always depend on the training imparted
to its employees. Training becomes mandatory at each and every step of
expansion and diversification. Only training can improve quality and reduce
wastage to a minimum. Training and development are also essential for adapting
to a changing environment.
Types of Training:
Various types of training can be given to employees, such as induction training,
refresher training, on-the-job training, vestibule training, and training for promotions.
Some of the commonly used training programs are listed below:
1. Induction training:
Also known as orientation training, this is given to new recruits in order to
familiarize them with the internal environment of an organization. It helps employees
understand the procedures, code of conduct, and policies existing in that organization.
2. Job instruction training:
This training provides an overview of the job, and experienced trainers
demonstrate the entire job. Additional training is offered to employees after
evaluating their performance, if necessary.
3. Vestibule training:
It is training on the actual work to be done by an employee, but conducted away
from the workplace.
4. Refresher training:
This type of training is offered in order to incorporate the latest developments in a
particular field. It is imparted to upgrade the skills of employees, and it can also be
used to prepare an employee for promotion.
5. Apprenticeship training:
An apprentice is a worker who spends a prescribed period of time learning under a supervisor.
Evaluation Meaning
Evaluation is an integral part of most instructional design (ID) models. Evaluation
tools and methodologies help determine the effectiveness of instructional
interventions. Despite its importance, there is evidence that evaluations of training
programs are often inconsistent or missing (Carnevale & Schulz, 1990; Holcomb,
1993; McMahon & Carter, 1990; Rossi et al., 1979). Possible explanations for
inadequate evaluations include: insufficient budget allocated; insufficient time
allocated; lack of expertise; blind trust in training solutions; or lack of methods and
tools (see, for example, McEvoy & Buller, 1990).
Part of the explanation may be that the task of evaluation is complex in itself.
Evaluating training interventions with regard to learning, transfer, and
organizational impact involves a number of complexity factors. These complexity
factors are associated with the dynamic and ongoing interactions of the various
dimensions and attributes of organizational and training goals, trainees, training
situations, and instructional technologies.
Evaluation goals involve multiple purposes at different levels. These purposes
include evaluation of student learning, evaluation of instructional materials,
transfer of training, return on investment, and so on. Attaining these multiple
purposes may require the collaboration of different people in different parts of an
organization. Furthermore, not all goals may be well-defined and some may
change.
Different approaches to the evaluation of training, indicating how the complexity
factors associated with evaluation are addressed, are presented below, together with
suggestions on how technology can be used to support this process. In the following
section, different approaches to evaluation and associated models are discussed.
Next, recent studies concerning evaluation practice are presented. In the final
section, opportunities for automated evaluation systems are discussed. The article
concludes with recommendations for further research.
Evaluation involves the assessment of the effectiveness of training
programs. This assessment is done by collecting data on whether the participants
were satisfied with the deliverables of the training program, whether they learned
something from the training, and whether they are able to apply those skills at their
workplace. There are different tools for the assessment of a training program,
depending upon the kind of training conducted.
There are many methods and tools available for evaluating the effectiveness of
training programs. Their usability depends on the kind of training program under
evaluation. Most organisations use the Kirkpatrick model for training evaluation,
which evaluates training at four levels: reactions, learning, behaviour, and results.
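As a rough illustration of how data for the four Kirkpatrick levels might be organized, the following Python sketch defines a simple record and aggregates it. The field names, rating scales, and sample figures are invented for illustration; they are not part of the Kirkpatrick model itself or of any standard tool.

from dataclasses import dataclass

@dataclass
class KirkpatrickEvaluation:
    # Hypothetical record holding measures for the four Kirkpatrick levels.
    reaction: list   # Level 1: satisfaction ratings on a 1-5 scale
    learning: list   # Level 2: post-test scores out of 100
    behavior: list   # Level 3: whether each trainee was observed applying the skills
    results: dict    # Level 4: business metrics, e.g. change in defect rate

    def summary(self):
        # Aggregate each level into a single indicative number.
        return {
            "avg_reaction": sum(self.reaction) / len(self.reaction),
            "avg_learning": sum(self.learning) / len(self.learning),
            "pct_applying": 100 * sum(self.behavior) / len(self.behavior),
            **self.results,
        }

# Example usage with made-up data:
ev = KirkpatrickEvaluation(
    reaction=[4, 5, 3, 4],
    learning=[78.0, 85.0, 90.0],
    behavior=[True, True, False],
    results={"defect_rate_change_pct": -12.5},
)
print(ev.summary())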
After it was found that training costs organisations a lot of money and that no
existing evaluation level measured the return on investment of training, a fifth
level, called ROI, was added to the training evaluation model.
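The ROI level is commonly computed as net program benefits divided by program costs, expressed as a percentage. A minimal sketch in Python, with purely illustrative figures:

def training_roi(benefits, costs):
    # ROI (%) = (net benefits / costs) * 100
    return (benefits - costs) / costs * 100

# Illustrative figures only: $120,000 in measured benefits against
# $80,000 in total training costs gives an ROI of 50%.
print(training_roi(120_000, 80_000))  # 50.0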
Most evaluations confine themselves to reaction data; only a few collect
learning data, fewer still measure and analyse the change in behaviour, and
very few take it to the level of increase in business results. The evaluation tools,
including the Kirkpatrick model, are discussed in detail in other articles.
The process of examining a training program is called training evaluation. Training
evaluation checks whether training has had the desired effect. It ensures that
candidates are able to implement their learning in their respective workplaces, or
in their regular work routines.
Techniques of Evaluation
The various methods of training evaluation are:
Observation
Questionnaire (see the sketch after this list)
Interview
Self diaries
Self recording of specific incidents
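Of these techniques, the questionnaire is the easiest to tabulate automatically. The sketch below aggregates hypothetical Likert-scale (1-5) responses from a reaction questionnaire; the questions and ratings are invented for illustration.

from statistics import mean

# Hypothetical 1-5 Likert responses per questionnaire item
responses = {
    "The objectives were clear": [5, 4, 4, 3, 5],
    "The trainer was effective": [4, 4, 5, 4, 3],
    "I can apply this at work": [3, 4, 2, 4, 3],
}

for question, ratings in responses.items():
    print(f"{question}: mean={mean(ratings):.2f}, n={len(ratings)}")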
Types of evaluation
Evaluating the training (which includes monitoring) addresses how one determines
whether the goals or objectives were met and what impact the training had on
actual performance on the job.
Generally, there are four kinds of standard training evaluation:
Formative
Process
Outcome
Impact
Formative evaluation provides ongoing feedback to the curriculum designers and
developers to ensure that what is being created really meets the needs of the
intended audience.
Process evaluation provides information about what occurs during training. This
includes giving and receiving verbal feedback.
Outcome evaluation determines whether or not the desired results (e.g., what
participants are doing) of applying new skills were achieved in the short term.
Impact evaluation determines how the results of the training affect the
organization's strategic goals.
Evaluation Methods
Evaluation methods can be either qualitative (e.g., interviews, case studies, focus
groups) or quantitative (e.g., surveys, experiments).
Training evaluation usually includes a combination of these methods and reframes
our thinking about evaluation in that measurements are aimed at different levels of
a system.
Formative Evaluation
Formative evaluation may be defined as any combination of measurements
obtained and judgments made before or during the implementation of materials,
in order to improve them.

Kirkpatrick's well-known framework, in contrast, evaluates a training program by
asking four simple questions:
Reaction: How did participants react to the training?
Learning: What knowledge and abilities did participants learn at the training?
Behavior: How have participants applied the skills they learned?
Results: What was the effect on the agency or organization?
Kirkpatrick's (1959) work generated a great deal of subsequent work (Bramley,
1996; Hamblin, 1974; Warr et al., 1978). Kirkpatrick's model follows the
goal-based evaluation approach, and the four questions above translate into four
levels of evaluation, widely known as reaction, learning, behavior, and results.
On the other hand, under the systems approach, the most influential models
include: Context, Input, Process, Product (CIPP) Model (Worthen & Sanders,
1987); Training Validation System (TVS) Approach (Fitz-Enz, 1994); and Input,
Process, Output, Outcome (IPO) Model (Bushnell, 1990).
Table 1 presents a comparison of several systems-based models (CIPP, IPO, and
TVS) with a goal-based model (Kirkpatrick's). Goal-based models (such as
Kirkpatrick's four levels) may help practitioners think about the purposes of
evaluation, ranging from purely technical to covertly political purposes. However,
these models do not define the steps necessary to achieve those purposes and do not
address ways to utilize results to improve training. The difficulty for
practitioners following such models lies in selecting and implementing appropriate
evaluation methods (quantitative, qualitative, or mixed). Because of their apparent
simplicity, "trainers jump feet first into using [such] model[s] without taking the
time to assess their needs and resources or to determine how they'll apply the
model and the results" (Bernthal, 1995, p. 41). Naturally, many organizations do
not use the entire model, and training ends up being evaluated only at the reaction
level or, at best, at the learning level. As the level of evaluation goes up, the
complexities involved increase. This may explain why only levels 1 and 2 are used.
Kirkpatrick (1959):
1. Reaction: to gather data on participants' reactions at the end of a training program
2. Learning: to assess whether the learning objectives for the program are met
3. Behavior: to assess whether job performance changes as a result of training
4. Results: to assess the organizational impact of training in terms of reduced costs, improved quality of work, increased quantity of work, etc.

CIPP (1987):
1. Context: obtaining information about the situation to decide on educational needs and to establish program objectives
2. Input: identifying educational strategies most likely to achieve the desired result
3. Process: assessing the implementation of the educational program
4. Product: gathering information regarding the results of the training to interpret its worth and merit

IPO (1990):
1. Input: evaluation of system performance indicators such as trainee qualifications, availability of materials, appropriateness of training, etc.
2. Process: planning, design, development, and delivery of training programs
3. Output: gathering data resulting from the training interventions
4. Outcome: longer-term results associated with improvement in the corporation's bottom line: its profitability, competitiveness, etc.

TVS (1994):
1. Situation: collecting pre-training data to ascertain current levels of performance within the organization and defining a desirable level of future performance
2. Intervention: identifying the reason for the gap between the present and desirable performance to find out if training is the solution to the problem
3. Impact: evaluating the difference between the pre- and post-training data
4. Value: measuring differences in quality, productivity, service, or sales, all of which can be expressed in terms of dollars

Table 1. Goal-based and systems-based approaches to evaluation
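For quick reference, the comparison in Table 1 can also be captured as a small data structure; the sketch below encodes only the level names of each model.

# Level names of the goal-based and systems-based models compared in Table 1
evaluation_models = {
    "Kirkpatrick (1959)": ["Reaction", "Learning", "Behavior", "Results"],
    "CIPP (1987)": ["Context", "Input", "Process", "Product"],
    "IPO (1990)": ["Input", "Process", "Output", "Outcome"],
    "TVS (1994)": ["Situation", "Intervention", "Impact", "Value"],
}

for model, levels in evaluation_models.items():
    print(f"{model}: {' -> '.join(levels)}")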
On the other hand, systems-based models (e.g., CIPP, IPO, and TVS) seem to be
more useful in terms of thinking about the overall context and situation but they
may not provide sufficient granularity. Systems-based models may not represent
the dynamic interactions between the design and the evaluation of training. Few of
these models provide detailed descriptions of the processes involved in each step.
None provide tools for evaluation. Furthermore, these models do not address the
collaborative process of evaluation, that is, the different roles and responsibilities
that people may play during an evaluation process.
Recent studies of evaluation practice indicate that evaluation is often not
systematically integrated with the training design (Eseryel et al., 2001). It is
important to note that the majority of the participants in these studies expressed a
need for evaluation software to support their practice.
In an automated evaluation system, evaluation data, individual performance data,
and revision items can be tagged to the learning objects in a training program. The
ADAPT IT instructional design tool is one of the systems that provide such an
integrated solution for training organizations (Eseryel et al., 2001).
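As a hedged sketch of what such tagging might look like, the code below attaches evaluation data, performance data, and revision items to a learning object using a simple in-memory structure. ADAPT IT's actual data model is not described in the text, so the structure here is purely illustrative.

from dataclasses import dataclass, field

@dataclass
class LearningObject:
    # A unit of a training program to which evaluation data can be tagged.
    title: str
    evaluation_data: list = field(default_factory=list)    # e.g., reaction comments
    performance_data: list = field(default_factory=list)   # e.g., test scores
    revision_items: list = field(default_factory=list)     # e.g., change requests

module = LearningObject(title="Safety procedures, unit 1")
module.evaluation_data.append("Examples were unclear in section 2")
module.performance_data.append(62.0)
module.revision_items.append("Add a worked example to section 2")
print(module)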
Conclusion
Different approaches to evaluation of training discussed herein indicate that the
activities involved in evaluation of training are complex and not always
well-structured. Since evaluation activities in training situations involve multiple goals
associated with multiple levels, evaluation should perhaps be viewed as a
collaborative activity between training designers, training managers, trainers, floor
managers, and possibly others.
There is a need for a unifying model for evaluation theory, research, and practice
that will account for the collaborative nature of and complexities involved in the
evaluation of training. None of the available models for training evaluation seem to
account for these two aspects of evaluation. Existing models fall short in
comprehensiveness and they fail to provide tools that guide organizations in their
evaluation systems and procedures. Not surprisingly, organizations are
experiencing problems with respect to developing consistent evaluation
approaches. Only a small percentage of organizations succeed in establishing a
sound evaluation process that feeds back into the training design process.
Evaluation activities are limited to reaction sheets and student testing without
proper revision of training materials based on evaluation results. Perhaps lack of
experience in evaluation is one of the reasons for not consistently evaluating. In
this case, the organization may consider hiring an external evaluator, but that will
be costly and time-consuming. Considering the need to use internal resources and
personnel in organizations, expert system technology can be useful in providing
expert support and guidance, increasing the power and efficiency of evaluation.
Such expert systems can be used by external evaluators as well.
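As a toy illustration of the kind of guidance an expert system could provide, the rule-based sketch below maps an evaluation goal to candidate methods drawn from the techniques listed earlier; the rules and method names are invented and far simpler than a real system would require.

def recommend_methods(goal, budget_low):
    # Toy rule base mapping an evaluation goal to candidate methods.
    rules = {
        "reaction": ["questionnaire"],
        "learning": ["pre/post testing", "interview"],
        "behavior": ["observation", "self diaries"],
        "results": ["business metrics analysis", "ROI calculation"],
    }
    methods = rules.get(goal, [])
    # Prefer cheap, easy-to-administer methods when the budget is tight.
    if budget_low:
        cheap = [m for m in methods if m in ("questionnaire", "self diaries")]
        methods = cheap or methods[:1]
    return methods

print(recommend_methods("behavior", budget_low=True))  # ['self diaries']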
Fully automated evaluation systems offer apparent advantages, but their
development and dissemination lag behind their conceptualization. Future research
needs to focus on the barriers to evaluation of training, on how training is being
evaluated and integrated with the training design, on how the collaborative process
of evaluation is being managed, and on how these activities may be assisted. This
will be helpful in guiding efforts both toward a unifying theory of evaluation and
toward the development of automated evaluation systems.