Training As A Development Tool
Cecilia Otero
September 1997
Research and Reference Services Project is operated by the Academy for Educational Development
TRAINING AS A DEVELOPMENT TOOL
TABLE OF CONTENTS
Introduction
Highlights
Methodology
THEORY
Evaluation Models
- Donald Kirkpatrick
- Robert Brinkerhoff
- Human and Educational Resources Network Support Project
Training Indicators
- Criteria for developing indicators
Comments
PRACTICE
- Economic Growth
Privatization of Industries and Agriculture in Tajikistan
USAID/Central Asia
- Health
Improved Quality with Equity in Health, USAID/El Salvador
- Education
Increased Girls’ Basic Education, USAID/Morocco
- Training
Enhancing the Human Resource Base of Nongovernmental
and Governmental Organizations, USAID/Namibia
Resources
Training-Related Internet Sites
Acknowledgements
I am indebted to many individuals for their assistance and substantive input in this
study. I would like to express my deep appreciation to John Jessup,
USAID/G/HCD, whose invaluable expertise, advice, and encouragement guided
the research and writing of the study. Wendy Putman and Stacey Cramp, RRS,
provided their expert and careful editing skills. The RRS review team, composed of
Nick Mills, Dana Wichterman, and Anne Langhaug, made thoughtful
suggestions on the overall format and content. The names and affiliations of the
training specialists who collaborated in the preparation of the case studies are
listed under the section Training in Support of Strategic Objectives on page 43.
ACRONYMS
TABLES
Introduction

In 1993 President Clinton created the National Performance Review (NPR) to reform
the practices and procedures employed by the federal government. With Vice President
Gore as its leader, the NPR initiative led to the Government Performance and Results
Act of 1993, which requires federal agencies to develop strategic plans, prepare annual
performance plans, and submit performance reviews. The United States Agency for
International Development (USAID) was selected as a "Reinvention Laboratory"
within NPR’s initiatives.
In light of this directive, the Agency has reassessed the role of training in the
context of reengineered systems, i.e., to expand the function of training beyond
individual achievement to organizational improvement. This task involves a significant
shift from past training practices and requires developing appropriate design,
implementation, and evaluation tools to integrate training programs strategically in
support of other activities.
The purpose of this study is to examine how reengineering concepts and principles
apply to the training function and to provide trainers with approaches, ideas, and
strategies to design quality integrated programs, as well as monitor and measure
results. Five case studies representing different sector areas—economic growth,
democracy and governance, health, education, and training—are also included as
examples of reengineered training designs.
Highlights
This study is primarily intended to serve as a guide to USAID strategic objective (SO) teams
and training specialists as they confront the challenge of applying reengineering concepts and
integrating training activities into Missions’ strategic objectives. Its aim is to assist field
training staff in clarifying their function within SO teams; reiterate the crucial role they play
in demonstrating the link between training and strategic objectives; and examine useful
techniques and strategies that they can adapt to their own programs.
The Best Practices modules developed by the Human Resource Development Assistance
Project provide a detailed explanation, with a wide array of examples and illustrations, of the
major components and activities that comprise strategic training. Thus, this paper focuses
primarily on three topics: a discussion of training evaluation models, mechanisms for
developing training indicators as well as monitoring and measurement tools, and sector case
studies presented as examples of innovative training programs designed strategically. Every
effort was made in discussing each of these topics to bridge the gap between the theoretical
concepts stated in the reengineering initiatives and the reality experienced by those charged
with the implementation of reengineered training systems.
The first part of the study, entitled Theory, comprises four sections:
Evaluation Models synthesizes three commonly used training evaluation models developed by
Donald Kirkpatrick, Robert Brinkerhoff, and the USAID-funded HERNS project.
Training Indicators offers a general discussion of indicators and provides strategies and
recommendations for developing training indicators.
Additional Monitoring and Measurement Tools includes tools and mechanisms to isolate the
effect of training on performance improvement and calculate results in financial terms.
The second part of the study, Practice, presents five case studies of integrated training
programs designed strategically in support of other activities—economic growth, democracy
and governance, health, education, and training. The aim is to illustrate how reengineering
concepts and approaches are being applied in various sectors and avail training specialists of
the experiences and expertise of others. A discussion of training results is also presented, as
well as a description of TraiNet, a database system designed to record and report training
activities and results.
Methodology
This study was prepared as a response to a request from USAID’s Center for Human
Capacity Development of the Global Bureau to identify training case studies and develop
monitoring and evaluation tools. The research involved a review of key documents dealing
with USAID reengineering concepts and practices; a review of major evaluations of USAID-
sponsored participant training programs; consultation with several training and evaluation
specialists; and searches in education/training databases and Internet web pages.
The case studies were prepared with the assistance of the training officers in the respective
Missions or their training contractors who provided the data and approved the final versions.
While the extensive interaction with these individuals was a most rewarding experience, it
was extremely time-consuming and not an efficient way of identifying and collecting the
information required. Prior to the transition from project-level activity to strategic objectives,
a significant amount of information related to projects and documents could be retrieved
through the USAID bibliographic database. Activities implemented under reengineering no
longer have an identifier, i.e., project number, and most of the information generated remains
in the field. To address this situation, G/HCD has developed an Agency-wide information
system, The Training Results and Information Network, TraiNet, to be used by field or
stateside training staff. It provides a standard mechanism for consistent data input and
collection that responds to reengineering guidelines. We must underscore the importance of
updating and maintaining this database on a regular basis as it will become the central
repository for USAID training-related information. (See last section, Documenting Training
Practices, for a more detailed explanation of TraiNet).
Training in a Reengineered Context
The United States Agency for International Development has long held the belief that
developing the human resource base of countries is a critical element in promoting sustainable
development. The variety and richness of training programs funded since the early 1960s
throughout the world attests to this belief. Stated in general terms, the overriding purpose of
these programs was to upgrade the skills and knowledge of participants who were selected
based on their personal merit or leadership potential. An effort was made to promote the
participation of women, indigenous peoples, and other disadvantaged groups.
Through these programs, effective and useful methodologies and evaluation tools were
developed and refined. The lessons and guidance they provide prepared the groundwork for
the design and implementation of strategic programs and allowed training specialists to
formulate agency-wide norms. Thus, as we discuss strategic training, we should emphasize
that we are not discarding many of the important and necessary elements developed in the
traditional training methodologies. In a reengineered Agency, "We must enhance the
traditional approach by shifting our emphasis to a results-oriented approach to training." (Best
Practices: 3)
The concepts and initiatives set forth by reengineering have reshaped the way the Agency
approaches and views development work. The Strategic Framework mandates that all country
activities show linkage to SOs and to Agency goals and objectives. For the training function,
this means that plans must be linked to technical activities of results packages. Thus,
reengineering has forced training to become more rigorous in responding to customer needs,
selecting participants, and measuring results. Training outcomes will no longer be measured in terms
of the number of people trained or their satisfaction with the training they received, but by how
the training activity contributes to performance improvement and supports technical assistance
programs.
In redefining the role of training, the focus is on the functions, rather than the person, and
one must be prepared to address a totally different set of issues, such as: How did the
investment in training contribute to specific program outcomes or strategic goals? What
linkages can be established between training and the achievement of USAID’s strategies?
And what are the implications of these linkages in the approach to training?
The definition that we ascribe to training impact in a reengineered context must reflect the
new concepts being formulated. It will guide the principles and approaches we must follow in
a results-oriented training system and determine how all other aspects of the training process
are planned and managed from selection to evaluation. The forthcoming update of the
ADS 253 (Automated Directives System) provides a definition of training impact that focuses
on a functional approach:
Training professionals could argue that this definition is a restrictive, narrow one and excludes
the widely accepted notion that the purpose of training is to transfer knowledge. This may well
be a valid statement depending on the objectives of the training. However, if we are to view
training as a development tool used to achieve a strategic objective, we can no longer use the
concept of upgrading skills or imparting knowledge as the sole criterion for assessing its impact
or concluding that the training was a success. We must look beyond the individual attainment
and be able to assess, in quantifiable terms, how the investment in training contributed to
specific program outcomes. Thus, embedded in this definition is the concept that training does
not have an impact until the skills or knowledge acquired have been successfully applied in a
specified work situation and have resulted in a measurable improvement.
In The Learning Alliance, Robert Brinkerhoff asks the key question: "How will the training
design result in knowledge and behavior that lead to performance improvement that achieves
the goals of the organization?" The issues raised in this question, as well as in the definition
of training impact, illustrate the interconnection among the various components of a training
program. There is a sequential progression: Each phase builds on the previous one and
influences the decisions taken at the next level. Let us then briefly examine the implications
of these concepts at each of the phases of the training continuum, from needs assessment and
selection to evaluation.1
The individuals selected to receive training are those who will perform the jobs that will
contribute to organizational improvement.
1 See the Best Practices Guide and companion subguides (1996) for an in-depth discussion of how to
plan, manage, and monitor the various components of training in a reengineered context.
Training designs will spell out the specific results that the training is intended to have, and
questionnaires, surveys, and evaluation tools will be designed based on the expected results.
In the traditional approach, it was assumed that returned participants would apply the new
skills and knowledge acquired, achieve professional gains, and make contributions to their
communities and society at large. The specific results expected, however, were not always
articulated. Thus, evaluators were forced to seek out returned participants, identify their
accomplishments, and report whatever results they observed.
Reengineering guidelines and practices call for a greater level of accountability. The same
rigor that should guide the design and implementation phases of a training program needs to
be observed when documenting and reporting results. Under a results framework, regular
monitoring is conducted, and adjustments are made periodically at the design and
implementation levels. Success is measured in terms of performance deficits addressed and
documented improvements that directly link to the strategic objectives.
Identifying these intended results at the design stage assists all those involved in the training
effort: providers are given a clear idea of the larger goals behind their particular program;
participants benefit from a program with clear expectations, objectives, and defined
applicability to their work; and USAID and its contractors have clear benchmarks with which
to measure program results.
Mission training specialists play a pivotal role in this effort. They must thoroughly familiarize
themselves with USAID reengineering concepts and practices, particularly as they are applied
in their respective Missions. The challenge for them lies in restructuring their function and
gaining recognition from SO teams that human resource development is at the core of
achieving sustainable results in each of the strategic areas.
Training specialists are expected to assist SO teams in aligning training activities with specific
objectives; justify the need for a timely training intervention; work closely with partner
institutions to support continuous staff development and learning; monitor and measure
improvements; and report results. The knowledge, expertise, and background that they bring
to this endeavor will be in great demand and certainly tested. This new and challenging
function requires a shift from planning and managing training as a process to shaping and
improving performance in support of organizational goals; a shift from being training
managers to providing strategic input. Thus, the main purpose of this study is to provide these
specialists with practical and useful tools, techniques, or approaches for the effective
stewardship required to design quality programs.
Table I below summarizes the differences between traditional and reengineered training
practices discussed above.
Table I. Traditional and Reengineered Training Practices

Needs Assessment
- Traditional: General inventory of training needs was conducted.
- Reengineered: Needs that address specific gaps in job performance are identified.

Objectives
- Traditional: Training was the objective.
- Reengineered: Training is one of several development tools used to achieve a strategic objective.
- Traditional: Objectives were not linked to program goals; they were defined as learning results.
- Reengineered: Objectives show direct linkage to program goals.
- Traditional: To strengthen the organization or institution, or provide general institutional building.
- Reengineered: Strives to improve the individual and the organizational performance through the
application of new knowledge, skills, and attitudes.

Selection
- Traditional: Participants were selected based on individual merit, ability, or leadership qualities.
- Reengineered: Participants identified for training are those who will perform the jobs that will
contribute to the organizational improvement. A critical mass of people is selected for maximum
impact.

Design/Implementation
- Traditional: Historic preference for U.S.-based training.
- Reengineered: Choice of training, location, and duration should match real needs.
- Traditional: Training designs were based on the number of people trained or the needs of the
participants.
- Reengineered: Design is targeted and based on the need to upgrade the performance of the
institution. Training content shows linkage to strategic objectives.

Evaluation
- Traditional: Number of people trained was the indicator.
- Reengineered: SO teams identify training indicators prior to training, i.e., agree on the changes
that training will bring.
- Traditional: Evaluation was based in terms of outputs, such as number of people trained, or
inputs, such as number of courses offered.
- Reengineered: Training is evaluated in terms of results, i.e., the improvements participants have
on job performance or on the organization.
- Traditional: Quality of training was assessed based on participant satisfaction and individual
results achieved.
- Reengineered: Results are assessed in terms of customer needs. Evaluates changes in specific
performance areas, such as productivity, efficiency, safety, and quality of service.
- Traditional: Learning results and impact were not specified.
- Reengineered: Requires baseline data and targets. Indicates measurement of improvement and
results.
- Traditional: Training was the sole responsibility of the training office.
- Reengineered: Training specialists are integrated into SO teams and together they participate in
the planning, implementation, and monitoring of training.
- Traditional: Training specialists managed the training function and provided a specialized service.
- Reengineered: They become strategic partners. They assess needs, monitor progress, and report
results.
- Traditional: Partner organizations had little input in the planning of training activities.
- Reengineered: Customers (participants, supervisors) provide input and are directly involved in
the planning and implementation phases of all training components.
- Traditional: The benefit of training results to partner organizations was not always specified.
- Reengineered: Partner organizations are fully aware of the benefits derived from training their
staff.
- Traditional: Participant alone was responsible for applying new skills.
- Reengineered: Application of new skills is the responsibility of the customers as well.
Four core values have guided the Agency’s effort to restructure its operating systems: managing
for results, teamwork, customer focus, and empowerment and accountability. It is important to
analyze how we can use these values to guide our thinking and decision-making process; what
they mean in terms of planning and deciding training activities; and how they apply to the
various phases of the training process. Following is a brief description of the core values and
their application to training:
Managing for results - Means developing results-oriented strategic objectives and performance
indicators, routinely collecting and analyzing data to monitor program results, and using this
information to allocate resources.
For the training function, this means that objectives address the skills that need to be improved in
the workplace or organization. Mechanisms for collecting regular feedback allow for timely
changes in the design or implementation phases and provide an analytic base for measuring and
using results. Budget decisions are made based on results—on the actual improvements made.
Teamwork - Missions will establish strategic objective teams to design and manage their
programs. SO teams have the freedom and authority to plan their own activities and set goals.
In the training context, teams comprise training specialists, partners, customers, participants, or
beneficiaries, who develop the objectives and indicators and are in charge of monitoring and
conducting periodic evaluation activities.
Customer focus - The customer is involved in defining the activities that will best address their
needs. This means that Missions must include customers as part of the SO team, and they must
participate in all phases of program development.
In planning training, the customer is directly involved in assessing the organization’s performance
gaps; identifying the skills that need to be upgraded or acquired to address these gaps; selecting
the group of employees that need to be trained; deciding on the most appropriate training design;
and participating in the monitoring and feedback process.
This level of participation may be deemed too involved and time consuming. Supervisors, in
particular, may feel that it detracts from other more pressing managerial functions. But the
importance of direct customer involvement and participation cannot be overemphasized. Building
the human resource base not only allows the customer and the organization to keep pace with
change, but it also provides a significant source of competitive advantage.
Empowerment and accountability - USAID/Washington will set directions and provide guidelines,
but field Missions will decide how to implement them. Training teams will have the
responsibility for allocating, managing, monitoring, and reporting on the resources expended.
The following table presents a graphic illustration of how the four core values are applied at each
of the major phases of the training process.
Core Values Applied Across the Training Process

Managing for Results
- Design/Planning: Training objectives are defined in terms of performance gaps. Training
indicators have been established.
- Implementation: Training is implemented with a focus on skill improvement. Timely adjustments
to design and implementation are based on regular feedback.
- Monitoring & Evaluation: Results are reported in quantifiable terms and are linked to SOs.
Results are linked to budget decisions. Reduced reporting requirements facilitate focus on results.

Teamwork
- Design/Planning: TEAM* develops and agrees on objectives and indicators.
- Implementation: TEAM input is sought to implement and manage the project.
- Monitoring & Evaluation: M&E process and activities are conducted by TEAM at regular
intervals.

Customer Focus
- Design/Planning: Customer defines needs, i.e., performance gaps that training can address.
- Implementation: Customer is involved in the implementation process and provides regular input.
- Monitoring & Evaluation: Customer experiences improvement in individual job and in
organizational performance.

Empowerment and Accountability
- Design/Planning: Training resources are allocated by SO teams.
- Implementation: Implementation of training activities is delegated to TEAM.
- Monitoring & Evaluation: TEAM has responsibility for the M&E process. SO team reports
results in the R4.

*TEAM refers to the SO team, training specialists, partners, customers, participants, or
beneficiaries.
If we are to rethink the role of training in a strategic context and assess its impact beyond
individual attainment, then the tools we use to measure results must reflect the new practices
being implemented under reengineering. How do we measure the effectiveness of training in
terms of results rather than inputs? How do we know if training is the appropriate medium for
meeting the performance gaps identified? Training professionals confront these issues
increasingly as they are pressured both to redefine the training function and justify their
investments. Regular monitoring and evaluation practices are not only the best means of
providing such justification but also an essential component of all training designs.
This section summarizes three training evaluation models designed by Donald Kirkpatrick,
Robert Brinkerhoff, and the USAID-funded HERNS project (Human and Educational
Resources Network Support). It is beyond the scope of this study to provide detailed
evaluation tools or techniques for the various levels of assessment described in each of the
models. The intent is rather to present criteria and guidelines useful in designing quality
monitoring mechanisms and to synthesize the issues—from a conceptual point of
view—related to measuring training effectiveness and efficiency.
It should be noted, however, that only the HERNS model was specifically designed to
evaluate USAID training programs. The concept of preparing change agents—individuals who
exert influence beyond the workplace, at the program level or in their communities and
society at large—is used by the HERNS model to measure impact at the highest level.
While the Kirkpatrick and Brinkerhoff models do not measure results beyond the workplace,
they do provide practical and valuable strategies, insights, explanations, or solutions
applicable to development training programs. Given the numerous differences in the
programs, structures, and work environments in which SO teams operate, a single monitoring
and evaluation method will most likely not address all the issues related to a training
event. A sounder and more reliable strategy is to establish monitoring mechanisms that
permit training staff to collect, analyze, and report results on a regular basis; in other words,
to let the strengths and uses that each model offers reinforce the various needs of the
training program as best suited. The degree of effort expended at each level of
evaluation naturally depends on the needs, requirements, and resources of the program.
Donald Kirkpatrick outlined the four levels of his widely used evaluation model in a series of
articles published in 1959. Often acclaimed for its practicality and simplicity, Kirkpatrick’s
model has certainly withstood the test of time. In the decades since the articles were
published, training and evaluation professionals have frequently quoted, applied, modified,
and expanded this model. And despite the numerous changes and innovations that training
concepts and designs have undergone over the years, this model continues to be a useful and
effective tool.
In 1994, Kirkpatrick published Evaluating Training Programs: The Four Levels, in which he
explains the concepts put forth in his series of articles and provides techniques along with a
set of guidelines for evaluating each level. The second part of the book provides case studies
of organizations that have used evaluation at different levels.
The four levels of the model are:

Level 1 - Reaction
Level 2 - Learning
Level 3 - Behavior
Level 4 - Results
The author cautions that each level is important and none should be skipped in favor of the
level that is deemed most useful. Each level of evaluation provides essential information for
assessing the effectiveness of the following level. The motivation and interest of the trainees
(Level 1 - Reaction), for instance, has a direct influence on the level of learning that takes
place (Level 2 - Learning). Likewise, the amount of learning that takes place influences the
behavior (Level 3) of the person, without which there would be no results (Level 4). The
higher the level, the more involved, costly, and challenging the process becomes to
accomplish and assess.
Reaction - Level 1
Reaction measures the level of trainee satisfaction as to the location, training content, or
effectiveness of the trainer. If the trainees do not have a positive reaction to these, they will
not respond favorably to the material presented or skills taught. Thus, it is crucial to assess
the level of satisfaction of the participants at regular intervals during the training and make
the necessary adjustments based on the feedback received. The motivation and interest of the
trainees has a direct influence on the amount and level of learning that takes place.
A short, yet well-constructed questionnaire should provide the necessary information to assess
Level 1 results. This is a relatively easy task, and one should aim to get a 100 percent
response. A positive response will not guarantee that participants will apply the content of the
training in the workplace, but a negative reaction, most likely, will prevent trainees from
going beyond this level.
Learning - Level 2
Kirkpatrick contends that learning takes place when attitudes are changed, knowledge is
increased, or skills are improved. Learning to use a new software program, for instance,
increases the skill level, while a program aimed at enhancing male involvement in family
planning would deal with cultural differences and seek to change attitudes. An evaluation tool
aimed at measuring learning must take into account the specific objectives of the training.
Kirkpatrick makes the point that when evaluating learning, we are also measuring the
effectiveness of the trainers. If the results at this level are not satisfactory, we may also need
to assess the training venues, as well as the expertise and training skills of the staff.
Behavior - Level 3
This level tests whether participation in training has resulted in changes in behavior.
Participants may be asked to provide specific examples of how training has affected their job
performance. Kirkpatrick emphasizes the importance of evaluating levels one and two before
attempting to measure changes in behavior.
For change in behavior to occur, two key conditions must be present: The person must have
an opportunity to put into practice the skills acquired and must encounter a favorable work
climate. The training program can teach the necessary skills and provide an environment
conducive to change, but providing the right climate is the responsibility of the participant’s
immediate supervisor. If learning took place, but no changes in behavior are observed, it may
be that the person does not have a supportive environment, or work conditions prevent
him/her from applying the new skills. Likewise, if the individual has shown improvement in
job performance, but no improvement is evident in the organization, then the climate of the
organization should be analyzed to assess the causes, rather than the training. All these are
important variables to consider before deciding whether or not the training has produced the
expected results at this level.
Results - Level 4
The first three levels assess the degree to which participants are pleased with the program,
acquire knowledge, and apply it to their jobs. Level 4 attempts to measure the final results
that took place due to participation in the training.
This is the most difficult level to evaluate and requires considerable time, skill, and resources.
At this level we measure the benefits to the organization that resulted from training. There are
numerous ways of measuring results: increased efficiency, reduced costs, better quality,
enhanced safety, greater profits. Again, the final objectives of the training program must be
defined in terms of the results expected. Kirkpatrick, however, does not address the fact that
impact measurement must take into account that other variables affect performance besides
training. (See section on Monitoring and Measurement Tools.)
The chart below illustrates the chain of impact of each of the four evaluation levels based on
the value of the information that each level provides, the power to show results, the frequency
of use, and the difficulty of assessment. Level 4 evaluation yields more valuable information
and has a greater power to show results than the other levels. Level 1 evaluations are fairly
common, but Level 4 evaluations are far less frequent, largely because they are more difficult
to administer and assess. (Phillips 1994: 7)
[Chart: chain of impact across the four levels, from Reaction (Level 1) through Learning
(Level 2) and Behavior (Level 3) to Results (Level 4). The value of information, power to show
results, and difficulty of assessment increase with each level, while frequency of use decreases.]
In his book Achieving Results from Training, Robert Brinkerhoff presents a six-stage training
evaluation model, which adds two initial steps to the Kirkpatrick model—evaluation of the
needs and goals of the training design. Brinkerhoff contends that crucial information needs to
be gathered at these first two stages before the decision is made to implement a training
program. He examines in considerable detail the issues that need to be resolved at each level
before moving to the next one and offers a wide variety of data collection techniques,
guidelines, and criteria crucial to making sound decisions and ensuring that the training
program pays off.
Before undertaking an evaluation exercise at any level, however, Brinkerhoff underscores the
importance of clarifying the need and purpose of the evaluation; the type of information that
should be collected at each stage; the audience for whom the information is gathered; how the
reporting will be conducted; and the key decisions and judgments that need to be made based
on the data collected at each step of the process.
These six stages represent a sequence in which each step is linked to the preceding one, as
well as to the following step. The issues that may arise at any stage are directly linked to the
decisions made in the preceding one and have a direct impact on the outcome of the
following stage. He refers to the "training decision-making cycle," i.e., problems that surface
pertaining to any of the stages may necessitate reviewing the decisions made at the previous
stages, examining the reliability of the data, or even returning to Stage I.
The following diagram illustrates the decision-making cycle of the six-stage model (p.27):
Stage I - Evaluate needs and goals
Stage II - Evaluate design
Stage III - Evaluate operation
Stage IV - Evaluate learning
Stage V - Evaluate usage and endurance of learning
Stage VI - Evaluate payoff

(The six stages form a cycle, with Stage VI feeding back into Stage I.)
This model shows the "recycling that takes place among the stages." For instance, if the
participants are not interested and motivated during the training (Stage III), is the design
appropriate (Stage II)? Is the training necessary (Stage I)? If the employees are not
applying the skills taught (Stage V), did they really learn them (Stage IV)? Are the new skills
still necessary for them (Stage I)? If trainers cannot agree on the appropriate design (Stage
II), is training the answer to the problem (Stage I)? (p.33)
Following is a summary of the six-stage model. As stated above, it is beyond the scope of
this section to provide specific mechanisms or tools needed to conduct an evaluation exercise.
The intent rather is to synthesize the salient concepts and definitions that each stage addresses
and examine the guidelines, criteria, and key issues that need to be resolved throughout the
process.
Stage I - Evaluate Needs and Goals

The data collected at this level are used to analyze and prioritize the needs, problems, and
weaknesses of the organization and establish what training goals are worth pursuing.
This analysis also provides crucial information to determine whether training is the solution to
the weaknesses identified.
There are several situations that may call for a training solution, such as performance deficits,
organizational changes, or management decisions. Since Stage I analysis will provide a
framework for establishing the value of the training and determine its potential payoff, it is
directly linked to Stage VI evaluation, which determines whether or not the training was
worthwhile.
The following examples illustrate the relationship between the performance gaps identified
(Stage I) and the benefits sought (Stage VI). (Adapted from p. 33)
[Table: paired examples of Stage I needs (reasons for training) and the corresponding Stage VI
benefits (criteria for success).]
These examples illustrate how organizational deficits are linked to corresponding criteria to
assess the results of the training. If consensus for Stage VI criteria cannot be reached, it is an
indication that either work at Stage I is not complete, or there is no need to do training.
In summary, the purposes of Stage I evaluation are to assess, validate, and rank the needs
and problems; clarify the causes of the problems; distinguish between needs and wants; and
determine the potential value in meeting these needs.
Stage I seeks data that will "predict" whether on-job behavior can and should
be changed, whether specific SKA [skills, knowledge, attitudes] changes would
be sufficient for changed behavior, and whether SKA changes are achievable
through a training intervention (p.26).
Stage II - Evaluate Design

This level assesses the appropriateness of the training design. It focuses on the issues that
must be considered after the decision is made to undertake a training activity, but before it is
implemented. At this stage, several designs may be proposed, and the strengths and
weaknesses of each assessed. The design finally adopted may represent a composite of the
best elements of several designs.
Careful analysis of the adequacy of the strategies and materials, as well as the training
methods and venues selected, will render a stronger and more effective design and allow the
process to move to the implementation stage. The inevitable weaknesses present in the design
will be revealed when it is actually put in operation, and the trainers will have to review it
and make the necessary adjustments. This is an example of what the author refers to as the
recycling process of the six-stage evaluation model.
Among the criteria suggested to guide the assessment of the training plan are:
Clarity and definition. Everyone involved in the training event—operating unit, customers,
and participants—must be able to readily understand the various components of the design.
This involves clear definition of the needs and goals to be addressed by the training; the
approaches and strategies developed; and the resources necessary to implement the programs.
Compatibility. The training format adopted and the materials selected must also reflect the
environment in which the training will take place, the cultural and ethnic make-up of the
participants, as well as their educational, professional, and social backgrounds.
Theoretical adequacy. The design must incorporate current research and sound theory related
to adult-learning practices.
Practicality and cost-effectiveness. The theory that supports the design might be excellent, but
if it requires unreasonable financial or human resources, it may not be a practical design. The
evaluation at this level should consider more economical alternatives for implementing the training
without compromising the objectives.
Legality and ethics. The importance of considering this criterion at the design level cannot
be overemphasized, and the "criteria regarding ethics and legality are absolute and must not
be compromised." Trainers need to take into account and honor the needs, rights, and
expectations of the participants based on their customs and traditions, as well as ensure their
physical safety. (For USAID-sponsored training, this means that it must adhere to regulations
put forth in ADS 253.)
Stage II evaluation should carefully identify the objectives that a given design
will probably achieve, then compare these against the initial expectations to
assure that real and worthwhile needs are likely to be addressed (p.88).
Stage III - Evaluate Operation

Once the training design is deemed appropriate, this stage monitors the training activities and
gathers feedback on the reaction and level of satisfaction of the participants. It assesses the
discrepancy between what is actually taking place in the training and what was planned or
expected. To solve the problems encountered at this level, trainers may need to refer back to
the training design (Stage II) and make the necessary adjustments. Among the data collection
techniques useful at this stage are the following:
Interviewing. Whether the interviews with participants are structured or informal, they are a
useful technique because they allow the trainers to ask follow-up questions and obtain more
detailed information.
Key participant method. This method involves selecting trainees who, because of their
expertise or leadership qualities, are able to provide thoughtful comments and insights.
Observations. One trainer observes another and records participant reactions and behaviors. If
an observation form is developed, it will render useful quantitative data on the reaction of the
participants.
Trainee rating and reaction forms. Questionnaires and surveys may be administered at regular
intervals during the training to gauge the satisfaction level of the participants. But
because this is the most commonly used method to evaluate reaction to the training,
participants may not pay much attention to the forms and provide superficial comments.
Nonetheless, their reaction is important in order to proceed to the next stage.
... Stage III process is one of observing and assessing the program’s progress,
noticing discrepancies, making revisions, and trying it out again, then
reobserving and reassessing to see if progress is now acceptable. This is the
process that makes training work and move toward payoff (p.96).
Stage IV - Evaluate Learning

Any training event, regardless of its scope or duration, aims to enhance the skill and
knowledge level of the participants. The extent to which this improvement has been achieved
is the measure of the effectiveness of the program. Stage IV determines the level of learning
and improvement that took place. If sufficient learning occurred, we can expect that it will be
applied in the workplace and results will be achieved.
The data gathered at this level are used to revise and refine the activities and strategies that
will ensure the desired transfer of learning. Brinkerhoff suggests the following uses for Stage
IV evaluation:
Gathering evidence that proves the effect of training—accountability. Trainers need to provide
evidence that the skill and knowledge level of the participants has improved.
Determining mastery of training results. This information is useful at three levels: it provides
feedback to the participants regarding their achievement, to the trainers regarding trainee
performance, and to the supervisors regarding the degree of skill mastery of their staff.
Looking for unintended results. If the unintended results are also undesirable, trainers need to
know this in order to reinforce those areas in the program that produce desirable results.
Planning for Stage V follow-up. Because Stage IV evaluation identifies weaknesses in the
achievement of skills and knowledge, it sets the framework for Stage V evaluation, which
assesses the application of these skills.
Stage V - Evaluate Usage and Endurance of Learning

This level of evaluation indicates how well the trainees are using the acquired knowledge
and skills on the job. "It looks at actual performance, not ability to perform."
Stage V evaluation usually takes place at the workplace, which "represents the richest source
of data." Because transfer of training to the workplace does not take place exactly as planned,
evaluators should take into account the numerous steps and changes that occur from the
learning results phase (Stage IV) to the eventual application of these results. The purpose of
Stage V evaluation then is to record and analyze these steps and changes. Essentially, it
documents when, where, how well, and how often the training is being used; which skills are
and are not being used; and how long the effects of training have lasted.
The author once again cautions that it is important to clearly define the explicit purposes and
uses of a Stage V evaluation before designing it. Below are some guidelines:
Revising training. This level determines the effective and ineffective ways in which the new
knowledge is being applied. It signals ways of improving the program to achieve transfer of
skills and knowledge at the level expected. Or it may be decided that other types of targeted
interventions at the workplace, such as providing peer support or greater guidance, are all that
is necessary.
Planning ahead for Stage VI evaluation. The benefits that training brings to the organization
cannot be assessed without an accurate understanding of how the new skills are actually being
applied. Stage V documents instances of appropriate application of the new skills, which
forms the basis for Stage VI inquiry.
Documenting and accounting for transfer of training. Provides crucial information to potential
participants as to what they can expect from training in terms of actual results in the
workplace. By documenting the before and after behaviors in the workplace, an evaluator also
develops a database for Stage VI evaluation.
It should be noted ... that the benefits to the organization derive not from what
was learned but from what actually gets used. This provides the basic reason
for being of Stage V evaluation: Training is not done for the sake of learning
alone but for the sake of providing value to the organization through improved
job performance (p.133).
Stage VI - Evaluate Payoff

By the time the evaluation process reaches this level, we can assume that the training was
successful, the participants are applying what they have learned, and an evaluator has
identified and recorded the extent to which changes have taken place in the workplace. The
aim of Stage VI evaluation then is to assess the value that these changes have brought to the
organization and whether this value was worth the effort given the time and resources
expended.
If we consider the six-stage model as a cycle, the data collected at the final stage are used to
assess whether the training results have resolved satisfactorily the needs of the organization.
The fourth point above addresses this issue, which leads directly to the needs and goals that
were identified at Stage I. To determine whether the training has paid off, it is crucial to
show at this level of evaluation the link between Stage VI and Stage I. The answers derived
from this analysis will guide the decision to either replicate or abandon future training
programs in the same area.
Sometimes the value of the benefits may be assessed in monetary terms or cost savings, and
therefore, can be easily measured. But in cases in which it is not feasible to measure the
value of the improvement in financial terms—such as clean air or improved teamwork and
morale—we have to use qualitative methods. And while these methods may be more
subjective, the improvement should not be considered less valuable or beneficial.
Consider a broad range of training impact variables. This involves documenting benefits that
may not be directly linked to the needs but are nonetheless beneficial to the organization.
Look for specific training applications. The point here is that a list of specific applications
—as opposed to general statements of impact—is of greater use when making decisions about
the value of future programs.
Refer to specific data from preceding evaluation stages. This guideline refers to the recycling
concept. Stage VI calls for attributing a value—monetary or otherwise—to the results. Thus,
the data obtained at Stages II and III, the design and implementation levels, are used to
estimate the cost of the training. Stage I data, as mentioned above, provide a basis for
determining the value. The crucial point that bears repeating is that Stage VI evaluation
should not be undertaken without reference to the data obtained at the previous levels.
The six-stage model represents an exhaustive evaluation exercise that is not always feasible
given competing deadlines and reduced budgets. Nonetheless, Brinkerhoff’s advice
is to consider each of the stages, if only briefly, to guide and bolster the training function and
educate the customers as to its benefits.
The HERNS (Human and Educational Resources Network Support) project carried out by
Aguirre International provides assistance to USAID Missions with the design, development,
and evaluation of training activities. Through this project, HERNS specialists have developed
performance monitoring systems and training impact evaluations for several Missions.
The model presented below was designed in 1995 for USAID/Egypt’s integrated monitoring
and evaluation (M&E) system. It links the sequence of events of the training cycle with key
M&E activities. Source: Development Training II Project, M&E System, USAID/Egypt.
The training cycle: Strategically plan and implement training → Acquire skills, knowledge, and
attitudes (SKA) → Apply SKA / achieve training activity objective → Contribute to the Results
Package (RP) objective → Contribute to the Strategic Objective (SO).

Why is this being monitored?
- Plan and implement training: To judge the performance of SO/RP teams, training units,
contractors, and providers in ensuring relevant and quality training programs.
- Acquire SKA: To measure the increased capacity of trainees as a necessary precondition to
improved performance in the workplace.
- Apply SKA: To measure the improved performance of trainees as related to key institutional
performance requirements.
- Contribute to the RP objective: To measure progress toward improved institutional
performance as a key intermediate result leading to an SO.
- Contribute to the SO: To measure progress toward the SO.

What indicators will be used to measure it?
- Plan and implement training: Generic: degree of collaboration of all stakeholders in planning,
including action planning that links training to SOs; degree of trainees’ satisfaction with
training. Specific: TBD by RP team.
- Acquire SKA: Generic: degree of change in SKA (pre/post). Specific: TBD by RP team.
- Apply SKA: Generic: percentage of trainees applying elements of training; percentage of action
plans executed; percentage of trainees with increased responsibilities. Specific: TBD by RP/SO
team.
- Contribute to the RP objective: To be determined by RP team.
- Contribute to the SO: To be determined by SO team.

How will it be measured?
- Plan and implement training: Self-assessment of collaboration through focus groups with
stakeholders; trainee satisfaction questionnaires.
- Acquire SKA: Training providers’ assessments of trainees; exit interviews and questionnaires.
- Apply SKA: Trainee questionnaires; interviews and focus groups with supervisors; on-site
observations.
- Contribute to the RP objective: To be determined by RP team.
- Contribute to the SO: To be determined by SO team.

When will it be measured?
- Plan and implement training: Focus groups annually; questionnaires upon completion of
training.
- Acquire SKA: Upon completion of training.
- Apply SKA: Within six months of completion of training.
- Contribute to the RP objective: To be determined by RP team.
- Contribute to the SO: To be determined by SO team.

Who will be responsible?
- Plan and implement training: Focus groups, GTD contractor; questionnaires, training unit.
- Acquire SKA: Training provider and training unit.
- Apply SKA: Training unit with RP team.
- Contribute to the RP objective: RP team.
- Contribute to the SO: SO team.
Source: Human Capacity Development Activity Design. A HERNS Report. USAID/El Salvador, January 1997.
Change Agents
2 Caribbean and Latin America Scholarship Program, the primary source of funding for participant
training programs in Latin America from 1985 to 1995.
One of the core reengineering values—managing for results—calls for establishing clearly defined
strategic objectives and developing performance indicators to track and measure progress.
"Performance indicators are measures that describe how well a program is achieving its objectives."
(TIPS #6, Selecting Performance Indicators). They are the direct measure of the intermediate result
and, consequently, are indispensable tools in determining the extent to which changes or
improvements have occurred. Appropriate and carefully articulated indicators provide the
mechanism to monitor progress, measure achievement, and collect, analyze, and report results.
If we view training as a tool that contributes to the achievement of a strategic goal, then training
indicators must be derived from the technical intermediate results to which the training activity has
been linked. Training indicators will allow SO teams to establish the relationship between training
and the results expected and determine the value of training as a tool to achieve an objective.
Indicators specify the type of data that should be collected, along with the unit of measurement to
be used. Only those indicators that can be measured at regular intervals, given the time and
resources available, should be selected. The criteria presented below define the characteristics of
quality indicators and the application of these criteria to training indicators.3
Direct - A direct indicator measures the result it is intended to measure. The indicator should not be
stated at a higher or lower level than the result being measured.
For training, this criterion means that the indicator measures only one specific job improvement.
Objective - The indicator is precise, clear, and understood by everyone. It measures only one
change at a time and there is no ambiguity as to the type of data that needs to be collected.
For objective training indicators, there is no doubt among all the groups involved—SO teams,
participants and supervisors—as to what job improvement is being measured.
Adequate - All the indicators selected should together measure the result adequately. The actual
number of indicators needed depends on the nature of the result, the level of resources available to
monitor performance, and the amount of information needed to make informed decisions.
Adequate training indicators determine improvement in job or organizational performance that can
be traced to the training.
3 Adapted from TIPS #6, Selecting Performance Indicators, 1996.
Practical - The data can be collected with sufficient frequency and in a cost-effective manner.
Reengineering guidance states that between 3 and 10 percent of total program resources should be
allocated for performance monitoring and evaluation.
For training, practical means that SO teams or supervisors can administer easy-to-use monitoring
tools at regular intervals and at a reasonable cost.
Reliable - Refers to the reliability and validity of the data, i.e., if the procedures used to collect and
analyze the data were duplicated, the results would remain constant.
Reliable training indicators use valid surveys or questionnaires and the information collected can be
easily verified.
Given the multiplicity of training applications and approaches, it is advisable first to develop
a series of indicators and then select from that list the ones that will best measure the results.
When undertaking this activity, however, one should keep in mind the admonition provided
by Administrator Atwood in a recent communication:
These tools... should not be used in a rigid, mechanistic manner, stifling field
creativity or ignoring the reality that performance must be interpreted differently in
different settings. They should promote our knowledge of development and our ability
to assess whether we are making progress, not limit it.
(USAID General Notice, 2/7/97).
With these issues in mind, the next two tables were designed as tools to assist SO teams in
developing useful and appropriate training indicators. Table IV summarizes the criteria stated above
for assessing the validity of generic indicators and the application of these criteria to training
indicators. Table V provides examples of good and poor indicators judged against the established
set of criteria and defined according to Kirkpatrick’s evaluation levels. It is important to underscore,
however, that Level III and IV indicators require more rigorous evaluation methods and that
attribution of the improvement to the training experience needs to be specified.
Table IV. Criteria for generic indicators

- Direct: Measures the result it is intended to measure.
- Objective: It is precise, clear, and widely understood.
- Adequate: It measures the result needed to make informed decisions.
- Quantitative/Qualitative: The indicator is quantitative (numerical) or qualitative (descriptive).
- Practical: Data can be collected in a timely and cost-effective fashion.
- Reliable: Uses valid methods of data collection.
The table below provides examples of good and poor training indicators judged against the above set of criteria.
Table V. Examples of training indicators judged against the criteria

Indicator: Teachers are using locally relevant curriculum. (Kirkpatrick Level III)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative; should state percentage of teachers.
- Practical: Yes. Reliable: Yes.

Indicator: Five ADR mechanisms created. (Kirkpatrick Level III)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative.
- Practical: Yes. Reliable: Yes.

Indicator: New performance appraisal systems established. (Kirkpatrick Level IV)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative.
- Practical: Yes. Reliable: Yes.

Indicator: 80% reduction in the amount of time it takes to issue a license. (Kirkpatrick Level III)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative.
- Practical: Yes. Reliable: Yes.

Indicator: Environmental impact statements carried out in 75% of projects. (Kirkpatrick Level III)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative.
- Practical: Yes. Reliable: Yes.

Indicator: 75% of projects are modified to comply with environmental impact statements.
(Kirkpatrick Level IV)
- Direct: Yes. Objective: Yes.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Quantitative.
- Practical: Yes. Reliable: Yes.

Indicator: 75% of employees report improved morale. (Kirkpatrick Level III)
- Direct: Several elements comprise improved morale.
- Objective: May involve improved communication, teamwork, or less absenteeism.
- Adequate: States improvement; indicate other contributing factors.
- Quantitative/Qualitative: Qualitative (indicates change in attitude).
- Practical: Difficult to administer regular evaluation tools.
- Reliable: Difficult to verify data.

Indicator: Increased number of child survival practices used. (Kirkpatrick Level IV)
- Direct: No; should be broken down into rate of ORT use, percentage of children vaccinated,
and number of cases of diarrhea reported.
- Objective: Exact change/improvement is not specified; improvement is too broad.
- Adequate: Linkage to training will have to be demonstrated.
- Quantitative/Qualitative: Quantitative; percentage needs to be specified.
- Practical: No.
- Reliable: No; data is not verified easily.
The definition adopted for training impact states that we will measure improvements in job
performance that can be directly attributed to training. Oftentimes, we observe that significant
changes have taken place following a training event, but because training is only one of
several inputs that influence results, we cannot attribute 100 percent of the improvement to
the training experience. When reporting results, training specialists have found it especially
challenging to isolate improvements from other nontraining variables, particularly since most
evaluations are not designed to do so.
If the objective of the program was to improve performance in a specific area, and
improvement can be recorded, Kirkpatrick would suggest that we should be satisfied with
evidence instead of proof. (See Kirkpatrick Level IV guidelines). Other evaluation specialists,
however, have proposed ways of isolating the effects of training by using trend analyses,
control groups, or forecasting. While these methods present persuasive arguments for isolating
performance improvements, they either require an unreasonable level of effort and resources
or would not be feasible to conduct in a development context. (However, see Witness Schools
under the case study for education, USAID/Morocco, for an example of a control group.)
The method described below—developed by Jack Phillips—is presented here because of its
practicality and applicability. It represents a cost- and time-saving technique that is applied
to Level IV evaluation. The data can be easily gathered, interpreted, and reported by the
supervisors or the participants themselves. Phillips suggests developing a user-friendly form
that asks respondents to estimate the percentage of the improvement derived from training and
then factoring in a confidence level. For instance, if a participant estimates that 80 percent of an
improvement is due to training and is 90 percent confident about that estimate, multiply
80% x 90% = 72%, the overall confidence level. Multiply this figure by the degree of the
improvement to isolate the portion attributable to training.
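Read as arithmetic, the adjustment is straightforward. The short Python sketch below is not part of the original study; the improvement figure of 100 units is a hypothetical stand-in used to work through the 80 percent/90 percent example from the text.

    def attributable_improvement(improvement, pct_from_training, confidence):
        # Phillips-style estimate: discount the participant's attribution by his or her
        # confidence in that estimate, then apply the adjusted share to the improvement.
        adjusted_share = pct_from_training * confidence     # 0.80 * 0.90 = 0.72
        return improvement * adjusted_share

    # 80% attributed to training, 90% confidence, applied to a hypothetical
    # improvement of 100 units: 72 units are treated as attributable to training.
    print(attributable_improvement(improvement=100, pct_from_training=0.80, confidence=0.90))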
It would not be a practical exercise, however, to isolate the effects of training without having
collected data at the various levels of evaluation. This process begins once the participants
have had enough time to apply the new skills in the workplace and sufficient information can
be gathered on the results and improvements achieved.
Table VI provides a tool for isolating the effect of training by factoring in a confidence level
and indicating other variables that may have contributed to the improvement.
To increase the reliability of this approach, management can review and confirm
participants’ estimations. Supervisors may be aware of other factors not related to training
that caused the improvement and their estimates may be combined with those of the
participants. Likewise, depending on the circumstances, estimates may be obtained from
customers or subordinates. Granted, we are dealing with estimates, which carry an undeniable
level of subjectivity; Phillips would argue, however, that the estimates come from a "credible
source, the people who actually produce the improvement" ("Was it the Training?" Training
and Development, March 1996).
The approach proposed in this section involves thinking of the improvements gained through
training in financial terms. There are numerous instances in which assigning a monetary value
to the improvement would be neither practical nor feasible. However, if the improvement
results in less time spent accomplishing a task, fewer errors or accidents, or reduced turnover,
then financial benefits can be calculated in terms of staff time saved, lower fees paid by the
organization, or increased production.
The two examples presented here illustrate how to calculate the value of the improvement and
how to calculate savings in overtime.
To convert the value of the improvement into monetary terms, concentrate on one indicator
at a time. For instance, if the improvement consists of reducing the amount of time it takes to
issue licenses, first determine the baseline, i.e., the number of licenses issued per week before
training times the amount earned by the employee per week. Show the difference in the
number of licenses issued after training and calculate the percentage of the improvement.
Multiply the employee’s salary by the percentage of the improvement to obtain the value of
the improvement. (See Table VII)
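A minimal Python sketch of this arithmetic follows (not part of the original study). It uses the figures from Table VII: 14 licenses per week before training, 20 after, and a weekly salary of $175.

    def value_of_improvement(units_before, units_after, weekly_salary):
        # Express the gain in output as a percentage of the baseline and apply
        # that percentage to the employee's weekly salary.
        gain = units_after - units_before          # 20 - 14 = 6 licenses
        pct_improvement = gain / units_before      # 6 / 14 = about 0.43
        return weekly_salary * pct_improvement

    # Table VII rounds the percentage to 42%, giving $175 x 42% = $73.50;
    # the unrounded calculation gives $75.00.
    print(round(value_of_improvement(14, 20, 175.0), 2))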
To calculate savings in overtime, first indicate the target: 50 percent reduction in overtime in
a six-month period. Then determine the baseline, i.e., the amount paid in overtime prior to
training. Establish the employee’s hourly salary and multiply it by the number of overtime
hours worked per month to arrive at the monthly cost. Follow the same procedure for the six
month period being measured after training and calculate the savings. When presenting these
results, a comparison should be made between the target established and the results achieved.
(See Table VIII)
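The overtime calculation can be sketched the same way. In the Python example below, the hourly rate and overtime hours are hypothetical, since the original Table VIII figures are not fully legible; only the procedure follows the text.

    def overtime_savings(hourly_rate, hours_before_per_month, hours_after_per_month, months=6):
        # Monthly overtime cost before and after training, projected over the
        # measurement period, and the resulting savings.
        cost_before = hourly_rate * hours_before_per_month * months
        cost_after = hourly_rate * hours_after_per_month * months
        return cost_before, cost_after, cost_before - cost_after

    # Hypothetical figures: $5.00/hour, 30 overtime hours/month before training,
    # 12 after, measured over six months.
    before, after, saved = overtime_savings(5.0, 30, 12)
    reduction = 1 - after / before          # compare against the 50 percent target
    print(before, after, saved, f"{reduction:.0%}")   # 900.0 360.0 540.0 60%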
Converting results into monetary terms provides an additional way of measuring the benefits
of training to an organization. As stated in preceding sections, training begins as a response to
a need or problem in an organization. By calculating the value of the improvements, we bring
training full circle to the needs and problems it was meant to address. In times of reduced
funding, SO teams can use these data to decide which training activities should be funded, to
manage resources more efficiently, or to justify increased expenditures on training.
Table VII. Calculating the Value of the Improvement (licenses issued per week)
- Before training: 14 licenses per week; employee's weekly salary: $175
- After training: 20 licenses per week
- Improvement: 6 licenses, or 42% (6 ÷ 14)
- Value of the improvement: $175 x 42% = $73.50 per week

Table VIII. Calculating Savings in Overtime ($600 ÷ $900)
The previous discussion on evaluation models, indicators, and measurement tools underscores
that the monitoring and evaluation functions are not isolated academic tasks. They are integral
and essential components of training activities, beginning with the needs assessment. The
benefits derived from monitoring training progress and measuring results are significant. The
process allows us to account for the resources expended and justify the investment made;
provide a mechanism for regular revision and improvement of designs; and demonstrate that
carefully planned programs constitute an effective tool for achieving results.
When analyzing and reporting training results, we also need to gauge the level of
commitment that the trainees and their supervisors have to the training, as well as the climate
that the participants will find in the workplace upon return. The extent to which trainees are
given the opportunity to apply the new skills and the level of encouragement and support they
receive are important factors to consider before deciding whether the training has produced the
expected results or whether the indicators were met.
Before deciding on the most appropriate evaluation model at any level, it is first important to
clarify the need for an evaluation as well as its audience. Agree upon the types of questions
that will best elicit the responses desired; carefully decide on the most appropriate evaluation
tool; and determine who in the organization should be involved in the evaluation surveys or
interviews.
Given the rapid organizational changes that some institutions experience, a single report or
evaluation will probably become obsolete in a short period of time. A more credible approach
is to develop a program that includes several monitoring and
measurement mechanisms at different levels over regular intervals, using a variety of data
collection methods and sources. The more approaches we use, the greater the level of
reliability and credence given to the findings.
This section features examples of training programs designed and carried out to support Mission
strategic objectives. They are offered as case studies to illustrate the variety of approaches and
creative applications that SO teams have used to implement the training function.
The following criteria guided the selection of the case studies: the objectives of the training show
direct linkage to the intermediate result(s); training was offered to a critical mass of carefully
selected participants who are in positions to effect change; involvement, from the outset and at all
stages, of the entities affected by the results sought; follow-on activities; and results that show
achievement of the intermediate result.
While each case study is presented in the format that best suits the training program described,
the overall areas covered in each study include: a background piece describing the situation in the
country, followed by an explanation of the training model, including training objectives and
selection criteria, the monitoring and measurement tools developed, and a summary of the results
achieved.
The case studies were prepared with the assistance of the respective Mission staff and/or training
contractors who provided the information and data reported. Their interest, assistance, and
collaboration in this effort have been invaluable. They enthusiastically shared their training
designs and plans, provided thoughtful insights and observations, maintained regular
communication with the author, and reviewed draft copies.
USAID/Central Asia
Background
USAID has a regional Central Asia office in Almaty, Kazakhstan, with one of its satellite
offices located in Tajikistan. The training program described in this case study was designed
for Tajik participants and represents USAID/Central Asia’s focus on economic restructuring
as the foundation for developing the private sector.
Following its independence from the Soviet Union in 1991, Tajikistan faced grave economic
and social problems. Serious political and ethnic differences among the various factions led to
civil war; numerous industries closed; unemployment and inflation were high; and basic
commodities, such as food, transportation, and public utilities, became dangerously scarce.
Moreover, the human resource skill base was undermined by the large emigration of ethnic
Russians and other non-Tajik groups.
Faced with this situation, the government of Tajikistan sought to restructure its economy
through the privatization of targeted industries. The thrust was to stimulate economic growth
by facilitating the transition from a centrally controlled to a market-based economy.
The USAID/CAR economic growth SO team assists the Tajik government as it defines and
articulates its role in a market-led economy and formulates economic policies that promote
growth and stability.
Training Model
The objectives for each of the training programs, along with the groups of government
officials who participated, are indicated below. Because sections I and III were designed for
senior-level officials, the objectives were the same.
Economic Restructuring I
Economic Restructuring II
Results
An essential component of the training design was an explicit description of the intended
results, which spelled out how participants would be able to apply the knowledge and skills
acquired during their training. It also gauged how the successful application of what was
learned supported the achievement of the relevant USAID strategic objective.
Individual programs were evaluated using a variety of qualitative and quantitative methods.
These included arrival and exit questionnaires, weekly interviews with groups and trainers to
monitor progress, and site visits to selected programs. In addition, debriefings were conducted
with returned participants, along with follow-on questionnaires. The following success stories
from participants who attended the economic restructuring programs reveal the types of bold
and innovative privatization and business measures undertaken that directly support
achievement of the intermediate result:
One participant planted a fruit orchard with a potential annual output of 1,000
tons. The irrigation systems installed for the orchard also provide drinking
water for the population of the nearby valley.
Follow-on questionnaires, typically administered after participants have been back from
training for at least six months, provide additional quantitative data and represent key
indicators of project success. The following statistics were reported in the September 1996
issue of the NIS Highlights, the monthly newsletter of the NIS Exchanges and Training
Project:
95 percent reported that they have used the knowledge gained during training
to effect policy decisions at the organizational level
88 percent reported that they have effected policy decisions that support the
further development of a free market economy
80 percent reported that they have effected policy decisions that support the
further development of a democratic system of government
97 percent reported that they have shared the ideas and techniques acquired in
training with their colleagues and supervisors
Multiplier Effect
Training programs customarily have multiplier effects, and participants often engage in a
variety of experiences—lectures, seminars, interviews, and writings—that they share with
fellow professionals.
One multiplier effect activity pertaining to this training program, however, stands out: Five
Tajik senior government officials from the President’s Board on Economic Reform and from
the Strategic Research Center who had participated in training traveled to Almaty to observe
economic and privatization reforms in Kazakhstan. Upon their return, they organized a series
of five-day seminars, which took place concurrently in the Leninabad and Khatlon oblasts.
The topics included: infrastructure of a market economy, developing credit mechanisms, state
support to small entrepreneurs, investment development, and economic restructuring.
Hundreds of local government officials, state managers, and small- and medium-size
businessmen attended these sessions, held in a variety of settings, such as government offices,
universities, factories, plants, and farms.
This event accomplished two objectives: The instructors helped institutionalize the training
they received by training others; and a wide range of professionals gained a comparative view
of how economic restructuring has been implemented in the United States and in Kazakhstan.
USAID/Bolivia
Background
Since 1995, USAID/Bolivia’s training and follow-on activities have been designed and carried
out in strict relationship to the achievement of the Mission’s strategic objectives.
USAID/Bolivia’s democracy SO team provides technical assistance to several key institutions
to help them develop more efficient, accessible, and transparent procedures. The SO team
determined that a critical mass of administration of justice (AOJ) professionals required
targeted training in order to effectively carry out important reforms in the sector.
Training Model
Through 1995, most of the training in the sector took place in-country. When results
frameworks and intermediate results were developed, it was determined that exposure to the
U.S. justice system was critical to acquaint Bolivian professionals with different AOJ
mechanisms, procedures, and techniques. The training activity described in this case study
was linked to IR 1, and took place between September 1996 and March 1997.
A generic training model applied to all training activities was developed for the Bolivian
Peace Scholarship Program under CLASP (Caribbean and Latin America Scholarship
Program, USAID’s major funding source for training activities conducted in the region).
1. Training objectives are defined with a clear focus on results. SO teams and
partners work together to define the training program and to select participants.
Results
The Bolivian criminal process calls for the use of oral prosecutorial
mechanisms, but judges and prosecutors are not familiar with their use.
Following training, one judge resolved 25 cases in only one month using these
techniques.
The following activities took place within two to four months after training:
It should be noted that returned participants have achieved concrete results and implemented
several multiplier-effect activities in less than six months following their training.
The Mission anticipates that in upcoming months, once the Criminal Procedures Code is
passed, participants will increasingly apply oral prosecutorial system procedures. The
democracy SO team maintains regular contact with the stakeholders and partners who
sponsored the ADR and AOJ trainees to promote, encourage, and monitor the application of
the skills acquired during training.
USAID/El Salvador
Background
El Salvador’s centralized health service delivery system concentrates the bulk of its services
in the San Salvador metropolitan area, where the majority of physicians from the public and
private sectors practice medicine. As a result, the rural population lives in areas with relatively
little access to medical services.
Traditionally, health service providers in the public and private sectors (particularly NGOs)
have not worked together and have tended to mistrust one another. A fundamental problem of
the public health system has been its emphasis on curative rather than preventive medicine.
Many NGOs, on the other hand, have well established community outreach and public
education programs. This, combined with the fact that NGO personnel live in the
communities in which they serve, makes them powerful proponents and instruments of
preventive health care. The Ministry of Public Health and Social Assistance (MSPAS) staff
has not taken full advantage of what these NGOs have to offer to El Salvador in terms of
reform of the national health care service delivery system.
Although the majority of managers and supervisors working in the public health care system
are qualified and educated, the government realized that to implement system reforms it must
be able to count on a critical mass of managers who are well versed in new concepts and
techniques that will support a complete modification of the system.
In consultation with MSPAS, the Salvadoran Social Security Institute (ISSS), health NGOs,
and training specialists, a comprehensive U.S. training program was designed for 110
participants in five separate groups. The participants were drawn from the MSPAS, ISSS, and
NGOs; the majority of them held mid-level management positions—regional supervisors,
financial managers, health unit administrators, division chiefs, project coordinators, and head
nurses. The thrust of the program was to provide participants with the technical skills
necessary to design and implement programs to improve the delivery of health services
throughout the country. Trainees were exposed to different models of the administration of
health services, successful and unsuccessful reform efforts, and effective coordination
mechanisms carried out by the public and private sectors.
The following eight steps indicate the sequence of events identified by the Mission that need
to take place during the training process:
- Strategic planning
- Needs assessment
- Specific purpose of training
- Training design
- Selection of participants
- Training
- Follow-on
- Monitoring and evaluation
The chart on the next page illustrates the interconnection of the eight steps and their
corresponding activities.
Note: The chart was translated from Spanish by the author. The original title reads: Ocho pasos
en los que las actividades de capacitación deben cambiar para responder a la reingeniería
("Eight steps in which training activities must change to respond to reengineering").
Strategic Planning
- Involvement of stakeholders
- Strengths, weaknesses, and limitations
- Options
- Goals, results

Needs Assessment
- Performance assessment
- Gaps in skills, knowledge, and attitudes (SKA)

Monitoring and Evaluation
- Performance indicators are determined during the strategic planning phase
- Continuous monitoring

Selection of Participants
- Directed to target population
- Geared to several hierarchical levels
- Development of change agents
- Critical mass
All of these steps require shared responsibility and mutual collaboration between the training unit and the strategic objective teams.
Training is more likely to bring desired results if the follow-on component is planned
at the design stage.
Lessons learned can be capitalized upon and applied to new initiatives with
modifications, improvements, expansions, and creativity.
Specific examples of changes and improvements to public health care services as a result of
the training program include:
A doctor at a medical unit reports that the hospital’s administrators now rely heavily
on feedback from both employees and clients to monitor and evaluate service quality.
The hospital has installed suggestion boxes for client evaluation.
One of the MSPAS doctors has set up a visiting nurses system through his newly
established close working relationships with community leaders and local NGOs.
The director of an NGO considers that the greatest achievement of the training
is that staff have learned to accomplish more through working together. The
NGO has since been working with ISSS to review patients and make
recommendations for medical care. This had never been the case before.
The ministry has recognized the potential of a number of former training participants by
promoting them within the ministry. The MSPAS is now ready to proceed with modernization,
and the training program has supported the ministry’s plan for modernization.
For the ISSS, which provides health care services to workers and their dependents, the most
significant improvement has been in the decentralization of the decision-making and
problem-solving processes. Unit personnel are now much more conscious of what they have, what they
need, and what they can obtain. On the service provision side, the biggest change has been the
focus on client satisfaction. ISSS participants have put together a complete training package for
employees in all units, which includes sessions on leadership and Total Quality Management.
They have been conducting training for several months and are using videos developed
specifically for the training.
A new vision exists for the health care service delivery system in El Salvador. The process of
decentralization has had an impact not only on administrative and financial procedures but also on
treatment and service strategies. The linchpins of the improved public health care system are:
a. Closer linkages with community-based NGOs mean closer ties to the communities
themselves, especially those in remote rural areas. This means faster service access
to these communities, as well as public education programs and a greater
acceptance on the part of community members.
b. Public sector agencies can take full advantage of NGO clinics and other services,
given their focus on prevention. Better preventive health programs ultimately mean
that fewer seriously ill patients need to go to public hospitals, which also results in
an economic savings for the public sector.
USAID/El Salvador has capitalized on lessons learned from CLASP (Caribbean and Latin
American Scholarship Program) training projects and has systematically built upon the successes
achieved. For the new Human Capacity Development (HCD) activity due to start in FY 1998, all
of the elements of success according to the Mission’s strategic objectives/results packages will be
included.
For performance and impact monitoring of the new HCD activity, the Mission has established
three main training indicators, as follows:
This indicator measures the impact of training in the workplace by evaluating whether
trainees apply elements of their training. Information is obtained from trainee self-
evaluation as well as from selected supervisor evaluations. Baseline data is to be collected
in 1997.
This indicator measures the impact of training in the workplace by examining whether
trainees assume greater responsibilities. Only those trainees with increased responsibilities
related to their training will be considered. Information is obtained by trainee self-
evaluation and supervisor evaluations. Baseline data is to be collected in 1997.
This indicator measures the impact of training in the workplace by examining whether
trainees successfully complete their action contracts. These action contracts are agreed
upon by both the individual trainee and his or her institution and involve the
implementation of measurable activities. Baseline data is to be collected in 1997.
Data for each indicator will be collected on a rural/national and male/female/total basis.
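As an illustration only (not the Mission's actual system), the short Python sketch below shows one way such disaggregated tallies could be kept; the record fields and values are hypothetical.

    from collections import Counter

    # Hypothetical trainee records; fields and values are illustrative only.
    trainees = [
        {"applied_training": True,  "area": "rural",    "sex": "female"},
        {"applied_training": True,  "area": "national", "sex": "male"},
        {"applied_training": False, "area": "rural",    "sex": "male"},
    ]

    # Tally the first indicator (trainees applying elements of their training)
    # on the rural/national and male/female/total basis described above.
    tally = Counter()
    for t in trainees:
        if t["applied_training"]:
            tally[(t["area"], t["sex"])] += 1
            tally[(t["area"], "total")] += 1
            tally[("all", "total")] += 1

    for key, count in sorted(tally.items()):
        print(key, count)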
USAID/Morocco
Background
In rural Morocco, only 22.5 percent of girls enroll in primary school, and only four out of ten girls
complete the sixth year of the primary cycle. Many rural schools have multigrade classes,
yet most primary school teachers do not have the pedagogical background and practical
skills necessary to teach in these settings. The coursework offered at the teachers’ training
colleges does not include multigrade teaching techniques; nor do student teachers acquire the
skills necessary for adapting the curriculum to local needs or making it gender sensitive.
Faced with this situation, the Ministry of National Education (MNE) has developed the Rural
School Development Program (DSMR) to improve rural primary education in Morocco. In
partnership with parents, students, communities, local authorities, ministries, and NGOs, the
DSMR has set out to revolutionize rural education by improving the quality and relevance of
primary education and integrating primary schools into the communities. USAID, along with
other donors,4 is assisting the ministry in implementing this strategy, which will target the 13
most disadvantaged provinces in the country, as identified by the World Bank and the
government.
USAID’s first initiative in support of the MNE strategy is a training activity that consists of
testing new teaching interventions in 20 schools located in five of the 13 pilot provinces.
The training activity described here was developed in support of the first IR and, more
specifically, the two mentioned below:
4. Other donors include UNICEF, UNDP, the World Bank, and the French government.
Under USAID’s Training for Development project, an ambitious training plan was designed to
improve the teaching methodology in rural areas and to make the school system more
responsive to the needs of the regions. A series of in-country training interventions have been
implemented since the beginning of the current school year and will continue to take place
over the next one and a half years (1997 to mid-1998). The primary focus is to provide
educators with the necessary skills to develop effective teaching objectives, adapt locally
relevant and gender-sensitive curriculum, and manage multigrade classroom settings.
The core training group consists of primary school teachers, inspectors, school directors, and
Teacher Training College faculty who work in the pilot regions, as well as ministry staff.
The five major components of the training strategy are outlined below:
Based on a joint effort involving MNE and USAID staff and training specialists, the training
needs of primary school teachers, inspectors, and directors were assessed. A plan was also
designed to determine what skills are needed to enable rural primary schools to offer a
relevant and participatory curriculum and what performance gaps prevent this from happening.
The following table illustrates the human resource constraints and performance gaps identified
under each of the two lower intermediate results.
Intermediate result: Multigrade, gender-sensitive, locally relevant curriculum developed
  Performance gaps:
  - lack of baseline data to measure how much curriculum adaptation has been accomplished
  - lack of parental input in focus groups to discuss priority needs in curriculum adaptation
  - lack of gender-sensitive materials, ethics, and pedagogy in teaching colleges
  Training interventions:
  - multigrade classroom workshops for pilot school teachers and professors (3/97)
  - management skills workshop for teachers in each of the pilot regions (4/97)
  - curriculum adaptation in the pilot regions (5/97)
  - teacher conferences (7/97)

Intermediate result: Cadre of competent educators developed
  Performance gaps:
  - teachers' lack of experience in student-oriented classrooms
  - teachers' lack of experience in adapting classroom progress based on student abilities
  - teachers' attitude and lack of enthusiasm for working in rural settings
  Training interventions:
  - curriculum adaptation for central team teachers in Rabat (9/97)
  - multigrade classroom techniques for teachers in the pilot schools (10/97)
  - curriculum adaptation for teachers in the pilot schools (11/97)

Additional constraints identified:
  - lack of awareness among community leaders, parents, and urban teachers about the importance of recruiting and keeping girls in school
  - lack of female teachers and administrators from the local or urban areas
  - lack of research skills for ongoing quantitative and qualitative data collection
  - lack of management skills for ongoing monitoring to motivate, inform, practice, and apply workshop experiences
  - lack of a school philosophy that reflects different realities and takes into consideration the introduction of students to a second language
The following training skills and teaching techniques were identified as necessary to lay the
groundwork for increased responsiveness to girls’ educational needs:
· Gender awareness
Qualitative and quantitative indicators were developed to measure the skills, knowledge, and
attitudes of the participants, as well as the number of primary schools offering an improved
multigrade curriculum.
4. Witness Schools
In each of the five pilot areas, a witness school was set up to serve as a control group. Staff
at this school will not be involved in the training, nor will the school receive any assistance.
Statistics will be gathered from these schools to compare and analyze against the performance
of the pilot schools.
Annual targets have been established to determine enrollment and retention rates. USAID will
acquire the Education Automated Statistical Information System Toolkit (ED*ASSIST)
to collect, analyze, and report data. ED*ASSIST is an integrated set of tools designed to assist
ministries of education in planning and implementing systems used to collect educational
statistics in a timely, efficient, and reliable manner.
USAID/Namibia
Background
At independence, in March 1990, the new government of the Republic of Namibia inherited a
legacy of apartheid policies. Virtually all the country’s natural resources and most of its social
services had been directed primarily to a five percent minority of the most advantaged sector,
while the needs of the majority of the population were largely neglected. This created a dual
economy in the classical colonial mode with wide disparities in income and resource
allocations. Seven years after independence, Namibia continues to struggle to overcome this
economic and social heritage.
Over the past five years, approximately 70 percent of USAID/Namibia’s resources have been
invested in education and training. The goal of USAID’s assistance program is "the
strengthening of Namibia’s new democracy through the social, economic, and political
empowerment of Namibians historically disadvantaged by apartheid." (Results Review, 1997)
In keeping with the strategy to use PVOs and local NGOs to address the development needs
in Namibia, USAID initiated in FY 1992 a five-year NGO capacity building program. This
project, entitled Reaching Out with Education to Adults in Development (READ), was
designed to provide a combination of grants, training, and technical assistance to NGOs to
increase their capacity to deliver services and education to historically disadvantaged adults.
This case study focuses on the training component of this integrated support effort.
USAID/Namibia SOs address the need to develop long-neglected human resources. One SO
in particular places emphasis on fostering and strengthening the human and institutional
capacity of local NGOs engaged in adult training and/or civic advocacy across a wide range
of sectors. While the READ project addresses two of the Mission’s four SOs, its training
component falls primarily under the one listed below:
The USAID strategic objective for increasing the skills of NGO personnel reads:
In the last three years, the READ project has provided training to 400-plus participants
through a combination of workshop series, individual sectoral workshops, conferences, and
seminars. Early evaluations indicated that training impact was greatest in the areas where
participants had the opportunity to acquire targeted skills, apply these skills during field
assignments in their organizations, and return to share their experiences. Thus, the core of the
overall training design lies within three separate workshop series designed to increase both the
technical skills and professional qualifications of NGO personnel, as well as enhance their
ability to transfer these new skills to others. Participants were selected from the staff of
approximately 40 NGOs and two government ministries. Most training programs were
designed and cofacilitated with NGO input. In the case of the training of trainers (ToT) series,
building institutional capacity within NGOs to implement these workshops in the future has
been a central part of the overall implementation strategy.
Participants: Health workers and trainers of NGOs that are working in the
field of HIV/AIDS education.
Upon completion of the core ToT, participants’ mastery of technical skills and leadership
qualities are assessed based on the following areas of expertise: knowledge of training
theories, facilitation skills, curriculum development skills, materials development skills,
analytical skills, knowledge of training content, training implementation and management
skills, communication and interpersonal skills, needs assessment, and monitoring and
evaluation skills. The ability of participants to demonstrate and apply these skills qualifies
them as Certified Participatory Trainers. HIV/AIDS Trainer Certification is based on similar criteria.
Participants: Training staff of NGOs who have participated in a workshop series and
expressed an interest in cofacilitating training for other NGO trainers.
In addition to the above, the READ project has actively supported the establishment of a
National Trainers Network for Namibia. This network will help maintain, expand, and build
on connections established during training between individuals and organizations involved in
training in the country. Also, to help the Ministry of Education deal with nonformal and
participatory approaches to education and to enable them to interact with NGO efforts in the
country, the READ project sponsored four staff of the Directorate of Adult Basic Education to
participate in the ToT series, and an additional five staff to attend a Master’s degree program
at the Center for International Education, University of Massachusetts.
Table X. Mechanisms Used to Monitor and Evaluate the Effectiveness and Impact of the Training Program

- ToT Needs Assessment Questionnaire
  Purpose: to collect background/baseline information on the needs of potential training participants.
  Computer-based information: MS Access database.
  Implementation time table: beginning of the training cycle.

- Participant Assessment (ToT Self-Assessment Forms, HIV/AIDS Training Appraisal Form, Daily Evaluation/Steering Committee session designs)
  Purpose: to track the progress of training through formative evaluation mechanisms and self-assessment tailored to the training topics and objectives.
  Computer-based information: Lotus 123 spreadsheet and MS Access database.
  Implementation time table: ToT 1 (pre and post) and ToT 4.

- Training Activity Reports
  Purpose: to document relevant information on participation in training activities conducted under READ sponsorship.
  Computer-based information: Lotus 123 training activity spreadsheet.
  Implementation time table: end of the training activity.

- Trainer Skills Inventories (Master Participatory Trainer, Certified Participatory Trainer, HIV/AIDS Trainer)
  Purpose: to document and monitor the skill levels required for trainer/participant certification.
  Computer-based information: MS Word list and forms.
  Implementation time table: beginning of, during, and end of the training program.

- Training Impact: Trainer Profiles and The Trainer's Net (newsletter), including articles, list of participants, shifts in responsibility, and other mechanisms
  Purpose: to document the impact of participatory training on participants and their NGO clients.
  Computer-based information: MS Word merge file and MS Access database.
  Implementation time table: to be completed and discussed within one month of the conclusion of training.

- List of READ Project Training Materials
  Purpose: to monitor READ-initiated contributions to the participatory Namibian training literature.
  Computer-based information: MS Word list.
  Implementation time table: upon publication/printing of manuals.

- Impact Assessment Matrix
  Purpose: to identify the impact of project inputs (training, technical assistance, and subgrants) at the participant, organization, and clients-of-organization levels.
  Computer-based information: MS Word files.
  Implementation time table: yearly.
Ten of the 37 individuals who participated in the core ToT series have received official
promotions. Of the 76 participants in both ToTs, the majority report an increase in job
responsibilities and productivity. As a result of the increased commitment, effort, and skill
level that ToT graduates bring to their responsibilities, managers within their
organizations look to them to contribute outside their areas of responsibility. Thus, while
position titles may not change, in the majority of cases, participants report that their
responsibilities and status within the organization have expanded as a result of their increased
skills and ability.
Following are examples of how participants have applied their skills in their organizations:
In addition to the eleven manuals produced during the ToT workshops, five
participants from ToT’95 and ToT’96 have designed, produced, and published
participatory training curricula and manuals for their organizations. These
manuals are used in training and cover topics including Training of Business
Trainers, Training of Community Development Committees, and Training of
Teachers.
A female participant from the Ministry of Education was appointed acting head
of the Training Division. As one of her innovations, she designed and
implemented nationwide regional training workshops for promoters to introduce
new formats and mechanisms for lesson planning based directly on content
introduced during ToT training. As a result, literacy promoters now focus on
techniques, have a greater understanding of what they should do, recognize what
materials they need, and conduct more effective literacy lessons.
A participant from ToT’97 was promoted to Head Trainer in a local NGO and
has integrated content learned during the ToT into the organization’s core training
offered to field workers. He incorporated some of the ToT’s more challenging
topics and tools for training design and analysis.
Staff of the Ministry of Education and two NGOs are working together in
ToT’97 to develop a training curriculum for English promoters from both
organizations to strengthen skills in planning participatory English lessons.
A primary product of this collaboration will be a collection of sample lesson
plans based on the current English language texts used in adult literacy
classrooms. Overall, networking between the ministry and literacy NGOs has
expanded considerably as a result of contact initiated through the ToT series.
Since the ultimate beneficiaries of this development intervention are the clients of the NGOs
(men and women living in rural and urban communities throughout Namibia), the real test of
training impact lies in the results found at this final level. Trainers report that the introduction
of participatory training skills empowers people to take actions that have a direct impact on
increased income, leadership, community development, and advocacy efforts.
A few specific examples follow:
Workers who are members of the Namibia Food and Allied Workers Union (NAFAU)
are demanding that their companies sign HIV/AIDS policy agreements with the Union
to protect the rights of workers.
Small-scale entrepreneurs, frustrated by their inability to get loans from the banks,
persistently approached their branch in Katutura and, with the help and support of
COSEDA, their credit support NGO, encouraged the bank to reconsider its policy on
loan size.
HIV/AIDS Committees are taking leadership and responsibility for mobilizing their
communities against HIV/AIDS. HIV/AIDS Community Educators are actively
contacting community members and interacting with traditional healers to change high-
risk practices and behaviors. The transition of control is successfully passing from the
initiating NGOs to the community bases as planned.
In general, analysis of such reports supports the contention that this new approach to training
has planted seeds which continue to grow and encompass broader areas. In the process, it
effectively supports the growth of democratic interactions and a strong civil society.
The numerous contractors and grantees involved in designing and implementing training
programs employ a wide variety of reporting procedures, mechanisms, and formats to record
training-related information. The lack of a consistent, standardized reporting practice has resulted
in duplicate or incomplete data, making it impossible to account accurately for training costs,
number of participants trained, or results achieved.
TraiNet, with greater applicability and fewer data entry requirements, will replace the
Participant Training Management System (PTMS), the Participant Data Form (PDF), the
Project Implementation Order/Participant (PIO/P), biodata forms, statements of expenditures,
and budget worksheets. As the information evolves through the various stages of training, the
data will be submitted and included in the TraiNet database. Development InfoStructure will
receive this information and update the web page, <www.devis.com/traiNet>, on a regular basis.
Implementation: During Phase I (October 1996-February 1997), TraiNet was designed and
installed on the Intranet in USAID/Washington. Field testing started during Phase I and was
expanded during Phase II with visits to five Missions. Phase III—full installation—is scheduled
to start in January 1998 through a series of regional training workshops and country visits.
Working with G/HCD, Development InfoStructure will forward to stateside contractors and to all
Missions an information packet detailing the installation, use, and maintenance of the TraiNet
database. Contractors who do not receive this packet should contact Development InfoStructure at
(703) 525-6485 or <[email protected]>.
The importance of documenting and updating this information on a regular basis cannot be
overemphasized. As the Agency has transitioned from project-level activities to strategic objectives,
much of the information crucial to conducting research remains in the field and can no longer be
accessed through the USAID Development Experience System database, the Agency’s central
repository of information. The performance tracking and reporting capabilities of TraiNet will
enable field and Washington staff to document and report activities, generate centralized reports,
as well as analyze and exchange information with the intent of enhancing efficiency and learning
from experience.
Aguirre International. 1994. Training Impact and Development: An Evaluation of the Impact of the
Cooperative Association of States for Scholarships (CASS) Program. HERNS Project. USAID.
Washington, DC. (PD-ABK-526)
AMEX International, Inc. 1995. Impact Evaluation Resources Guide. USAID, Washington, DC.
Brinkerhoff, Robert. 1989. Evaluating Training Programs in Business and Industry. Jossey-Bass Inc.,
San Francisco, California.
Brinkerhoff, Robert. 1987. Achieving Results from Training. Jossey-Bass Inc., San Francisco, California.
Brinkerhoff, Robert, and Stephen Gill. 1994. The Learning Alliance. Jossey-Bass Inc., San
Francisco, California.
Broad, Mary and John Newstrom. 1992. Transfer of Training: Action Packed Strategies to Ensure
High Payoff from Training Investments. Addison-Wesley, Reading, Massachusetts.
Creative Associates International. 1995. How to Design a Country Training Strategy for its Impact
on Development. Washington, DC.
Gillies, John A. 1992. Training for Development. Academy for Educational Development, USAID,
Washington, DC. (PN-ABL-295)
Kirkpatrick, Donald. 1994. Evaluating Training Programs: The Four Levels. Berrett-Koehler
Publishers, Inc., San Francisco, California.
Kirkpatrick, Donald. "Great Ideas Revisited." Training and Development. January 1996, pp. 54-59.
Phillips, Jack. "Was it the Training?" Training and Development. March 1996, pp. 29-32.
Phillips, Jack. 1994. Measuring Return on Investment. Volume 1. American Society for Training and
Development, Alexandria, Virginia.
Phillips, Jack. "ROI: The Search for Best Practices." Training and Development. February 1996,
p.42-47.
United States Agency for International Development. 1996. Program Performance Monitoring and
Evaluation at USAID. PPC/CDIE/PME.
United States Agency for International Development. 1996. Selecting Performance Indicators.
TIPS #6. PPC/CDIE/PME. Washington, DC. (PN-ABY-214)
United States Agency for International Development. 1995. The Agency’s Strategic Framework and
Indicators, 1995-1996. PPC/CDIE/PME. Washington, DC. (PN-ABX-284)
United States Agency for International Development. 1994. Strategies for Sustainable Development,
Washington, DC. (PN-ABQ-636)
United States Agency for International Development. 1992. Program Overview: Education and
Human Resources Development, Latin America and the Caribbean. EHRTS Project.
Washington, DC. (PD-ABD-012)
United States Agency for International Development. 1992. Training for Development.
EHRTS Project. Washington, DC. (PN-ABL-295)
United States Agency for International Development. 1986. Review of Participant Training
Evaluation Studies. Evaluation Occasional Paper #11. Washington, DC. (PN-AAV-288)
www.astd.org - Site of the American Society for Training and Development, one of the leading
sources in the field of training and human resource development. Provides research, analysis and
practical information. See under Site Index for links to a wealth of resources.
www.shrm.org - Site of the Society for Human Resource Management; provides resources similar to
those of ASTD.