Models of Programme Evaluation
Mohammad Rezaul Karim
Abstract
The article aims to provide a systematic study of programme evaluation (PE) so that policy researchers can effectively evaluate any government programme and make appropriate recommendations. This is a review paper based on secondary sources of data related mainly to PE. The study describes the various components of a PE model, the methodology of PE, and data collection techniques. PE is usually defined as the process carried out to help decision makers judge the effectiveness of a programme and decide whether it should be continued, rectified, modified, and/or dropped, on the basis of a systematic analysis of the programme's interventions. Many approaches and models are used in programme evaluation, of which CIPP (the acronym of Context, Input, Process and Product) is the most popular and widely utilized. It is a comprehensive programme evaluation model that provides the basis for taking effective decisions about a programme. Context covers needs, problems, constraints, assets and opportunities linked to the goals of the programme and the needs of beneficiaries. Input assesses alternative approaches, personnel planning, and budget. Process evaluates the implementation of the plan and the work schedule. Product assesses the intended and unintended outcomes of the programme. The methodology of the CIPP model includes the collection, organization, analysis and reporting of information. It thus helps policy decision makers to examine and decide the fate of the programme based on the collected information.
Introduction
This paper presents models of and approaches to programme evaluation for policy evaluators, so that they can identify how a programme evaluation should be carried out to judge the fate of any programme.
The objective of the paper is to identify the criteria of each component of the model and the procedures of programme evaluation in order to produce a programme evaluation. The paper draws on secondary sources of data collected from books and journal articles on programme evaluation related to the said model. It is a review paper developed on an interpretative approach, taking its theoretical frame from the CIPP model and connecting it to other research articles.
Evaluation is defined as a systematic approach that provides an introductory view of the broad range of research activities essential for designing, implementing and appraising the utility of any project or programme (Vedung, 1997). Programme evaluation is defined as the systematic method of collecting, analysing, and using information to determine the effectiveness and efficiency of projects, policies and programmes (Chacón-Moscoso et al.; Ho et al., 2011; Rossi and Freeman, 1989). The purpose of programme evaluation is to examine why and how a programme is undertaken and its funds are spent, and to explain the intended effects of the initiative. Since it is a systematic process, it focuses on definitions, objectives, actions and activities in order to learn how the programme could be improved and whether the programme is worthwhile. The process provides the best grounds for justifying goals and objectives.
The Joint Committee (1994:3) defined an evaluation standard as a 'principle mutually agreed to by people engaged in the professional practice of evaluation, that, if met, will enhance the quality and fairness of an evaluation'. The committee set criteria of evaluation standards to assess a programme, emphasizing utility, feasibility, propriety and accuracy. Historically, programme evaluation started in 1792 in the field of education and gradually developed into a systematic model in 1967 through the contributions of Scriven, Stufflebeam (also in 1971), and Stake. The professional age of programme evaluation is the decade 1973 to 1983, and much happened in the eighteen years after 1983 that contributed to the expansion and integration of the concept; later, Stufflebeam developed a checklist for evaluators as the fifth instalment of the model (Shadish et al., 1991; Stufflebeam et al., 2000; Stufflebeam, 2007). There are two kinds of programme evaluation: formative, which provides continual feedback to assist in planning, developing and delivering a programme or service, and summative, which serves consumers by providing them with independent assessments comparing the costs, merits, worth and significance of competing programmes or products (Chotchakornpant, 2013; Stufflebeam, 2000).
If the circumstances are right, evaluators can judge a programme and make the necessary recommendations. Some programmes, however, are genuinely difficult to evaluate and require a systematic approach to meet specific criteria. CIPP provides a structured and appropriate basis for programme evaluation: evaluators, policy makers and decision makers can evaluate a programme by following a series of steps. It offers a concrete framework for programme evaluation based on the context considered, the inputs provided, the processes followed and the products expected. Programme evaluation brings accountability, intervention improvement or improvement of basic knowledge about the programme; the first two are the most prominent rationales, while the third comes at best as a side effect (Karim, 2014; Vedung, 1997).
Programme evaluation plays an important role in improving public policy making. The literature also explains ways of studying programme evaluation and why theory-based evaluation matters. Furthermore, it sets out the detailed implications of the political culture of public programmes and the challenges of getting findings used in the design of programme evaluation studies (Newcomer, 2015).
Models are structures for analysing and evaluating an issue of interest, and several are used to judge the effectiveness of programmes. Some scholars use 'approaches' to evaluate programmes, which other scholars term 'models'. Frequent approaches are goal attainment, goal-free evaluation, side effect, comprehensive evaluation, client-oriented, stakeholder, policy commission, productivity, cost-effectiveness, cost-efficiency, and the CIPP (context, input, process and product) model. CIPP is regarded as a model of evaluation by Stufflebeam and the others are treated as approaches to programme evaluation, whereas Vedung (2008) terms all of these models. For a more appropriate and detailed analysis, the CIPP model has been taken up for explanation in this paper.
The CIPP model developed by Stufflebeam entails four basic components: context, inputs, processes and products. Context evaluation assesses needs, problems, resources and possible opportunities to assist decision makers in defining goals and priorities and to help the implementers concerned evaluate those goals, priorities and possible outcomes (Filella-Guin and Blanch-Plana, 2002; Stufflebeam and Shinkfield, 2007). Input evaluation examines alternative approaches, competing action plans, the staffing plan and the financial plan for feasibility and potential cost-effectiveness in order to meet targeted needs and achieve goals. After the context evaluation, those involved in the decision-making process use the input evaluation to choose among competing plans, assign the right persons to the right jobs, allocate budget according to requirements, allocate the necessary resources, and prepare the plan of total activities, which ultimately helps staff to judge the decision. Context and input evaluation concern the planning of the programme; the other two evaluations relate to programme implementation. Once the plan is in place, its implementation is assessed in the process evaluation, which helps to perform the activities needed for the programme and provides grounds for judging the implementation and its outcomes (Stufflebeam and Shinkfield, 2007). It evaluates the ongoing implementation process and helps to analyse the outcomes. The fourth factor of CIPP is product evaluation, which assesses the various types of outcomes, intended and unintended, short-term and long-term, and the mechanisms that enable staff to achieve the outcomes and ultimately meet the targeted needs. Product evaluation comprises impact, effectiveness, sustainability and transportability: whether the beneficiaries benefit from the programme, to what extent the outcomes are effective, to what extent they will remain usable in future, and whether the outcomes can be transported to and adapted by other users. The CIPP components have core values linked, respectively, with goals, plans, actions and outcomes. Goal-setting tasks call for context evaluation to provide the information needed to justify the goals of the programme. Plans call for input evaluation to provide judgments for the improvement of plans. The third step, programme actions, provides judgments on all activities, feedback and review to strengthen the performance of staff. Finally, the outcomes of the programme raise the question of product evaluation. This provides a clear picture of needs assessment and the judgment of outcomes (Stufflebeam and Shinkfield, 2007). This relationship is presented in the diagram below:
Figure-1: Key components of CIPP Evaluation Model and associated relationships with
program
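To make the model's structure concrete, here is a minimal Python sketch that encodes the four CIPP components together with the core value and guiding question each is linked with, following the description above. It is purely illustrative: the class and field names are this sketch's own assumptions, not notation from the CIPP literature.

```python
from dataclasses import dataclass

@dataclass
class CIPPComponent:
    """One of the four CIPP components with its core value and guiding question."""
    name: str
    core_value: str        # what the component is linked with: goals, plans, actions, outcomes
    guiding_question: str  # the decision the component informs

# The four components as described by Stufflebeam and Shinkfield (2007)
CIPP_MODEL = [
    CIPPComponent("Context", "goals",
                  "What needs, problems, assets and opportunities define the goals?"),
    CIPPComponent("Input", "plans",
                  "Which competing plan, staffing and budget best meets the needs?"),
    CIPPComponent("Process", "actions",
                  "Is the plan being implemented as intended, and where does it deviate?"),
    CIPPComponent("Product", "outcomes",
                  "What intended and unintended, short- and long-term outcomes resulted?"),
]

for component in CIPP_MODEL:
    print(f"{component.name} -> {component.core_value}: {component.guiding_question}")
```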
Table-1: CIPP model in evaluation role with its purpose, task and method

Context
Purpose: Identify and diagnose the problems or barriers which might inhibit achieving the goals and objectives.
Task: Diagnose problems or barriers.
Method: Judgment of experts and clients on barriers and problems; judgment of experts and clients on desired goals and objectives.

Input
Purpose: Design a program (intervention) to meet the objectives; determine the resources needed to deliver the program; determine whether staff and available resources are adequate to implement the program.
Task: Develop a plan for the program by examining various intervention strategies, the strategies for achieving the plan (time, funds, potential barriers) and the capabilities and resources of staff (expertise, funding and barriers); develop a program implementation plan which considers time, resources, and barriers to overcome.
Method: Use a SWOT (strengths, weaknesses, opportunities, threats) approach.

Process
Purpose: Provide decision makers with information necessary to determine if the program needs to be accepted, amended, or terminated.
Task: Identify discrepancies between actual implementation and intended design.
Method: A staff member serves as the evaluator.
The design of the programme must also respond to internal and external stakeholders' interests. Process evaluation develops on-going evaluation of the implementation of major strategies through various tactical programmes to accept, refine, or correct the programme design. Product evaluation assesses the outcome of the programme to decide whether to accept, amend, or terminate the programme, using criteria directly related to the goals and objectives.
The CIPP model is a systematic approach to programme evaluation that helps explain the issues related to each component of the model, and it offers a comprehensive guideline for programme evaluators. Each component is described below:
Context Evaluation
This is the first component of the model and assesses four issues: needs, problems, assets and opportunities. Context evaluation establishes why the programme has been undertaken, what the associated problems are, and the prospective ways of solving those problems. Needs comprise the issues related to fulfilling the purposes to be achieved under the institution's mission and legal standards. Problems are the obstacles that must be overcome to meet the targeted needs. Assets are the resources to be used for the programme in order to address its purpose, typically including the expertise and services available in the local area (Stufflebeam and Shinkfield, 2007). Opportunities are the probable supports for meeting needs and finding effective solutions to those problems. These four elements are deemed important for programme design and service provision. Context evaluation is designed to achieve specific objectives related to the mission and goals, such as setting the scope of intended services, finding the real beneficiaries and their needs, identifying problems and possible solutions, ascertaining the assets and funds necessary to meet the targeted results, setting the basis and clarity for goals, and providing the basis for judging expected outcomes and improving services (Stufflebeam and Shinkfield, 2007). Apart from meeting the goals of the programme, context evaluation addresses these requirements following standard methodologies, from information collection to analysis, in order to provide the basis for evaluators. It covers identifying stakeholders' views and testing hypotheses regarding the services by applying various techniques, including documents and interviews (Babbie, 2007; Stufflebeam and Shinkfield, 2007). Context evaluation can be conducted before, during and after a project, programme or other intervention. An institution may carry out this evaluation before a programme is implemented to examine the context in line with its goals; on-going and after-the-programme evaluation can be combined with the other three elements of CIPP to match and judge. For example, the Suicide Prevention Program in Kaohsiung City, Taiwan was evaluated using the CIPP evaluation model, where the context analysis covered population, area, city type and suicide rate (Ho et al., 2011). For context evaluation, evaluators are advised to follow the checklist proposed in Karim (2014) and Stufflebeam (2007).
Input Evaluation
The purpose of input evaluation is to assist decision makers in identifying relevant approaches, the best alternatives and budgetary allocations, and in choosing among them for the execution of the programme. This evaluation asks the question 'how should it be done', giving an overall picture of the stakeholders' environment, the political environment with its problems and prospects, the legal framework and constraints, and the availability of resources, so that the evaluator can judge the most effective process and help decision makers reach the right decision about the programme (Stufflebeam, 1971; Stufflebeam and Shinkfield, 2007; Zhang et al., 2011). It allows the potential and expertise of existing employees to be utilized. Finding alternative strategies is crucial to meeting client needs, and these can be addressed on the basis of the evaluator's report. These strategies need action plans and the necessary budgets and resources. Input evaluation provides a clear picture and a judgmental basis for the use of these resources and finances, indicating whether the programme will succeed or fail; the best evaluation shows how the best output can be achieved using minimum resources. Input evaluation can be conducted in different stages, starting from identifying specific needs and objectives, which can be gathered through literature review, programme visits, expert opinions, secondary-source information and initial proposals. All of these concern information collection, and relevant, authentic information provides the basis for effective evaluation. Stakeholders' views and clients' needs must be evaluated to find the best fit for the programme outcomes. Evaluators should consider the needs of the targeted beneficiaries and prepare approaches that respond to the targeted organizational problems, which requires proper planning. All the resources to be used must be cost-effective, politically viable, administratively feasible and operationally fit. The evaluators should give decision makers the best advice for the best solution, for which obtaining the most compatible proposal and rating criteria are crucial. Input evaluation also shows decision makers how the institution can combine its best features. It is linked with accumulating information for design and budget preparation in order to combine strategy and action plan (Stufflebeam and Shinkfield, 2007). Input evaluation has several applications, and researchers have utilized it to evaluate government programmes. For example, in the CIPP evaluation of the Suicide Prevention Program in Kaohsiung City, Taiwan, Ho et al. (2011) identified and critically analysed the project, employees and finance.
For input evaluation, evaluators are advised to follow this checklist (Stufflebeam, 2007):
1. Identify and investigate existing programs that could serve as a model for the contemplated program. (Actor: Policy Analyst, Researcher; Step: Collecting primary information)
2. Assess the program's proposed strategy for responsiveness to assessed needs and feasibility. (Actor: Policy Researchers; Step: Need assessment)
3. Assess the program's budget for its sufficiency to fund the needed work. (Actor: Policy Analyst, Researcher; Step: Budgeting)
4. Assess the program's strategy against pertinent research and development literature. (Actor: Policy Analyst, Researcher, Policy Evaluator; Step: Strategy)
5. Assess the merit of the program's strategy compared with alternative strategies found in similar programs. (Actor: Policy Analyst, Researcher; Step: Comparison)
6. Assess the program's work plan and schedule for sufficiency, feasibility, and political viability. (Actor: Policy Analyst, Policy Evaluator; Step: Progress)
7. Compile a draft input evaluation report and send it to the client and agreed-upon stakeholders. (Actor: Policy Analyst, Researcher; Step: Draft report)
8. Discuss input evaluation findings in a feedback workshop. (Actor: Policy Analyst, Policy Evaluator; Step: Analysis of findings)
9. Finalize the input evaluation report and associated visual aids and provide them to the client and agreed-upon stakeholders. (Actor: Policy Analyst, Researcher; Step: Finalizing report)
10. Assess the merit of the program's strategy compared with alternative strategies found in similar programs. (Actor: Policy Owner; Step: Future direction)
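A checklist like this lends itself to simple tooling. The sketch below is illustrative only: the ChecklistItem type and its field names are assumptions of this sketch, not part of Stufflebeam's checklist. It stores each row as structured data so that evaluation progress can be tracked and reported.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One row of an input-evaluation checklist: activity, responsible actors, step."""
    activity: str
    actors: list[str]
    step: str
    done: bool = False

checklist = [
    ChecklistItem("Identify existing programs that could serve as a model",
                  ["Policy Analyst", "Researcher"], "Collecting primary information"),
    ChecklistItem("Assess the program's budget for sufficiency",
                  ["Policy Analyst", "Researcher"], "Budgeting"),
    ChecklistItem("Compile a draft input evaluation report",
                  ["Policy Analyst", "Researcher"], "Draft report"),
]

checklist[0].done = True  # mark the first activity as completed

pending = [item for item in checklist if not item.done]
print(f"{len(pending)} of {len(checklist)} activities still pending:")
for item in pending:
    print(f"- {item.activity} (actors: {', '.join(item.actors)}; step: {item.step})")
```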
Process Evaluation
In process evaluation, evaluators can use various data collection methods, but they have to be cautious about the inherent problems of each method so that the report is not affected. It is suggested that data collection be carried out unobtrusively, so that it does not create obstacles or interfere with the process; as rapport develops, a more structured approach can gradually be used. Process evaluation is designed to help the staff by producing a report summarizing the data, findings and observed issues that the staff should address. The report is generally presented at a staff meeting, with the staff's director requested to preside. The process is one of continuous checking and development in which staff and evaluators share, take notes, analyse, match findings against the next intervention and present the report again to help the staff develop the programme further. Process evaluation is a continuous learning procedure for staff and evaluators that helps to rectify problems, ensure quality and maintain sustainability. It is carried out periodically by the staff and integrated with the surrounding environment. Checking the activities helps to find deviations, variations and differences across different people, groups or sites (Stufflebeam and Shinkfield, 2007). Since the evaluator sketches a picture of the process with the necessary and relevant issues, the report helps to interpret the potential outcomes: whether the outcomes of the programme will be effective for the clients, and to what extent the best services can be produced for the beneficiaries. Process evaluation emphasizes implementation, requiring prior and on-going procedures for the programme. For example, Ho et al. (2011), in the CIPP evaluation of the Suicide Prevention Program in Kaohsiung City, Taiwan, identified psycho-education activities and suicide-prevention sheets, promotion of the programme website, monitoring of suicide data, a 24-hour crisis line, gate-keeper training, follow-up visits and medical resources referral, all of which relate to process evaluation and intervention in the on-going programme and its potential outcomes. Vedung (1997) proposes a guideline for checking the explanatory factors in process evaluation, consisting of six issues: historical background of the intervention, intervention design, implementation, addressee response, other government interventions, and issue networks and other environment (details in appendix-1). These possible contingencies, explanatory factors, independent variables and determinants may interrupt policies, programmes, projects and their intervention outcomes.
For process evaluation, evaluators are advised to follow this checklist (Stufflebeam, 2007):
1. Progress: Periodically draft written reports on process evaluation findings and provide the draft reports to the client and agreed-upon stakeholders. (Actor: Policy Analyst)
2. Discussing findings: Present and discuss process evaluation findings in feedback workshops. (Actor: Policy Researcher, Policy Owner)
3. Finalizing report: Finalize each process evaluation report (possibly incorporated into a larger report) and associated visual aids, and provide them to the client and agreed-upon stakeholders. (Actor: Policy Evaluator, Policy Owner)
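Process evaluation centres on spotting discrepancies between the intended design and the actual implementation (see Table-1). The following sketch shows one simple way such a comparison could be automated; the milestone data and the function name are assumptions of this illustration, not part of the CIPP model itself.

```python
# Compare planned milestones against actual implementation dates to flag
# discrepancies, in the spirit of CIPP process evaluation.
from datetime import date

planned = {
    "staff training completed": date(2021, 3, 1),
    "crisis line operational": date(2021, 4, 15),
    "first follow-up visits": date(2021, 6, 1),
}

actual = {
    "staff training completed": date(2021, 3, 10),
    "crisis line operational": date(2021, 4, 14),
    # "first follow-up visits" has not happened yet
}

def find_discrepancies(planned, actual, tolerance_days=7):
    """Report milestones that are missing or later than planned beyond a tolerance."""
    issues = []
    for milestone, due in planned.items():
        done = actual.get(milestone)
        if done is None:
            issues.append(f"{milestone}: not yet implemented (planned {due})")
        elif (done - due).days > tolerance_days:
            issues.append(f"{milestone}: {(done - due).days} days late")
    return issues

for issue in find_discrepancies(planned, actual):
    print("DISCREPANCY:", issue)
```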
Product Evaluation
The fourth component of CIPP is product evaluation, which attempts to measure, interpret and judge an enterprise's achievements. Its main objectives are to evaluate the extent to which the needs of the beneficiaries are met and to identify the intended and unintended outcomes, whether positive or negative. Outcomes may be short-term, intermediate or long-term, with the evaluator placing the emphasis on the last.
As product evaluation is done to help decide whether a programme, project or service is worth continuing, repeating, extending, modifying or replacing, reporting on this process is vital. The report may be an interim one addressing the targeted needs, or an end report used for modification or for replication in other similar projects. A follow-up report is also suggested for assessing long-term outcomes. The reports typically include the results of the assessed needs, the costs incurred, the execution plan and an analysis of results by groups and individuals. The detailed report guides decision makers on the next activities to follow, based on the various issues found in the process, and also brings accountability at all levels. It is warned, however, that a premature report might negatively affect an on-going project.
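Product evaluation sorts outcomes along two axes: intended versus unintended, and short- versus long-term. The sketch below tags outcomes this way so a report can group them; the example outcome records are invented purely for illustration.

```python
# Classify program outcomes for a product-evaluation report.
# The example outcomes are invented for illustration.
from collections import defaultdict

outcomes = [
    {"description": "Reduced suicide rate in target districts", "intended": True, "term": "long"},
    {"description": "Higher call volume on the crisis line", "intended": True, "term": "short"},
    {"description": "Increased workload for trained gatekeepers", "intended": False, "term": "short"},
]

groups = defaultdict(list)
for outcome in outcomes:
    key = ("intended" if outcome["intended"] else "unintended", outcome["term"])
    groups[key].append(outcome["description"])

for (kind, term), items in sorted(groups.items()):
    print(f"{kind}, {term}-term:")
    for description in items:
        print(f"  - {description}")
```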
Evaluators can follow the guidelines in Stufflebeam (2007) for product evaluation, particularly for evaluating the intended and unintended consequences of outcomes, i.e. impact.
CIPP and Relevance to Formative and Summative Evaluation Roles
The CIPP model was developed on the basis of learning by doing and is an ongoing process aimed at taking corrective measures to develop the programme. The four components of the CIPP model have direct relevance to formative and summative programme evaluation, as presented in tabular form below:
Rossi and Freeman (1989) propose a systematic approach to evaluation that guides programme evaluation, starting from defining evaluation and programme evaluation and explaining the diagnostic procedures of evaluation. The diagnostic procedure covers needs assessment, potential targets and risks. The next step is tailoring the programme, which checks on-going projects for accountability, adjusts programme procedures to increase effectiveness, designs the programme, and compares goals and objectives. Programme monitoring then checks whether the programme reaches the target group, whether the services are consistent, and the utility of the resources. In this systematic evaluation approach there follows impact assessment, with its strategies and its randomized and non-randomized designs for checking the effectiveness of the programme; it also covers the efficiency of resources through cost-benefit and cost-effectiveness analysis. Rossi and Freeman's systematic approach and Stufflebeam's CIPP model are quite similar as programme evaluation systems: CIPP, in the name of context, input, process and product, analyses needs, problems, opportunities, work plans, targeted benefits and implementation plans, as well as impact, in order to show the effectiveness of the programme. In both cases the plan and procedural issues are more or less the same.
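As a small worked illustration of the cost-effectiveness analysis mentioned above, the sketch below compares two hypothetical programme alternatives by cost per unit of outcome; all figures are invented for the example.

```python
# Cost-effectiveness comparison of two hypothetical program alternatives.
# Cost-effectiveness ratio = total cost / units of outcome achieved.
programs = {
    "Alternative A": {"cost": 500_000.0, "outcome_units": 250},  # e.g. 250 clients served
    "Alternative B": {"cost": 320_000.0, "outcome_units": 140},
}

for name, p in programs.items():
    ratio = p["cost"] / p["outcome_units"]
    print(f"{name}: {ratio:,.0f} per unit of outcome")

# The alternative with the lowest cost per outcome unit is the most cost-effective.
best = min(programs, key=lambda n: programs[n]["cost"] / programs[n]["outcome_units"])
print(f"Most cost-effective: {best}")
```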
Historically, the CIPP model was initiated to evaluate educational programmes, and it is still popularly used in this field to assess education standards and curriculum development. The model has, however, become such a popular and effective instrument that it is frequently used to evaluate programmes, projects, services and other enterprises, for example a self-help housing project in the USA, the Suicide Prevention Program in Kaohsiung City in Taiwan, and the Thai–Lao Collaborating Nursing Manpower Development Project (Othaganont, 2001; Ho et al., 2011; Stufflebeam and Shinkfield, 2007).
Conclusion
The CIPP model is viewed as a systematic social approach to programme evaluation that synthesizes all the interrelated activities that together lead to defined goals, in order to help administration and management assess the appropriateness of the programme, find effective mechanisms to improve the delivery of interventions, meet the assessed needs, and satisfy the accountability requirements of funding groups (Rossi and Freeman, 1989; Stufflebeam and Shinkfield, 2007; Vedung, 1997). It is a systematic evaluation model that analyses the context of a programme, i.e. its vision, mission and objectives; its input, i.e. funds, staffing plans and competing actions; its process, i.e. the implementation plan; and its product, i.e. the intended outcomes, in order to assist decision makers in deciding on the requirements of the programme, which ultimately brings about the effectiveness of the intervention. Evaluators should, however, keep in mind all the possible methodological weaknesses, particularly pitfalls, so that these do not affect the evaluation report, as researchers stress that strong methodological integrity is critical to measuring the programme and its outcomes (Hatry and Newcomer, 2004).
References

Babbie, E. (2007). The practice of social research (11th ed.). Belmont: Thomson Higher Education.

Eddy, R. M., & Berry, T. (2009). The evaluator's role in recommending program closure: A model for decision making and professional responsibility. American Journal of Evaluation, 30(3), 363-376.

Ho, W. W., Chen, W. J., Ho, C. K., Lee, M. B., Chen, C. C., & Chou, F. H. C. (2011). Evaluation of the suicide prevention program in Kaohsiung City, Taiwan, using the CIPP evaluation model. Community Mental Health Journal, 47(5), 542-550.

Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards. Thousand Oaks, CA: Corwin Press.

Jones, B., Hopkins, G., Wherry, S. A., Lueck, C. J., Das, C. P., & Dugdale, P. (2016). Evaluation of a regional Australian nurse-led Parkinson's service using the context, input, process, and product evaluation model. Clinical Nurse Specialist, 30(5), 264-270.

Karatas, H., & Fer, S. (2009). Evaluation of English curriculum at Yildiz Technical University using CIPP model. Egitim ve Bilim, 34(153), 47.

Karim, M. R. (2014). Stufflebeam's Content, Input, Process and Product model: An analysis and interpretation in public policy making. Unpublished document submitted to the National Institute of Development Administration, Thailand.

Khalid, M., Ashraf, M., & Rehman, C. A. (2012). Exploring the link between Kirkpatrick (KP) and context, input, process and product (CIPP) training evaluation models, and its effect on training evaluation in public organizations of Pakistan. African Journal of Business Management, 6(1), 274-279.

Newcomer, K. E. (2015). Carol H. Weiss, Evaluation research: Methods for studying programs and policies. In S. J. Balla, M. Lodge, & E. C. Page (Eds.), The Oxford handbook of classics in public policy and administration. Oxford: Oxford University Press.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Sage.

Schulberg, H. C., & Baker, F. (1968). Program evaluation models and the implementation of research findings. American Journal of Public Health and the Nations Health, 58(7), 1248-1255. (Available at http://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.58.7.1248)

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Sage.

Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Paper presented at the annual meeting of the American Association of School Administrators, Atlantic City, N.J., February 24, 1971. (Available at http://eric.ed.gov/?id=ED062385)

Stufflebeam, D. L. (2000). The CIPP model for evaluation. In Evaluation models (pp. 279-317). Dordrecht: Springer.

Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (Eds.). (2000). Evaluation models: Viewpoints on educational and human services evaluation (Vol. 49). Springer.

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco: John Wiley & Sons.

Tokmak, H. S., Baturay, H. M., & Fadde, P. (2013). Applying the context, input, process, product evaluation model for evaluation, research, and redesign of an online master's program. The International Review of Research in Open and Distributed Learning, 14(3), 273-293.

Tseng, K. H., Diez, C. R., Lou, S. J., Tsai, H. L., & Tsai, T. S. (2010). Using the Context, Input, Process and Product model to assess an engineering curriculum. World Transactions on Engineering and Technology Education, 8(3), 256-261.

Vedung, E. (1997). Public policy and program evaluation. New Brunswick, NJ and London: Transaction Books.

Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2004). Handbook of practical program evaluation (2nd ed.). San Francisco: John Wiley & Sons.

Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines. New York: Longman. (Available at https://web.utk.edu/~cdavis80/EP521/readings/Worthen1.pdf)

Zhang, G., Zeller, N., Griffith, R., Metcalf, D., Williams, J., Shea, C., & Misulis, K. (2011). Using the Context, Input, Process, and Product Evaluation Model (CIPP) as a comprehensive framework to guide the planning, implementation, and assessment of service-learning programs. Journal of Higher Education Outreach and Engagement, 15(4), 57-84.
Appendix-1: Explanatory factors in process evaluation (Vedung, 1997)
A. Historical background of the intervention
B. Intervention design
1. Clarity (linguistic obscurity, several options for action)
2. Technical complexity
3. Validity of intervention theory
C. Implementation
1. National agencies: comprehension, capability, willingness (public choice,
mismatch, capture)
2. Formal intermediates
3. Street-level bureaucracy: coping strategies, capture, mismatch
4. Addressee participation
D. Addressee response
1. Comprehension, capability, willingness
2. Formative moments
3. Zealots
4. Camouflage
5. Resistance, free riders
E. Other government interventions
F. Issue networks and other environment
Source: Vedung, E. (1997). Public policy and program evaluation. New Brunswick, NJ and London: Transaction Books, p. 212.