
Preprint · June 2021
DOI: 10.13140/RG.2.2.33477.99040


Programme Evaluation: An Analysis of Context, Input, Process and Product (CIPP)
Model

Dr. Mohammad Rezaul Karim

Faculty Member, Bangladesh Public Administration Training Centre, Savar, Dhaka-1343, Bangladesh. Email: [email protected]

Abstract

The article aims at providing a systematic study of programme evaluation (PE) so that policy researchers can effectively evaluate any government programme and make appropriate recommendations. This is a review paper based on secondary sources of data mainly related to PE. The study describes the various components of the PE model, the methodology of PE, and data collection techniques. PE is usually defined as the process carried out to assist decision makers in judging the effectiveness of a programme, that is, whether it should be continued, rectified, modified, and/or dropped on the basis of systematic analysis of the programme's interventions. There are many approaches and models used in programme evaluation, of which CIPP, the acronym of Context, Input, Process and Product, is the most popular and widely utilized. It is a comprehensive programme evaluation model that provides the basis for taking effective decisions regarding a programme. Context covers needs, problems, constraints, assets and opportunities linked to the goals of the programme and the needs of beneficiaries. Input assesses alternative approaches, personnel planning, and budget. Process evaluates the implementation of the plan and the work schedule. Product assesses intended and unintended outcomes of the programme. The methodology of the CIPP model includes the collection, organization, analysis and reporting of information. Thus, it helps policy decision makers to examine and decide the fate of the programme based on collected information.

Keywords: Programme Evaluation, Context, Input, Process, Product, Formative evaluation, Summative evaluation

Introduction

Programme evaluation is deemed an important process because it provides the information decision makers need to analyze and decide on the effectiveness of a particular programme. The process sets out effective procedures covering the goals, objectives, problems, constraints, strategies, financial factors, effectiveness and intended outcomes of a programme so that decision makers can determine its fate. The ultimate purpose of a programme is to meet stakeholders' needs. The context, input, process and product model, popularly known as the CIPP model and introduced by Stufflebeam, is considered the pioneering programme evaluation model. It justifies the programme in a systematic way and is the most popular and widely used among programme evaluation models. By utilising the model, a comprehensive picture can be obtained of every aspect of a programme. CIPP provides basic criteria for policy evaluators and is widely used, for example, in education programmes, engineering programmes, nursing and health care, curriculum design and modeling, and managerial training (Warju, 2016). This paper aims at providing systematic approaches for policy evaluators so that they can identify how programme evaluation should be carried out to judge the fate of any programme.

Objectives and methodology

The objective of the paper is to identify the criteria of each component of the model and the procedures of programme evaluation in order to produce a programme evaluation. The paper utilizes secondary sources of data collected from books and journal articles on programme evaluation related to the said model. It is a review paper developed on an interpretative approach, with the theoretical framework taken from the CIPP model and connected to other research articles.

Evaluation and Programme Evaluation

Evaluation is defined as the systematic approach that provides an introductory view of the broad range of research activities essential for designing, implementing and appraising the utility of any project and programme (Vedung, 1997). Programme evaluation is defined as the systematic method of collecting, analysing, and using information to find out the effectiveness and efficiency of projects, policies and programmes (Chacón-Moscoso et al.; Ho et al., 2011; Rossi and Freeman, 1989). The purpose of programme evaluation is to examine why and how a programme is undertaken and its funds spent. It also explains the intended effect of initiatives. As a systematic process, it focuses on definitions, objectives, actions and activities in order to learn how the programme could be improved and whether the programme is worthwhile. The process provides the best grounds on which goals and objectives can be justified.

The Joint Committee (1994:3) defined an evaluation standard as a 'principle mutually agreed to by people engaged in the professional practice of evaluation, that, if met, will enhance the quality and fairness of an evaluation'. The committee set criteria of evaluation standards to assess programmes, emphasizing utility, feasibility, propriety and accuracy. Historically, programme evaluation started in 1792 in the field of education and gradually developed into a systematic model in 1967 through the contributions of Scriven, Stufflebeam (also in 1971), and Stake. The professional age of programme evaluation is the decade from 1973 to 1983, and much happened in the eighteen years after 1983 that contributed to the expansion and integration of the concept; later, Stufflebeam developed the checklist for evaluators as the fifth instalment of the model (Shadish et al., 1991; Stufflebeam et al., 2000; Stufflebeam, 2007). There are two kinds of programme evaluation: formative, which provides continual feedback to assist in planning, developing and delivering a programme or service, and summative, which serves consumers by providing them with independent assessments comparing the costs, merits, worth and significance of competing programmes or products (Chotchakornpant, 2013; Stufflebeam, 2000).

Significance of Programme Evaluation

The main purpose of programme evaluation is to establish the effectiveness of programmes. The literature suggests that, in an ideal situation, an evaluation is carried out to examine the merit of what a programme offers and its potential impact on stakeholders, particularly the ultimate beneficiaries (Eddy & Berry, 2009; Schulberg & Baker, 1968; Worthen et al., 1997). If the circumstances are right, evaluators can judge the programme and make the necessary recommendations. However, some programmes are genuinely difficult to evaluate and require a systematic approach to meet specific criteria. CIPP provides a structured and appropriate basis for programme evaluation: evaluators, policy makers and decision makers can evaluate a programme by following a set of steps. It offers a concrete framework for programme evaluation based on the context considered, the inputs provided, the processes followed and the products expected. Programme evaluation brings accountability, intervention improvement or basic knowledge improvement regarding the programme, where the first two are termed the most eminent rationales and the third comes as, at best, a side effect (Karim, 2014; Vedung, 1997).

Programme evaluation plays an important role in improving public policy making. The literature also explains the ways of studying programme evaluation and how theory-based evaluation matters. Furthermore, it sets out the detailed implications of the political culture of public programmes and the challenges of getting findings used in the design of programme evaluation studies (Newcomer, 2015).

Models of Evaluation used in policy studies

Models are structures for analyzing and evaluating the issue of interest. Several models are used to judge the effectiveness of programmes; some scholars speak of approaches to evaluating programmes, which other scholars term models. Frequently used approaches are goal attainment, goal-free evaluation, side effect, comprehensive evaluation, client-oriented, stakeholder, policy commission, productivity, cost-effectiveness, cost-efficiency, and the CIPP (context, input, process and product) model. CIPP is regarded as a model of evaluation by Stufflebeam and the others are treated as approaches for programme evaluation, whereas Vedung (2008) terms all of these models. For a more appropriate and detailed analysis, the CIPP model has been taken up for explanation in this paper.

Overview of Context, Input, Process and Product (CIPP) Model

The CIPP model, developed by Stufflebeam, entails four basic components: context, input, process and product. Context evaluation assesses needs, problems, resources and possible opportunities in order to assist decision makers in defining goals and priorities and to help the concerned implementers evaluate those goals, priorities and possible outcomes (Filella-Guin and Blanch-Plana, 2002; Stufflebeam and Shinkfield, 2007). Input evaluation examines alternative approaches, competing action plans, staffing plans and financial plans for feasibility and potential cost-effectiveness in order to meet targeted needs and achieve goals. After the context evaluation, the people involved in decision making use the input evaluation to choose among competing plans, assign the right persons to the right jobs, allocate budget according to requirements, allocate the necessary resources and prepare the overall activity plan, which ultimately helps staff judge the decision. Context and input evaluation concern the planning of a programme; the other two concern its implementation. Once the plan is in place, implementation of the plan is assessed in the process evaluation, which helps to perform the activities needed for programme evaluation and provides grounds to judge implementation and outcomes (Stufflebeam and Shinkfield, 2007). It evaluates the ongoing implementation process and helps to analyze the outcomes. The fourth factor of CIPP is product evaluation, which assesses various types of outcomes, intended and unintended, short-term and long-term, along with the mechanisms that enable staff to achieve the outcomes and ultimately meet the targeted needs. Product evaluation comprises impact, effectiveness, sustainability and transportability. This kind of evaluation addresses whether the beneficiaries benefit from the programme, to what extent the outcomes are effective, to what extent they will remain usable in future, and whether they can be transported to and adapted by other users. The CIPP components have core values linked, respectively, to goals, plans, actions and outcomes. Goal-setting tasks call for context evaluation to provide the information needed to justify the goals of the programme. Plans call for input evaluation to provide judgments for the improvement of plans. The third step, programme actions, calls for process evaluation, which provides judgments on all activities, feedback and review to strengthen the performance of staff. Finally, the outcomes of the programme raise the question of product evaluation. This provides a clear picture of needs assessment and the judgment of outcomes (Stufflebeam and Shinkfield, 2007). This relationship is presented in the diagram below:

Figure-1: Key components of CIPP Evaluation Model and associated relationships with
program

Source: Stufflebeam and Shinkfield, 2007


The CIPP evaluation can be described according to the purpose, task and method used in each case, as shown below:

Table-1: CIPP model in evaluation role with its purpose, task and method

Context
Purpose: Define the characteristics of the environment; determine general goals and specific objectives; identify and diagnose the problems or barriers which might inhibit achieving the goals and objectives.
Task: Define the environment, both actual and desired; define unmet needs and unused opportunities; diagnose problems or barriers.
Method: Conceptual analysis to define the limits of the population to be served; empirical studies to define unmet needs and unused opportunities; judgment of experts and clients on barriers and problems and on desired goals and objectives.

Input
Purpose: Design a program (intervention) to meet the objectives; determine the resources needed to deliver the program; determine whether staff and available resources are adequate to implement the program.
Task: Develop a plan for a program through examination of various intervention strategies, examining strategies for achieving the plan (time, fund, potential barriers) and the capabilities and resources of staff (expertise, funding and barriers); develop a program implementation plan which considers time, resources, and barriers to overcome.
Method: Use the SWOT (strengths, weaknesses, opportunities, threats) approach.

Process
Purpose: Provide decision makers with information necessary to determine if the program needs to be accepted, amended, or terminated.
Task: Identify discrepancies between actual implementation and intended design; identify defects in the design or implementation plan.
Method: A staff member serves as the evaluator; this person monitors and keeps data on setting conditions and program elements as they actually occurred, and gives feedback on discrepancies and defects to the decision makers.

Product
Purpose: Decide to accept, amend, or terminate the program.
Task: Develop the assessment of the program.
Method: Traditional research methods, multiple measures of the program objectives, and other methods.
It is seen that context evaluation provides information for the development and evaluation of mission, vision, values, goals, objectives and priorities. Input evaluation provides information for the development of programme designs through evaluation of databases and of internal and external stakeholders' interests. Process evaluation provides on-going evaluation of the implementation of major strategies through various tactical programmes so as to accept, refine, or correct the programme design. Product evaluation provides evaluation of the outcome of the programme to decide whether to accept, amend, or terminate it, using criteria directly related to the goals and objectives.
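The component summaries above can be pictured as a small data structure. The following Python snippet is purely illustrative (the dictionary layout and key names are this editor's assumption, not Stufflebeam's notation); it records, for each CIPP component, what it assesses and which decision it informs.

```python
# Sketch of the four CIPP components as summarized in the text above.
# Names and structure are illustrative only.
CIPP_MODEL = {
    "context": {
        "assesses": ["needs", "problems", "assets", "opportunities"],
        "informs": "goals and priorities",
    },
    "input": {
        "assesses": ["alternative approaches", "staffing plans", "budgets"],
        "informs": "choice of program design and plan",
    },
    "process": {
        "assesses": ["implementation of the plan", "work schedule"],
        "informs": "whether to accept, refine, or correct the design",
    },
    "product": {
        "assesses": ["intended outcomes", "unintended outcomes"],
        "informs": "whether to accept, amend, or terminate the program",
    },
}

def evaluation_sequence():
    """Return the components in the order the model applies them."""
    return list(CIPP_MODEL)

print(evaluation_sequence())
# -> ['context', 'input', 'process', 'product']
```

The insertion order of the dictionary deliberately mirrors the planning-then-implementation sequence the text describes: context and input first, then process and product.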

Analysis and Interpretation of CIPP Model

The CIPP model is a systematic approach to programme evaluation that helps explain the issues related to each component of the model. It is a comprehensive guideline for programme evaluators. Each component is described below:

Context Evaluation

This is the first component of the model and it assesses four issues: needs, problems, assets and opportunities. Context evaluation establishes why the programme has been undertaken, what the associated problems are, and the prospective ways of solving those problems. Needs comprise the issues related to the fulfilment of purposes, what is to be achieved as the mission of the institution, and legal standards. Problems are the obstacles that must be overcome to meet the targeted needs. Assets are the resources to be used in programme evaluation to address the purpose and typically include the expertise and services available in the local area (Stufflebeam and Shinkfield, 2007). Opportunities are the probable supports for meeting needs and obtaining effective solutions to those problems. These four elements are deemed important for programme design and service provision. Context evaluation is designed to achieve specific objectives related to mission and goals, such as setting the scope of intended services, finding out the real beneficiaries and their needs, identifying problems and possible solutions, ascertaining the assets and funds necessary to meet targeted results, setting the basis and clarity for goals, and providing the basis for judging expected outcomes and improving services (Stufflebeam and Shinkfield, 2007). Beyond meeting the goal-related issues of the programme, context evaluation addresses these requirements following standard methodologies, from information collection through analysis, in order to provide the basis for evaluators. It covers identifying stakeholders' views and testing hypotheses regarding the services, applying various techniques including documents and interviews (Babbie, 2007; Stufflebeam and Shinkfield, 2007). Context evaluation can be conducted before, during and after a project, programme or other intervention. An institution may carry out this evaluation before a programme is implemented to examine the context in line with its goals; during and after the programme, it can be combined with the other three elements of CIPP for matching and judgment. For example, the Suicide Prevention Program in Kaohsiung City, Taiwan was evaluated using the CIPP evaluation model, where the evaluators analyzed a context that included population, area, city type and suicide rate (Ho et al., 2011). For context evaluation, evaluators are suggested to follow the following checklist (Karim, 2014; Stufflebeam, 2007):

Table 1: Process of Context Evaluation

Collecting background information → Conducting interview → Evaluating programme goal → Engaging evaluators & data collectors → Checking & adjustment → Disseminating findings → Preparing final report

Context Evaluation Process

Activity: Background information
Process: Compile and assess background information, especially on the intended beneficiaries' needs and assets.
Actors: Policy Analyst

Activity: Conducting interview
Process: Interview program leaders to review and discuss their perspectives on beneficiaries' needs and to identify any problems (political or otherwise) the program will need to solve. Interview other stakeholders to gain further insight into the needs and assets of intended beneficiaries and potential problems for the programme.
Actors: Policy Analyst, Researcher, Data collector

Activity: Evaluating programme goal
Process: Assess program goals in light of beneficiaries' assessed needs and potentially useful assets.
Actors: Policy Analyst

Activity: Engaging evaluator
Process: Engage an evaluator to monitor and record data on the program's environment, including related programs, area resources, area needs and problems, and political dynamics.
Actors: Policy Analyst

Activity: Engaging staff for data collection
Process: Request that program staff regularly make available to the evaluation team the information they collect on the program's beneficiaries and environment.
Actors: Policy Analyst, Researcher, Data collector

Activity: Checking and adjustment
Process: Annually, or as appropriate, prepare and deliver to the client and agreed-upon stakeholders a draft context evaluation report providing an update on program-related needs, assets, and problems, along with an assessment of the program's goals and priorities.
Actors: Policy Analyst, Researcher

Activity: Dissemination
Process: Discuss context evaluation findings in feedback workshops presented about annually to the client and designated audiences.
Actors: Policy Analyst

Activity: Final report
Process: Finalize context evaluation reports and associated visual aids and provide them to the client and agreed-upon stakeholders.
Actors: Policy Analyst
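As a rough illustration, the checklist above can be treated as an ordered workflow. In the Python sketch below, the step and actor names are paraphrased from the table, and the helper function is hypothetical, not part of Stufflebeam's checklist.

```python
# Sketch: the context-evaluation checklist as an ordered list of
# (activity, actors) pairs, paraphrased from the table above.
CONTEXT_EVALUATION_STEPS = [
    ("collect background information", ["policy analyst"]),
    ("conduct interviews", ["policy analyst", "researcher", "data collector"]),
    ("evaluate programme goals", ["policy analyst"]),
    ("engage evaluator", ["policy analyst"]),
    ("engage staff for data collection",
     ["policy analyst", "researcher", "data collector"]),
    ("check and adjust (draft report)", ["policy analyst", "researcher"]),
    ("disseminate findings", ["policy analyst"]),
    ("prepare final report", ["policy analyst"]),
]

def actors_involved(steps):
    """Collect the distinct actors across all steps, preserving order."""
    seen = []
    for _, actors in steps:
        for actor in actors:
            if actor not in seen:
                seen.append(actor)
    return seen

print(actors_involved(CONTEXT_EVALUATION_STEPS))
# -> ['policy analyst', 'researcher', 'data collector']
```

One point the list form makes visible is that the policy analyst appears at every step, while the researcher and data collector join only for data-heavy activities.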

Input Evaluation

The purpose of input evaluation is to assist decision makers in finding the relevant approaches and best alternatives, allocating the budget, and choosing what to execute in the programme. This evaluation asks the question 'how should it be done?' and provides an overall picture of the stakeholders' environment, the political environment with its problems and prospects, the legal framework and its constraints, and the availability of resources, so that the evaluator can judge the most effective process and help decision makers reach the right decision about the programme (Stufflebeam, 1971; Stufflebeam and Shinkfield, 2007; Zhang et al., 2011). It allows the potential of existing employees, and the expertise they have, to be utilized. Finding alternative strategies is crucial to meeting client needs, and these can be addressed on the basis of the evaluator's report. The strategies need action plans and the necessary budgets and resources. Input evaluation provides a clear picture and a judgmental basis for using these resources and finances, and for whether the programme will succeed or fail. The best evaluation indicates how the best output can be achieved using minimum resources. Input evaluation can be conducted in different stages, starting from identifying specific needs and objectives, which can be gathered through literature review, visits to programmes, expert opinions, secondary-source information and initial proposals. All of these concern information collection; relevant and authentic information provides the basis for effective evaluation. Stakeholders' views and clients' needs must be evaluated to find the best fit for the programme outcomes. Evaluators should consider the needs of targeted beneficiaries and prepare approaches responsive to the targeted organizational problems, which requires proper planning. All the resources to be used must be cost-effective, politically viable, administratively feasible and operationally fit. The evaluators should provide the best advice to decision makers towards the best solution, where obtaining the most compatible proposal and the rating criteria are crucial. Input evaluation also shows decision makers how the institution can combine its best features. It is linked with accumulating information for design and preparing the budget in order to combine strategy and action plan (Stufflebeam and Shinkfield, 2007). Input evaluation has several applications, and researchers have used it to evaluate government programmes. For example, in the evaluation of the Suicide Prevention Program in Kaohsiung City, Taiwan with the CIPP model, Ho et al. (2011) identified and critically analysed the project, its employees and its finances.

For input evaluation, evaluators are suggested to follow the following checklist (Stufflebeam,
2007):

Table 2: Step by step process of Input Evaluation

Activity: Identify and investigate existing programs that could serve as a model for the contemplated program.
Actor: Policy Analyst, Researcher
Necessary step: Collecting primary information

Activity: Assess the program's proposed strategy for responsiveness to assessed needs and feasibility.
Actor: Policy Researchers
Necessary step: Need assessment

Activity: Assess the program's budget for its sufficiency to fund the needed work.
Actor: Policy Analyst, Researcher
Necessary step: Budgeting

Activity: Assess the program's strategy against pertinent research and development literature.
Actor: Policy Analyst, Researcher, Policy evaluator
Necessary step: Strategy

Activity: Assess the merit of the program's strategy compared with alternative strategies found in similar programs.
Actor: Policy Analyst, Researcher
Necessary step: Comparison

Activity: Assess the program's work plan and schedule for sufficiency, feasibility, and political viability.
Actor: Policy Analyst, Policy evaluator
Necessary step: Progress

Activity: Compile a draft input evaluation report and send it to the client and agreed-upon stakeholders.
Actor: Policy Analyst, Researcher
Necessary step: Draft report

Activity: Discuss input evaluation findings in a feedback workshop.
Actor: Policy Analyst, Policy Evaluator
Necessary step: Analysis of findings

Activity: Finalize the input evaluation report and associated visual aids and provide them to the client and agreed-upon stakeholders.
Actor: Policy Analyst, Researcher
Necessary step: Finalizing report

Activity: Assess the merit of the program's strategy compared with alternative strategies found in similar programs.
Actor: Policy Owner
Necessary step: Future direction
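The text says that competing strategies should be weighed for cost-effectiveness, political viability, administrative feasibility and operational fit. One hedged way to picture that comparison is a simple scoring sketch; the strategy names, scores and the additive scoring rule below are all invented for illustration, since the CIPP model itself prescribes no scoring formula.

```python
# Sketch: rating competing strategies against the criteria named in the
# text. Scores (0-5) and strategy names are hypothetical.
CRITERIA = ["cost_effective", "politically_viable",
            "administratively_feasible", "operationally_fit"]

def rank_strategies(ratings):
    """ratings: {strategy: {criterion: score}} -> strategies, best first."""
    def total(name):
        return sum(ratings[name].get(c, 0) for c in CRITERIA)
    return sorted(ratings, key=total, reverse=True)

ratings = {
    "strategy A": {"cost_effective": 4, "politically_viable": 3,
                   "administratively_feasible": 4, "operationally_fit": 3},
    "strategy B": {"cost_effective": 2, "politically_viable": 5,
                   "administratively_feasible": 3, "operationally_fit": 2},
}
print(rank_strategies(ratings))
# -> ['strategy A', 'strategy B']  (totals 14 vs 12)
```

In practice an evaluator would weight the criteria rather than sum them equally; the equal weighting here is only a placeholder for whatever rating scheme the evaluation team agrees on.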

Process Evaluation

Process evaluation is an ongoing checking procedure covering intervention consequences, intended effects, null effects and side effects, in order to give decision makers feedback on whether the planned activities are being carried out appropriately and efficiently within budgetary procedures (Stufflebeam, 1971; Stufflebeam and Shinkfield, 2007; Vedung, 1997). Process analysis places programme evaluation in its political, administrative, social and geographical surroundings, which helps decision makers judge the effectiveness of the programme. Typically it covers several functions, such as monitoring and documenting the intervention's activities, the relevant strategy, activity plans, the budget, possible service delivery to beneficiaries, hiring and training staff, supervising staff, conducting meetings, monitoring work-flow, maintaining equipment, controlling expenditure, and disseminating information (Stufflebeam and Shinkfield, 2007; Vedung, 1997). Evaluators will use both qualitative and quantitative data, following different data collection tools and techniques, and must be cautious about the inherent problems of the respective methods so that the report is not compromised. It is suggested that data collection be carried out in an unobtrusive way so that it creates no obstacle to or interference with the process; evaluators can gradually adopt a more structured approach as rapport develops. The process evaluation is designed to help the staff through a report summarizing the data, findings and observed issues the staff should address. It is generally presented at a staff meeting, with the staff's director requested to preside. The process is one of continuous checking and development in which staff and evaluators share, take notes, analyze and compare against the next intervention, and present the report again to help the staff develop the programme further. Process evaluation is a continuous learning procedure for staff and evaluators that helps to rectify problems, ensure quality and maintain sustainability. It is carried out sporadically by the staff and integrated with the surrounding environment. Checking the activities helps in finding deviations, variations and differences across different people, groups or sites (Stufflebeam and Shinkfield, 2007). Since the evaluator sketches the picture of the process evaluation with the necessary and relevant issues, the report helps to interpret the potential outcomes: whether the outcomes of the programme would be effective for the clients, and to what extent the best services can be produced for the beneficiaries. Process evaluation emphasizes implementation, which requires prior and on-going procedures for the programme. For example, in the evaluation of the Suicide Prevention Program in Kaohsiung City, Taiwan with the CIPP model, Ho et al. (2011) identified psycho-education activities and suicide-prevention sheets, promotion of the program website, monitoring of suicide data, a 24-hour crisis line, gate-keeper training, follow-up visits and medical resources referral, all related to process evaluation and intervention in the on-going program and its potential outcomes. Vedung (1997) proposes a guideline for checking the explanatory factors in process evaluation that consists of six issues: historical background of the intervention, intervention design, implementation, addressee response, other government interventions, and issue networks and other environment (details in appendix-1). These possible contingencies, explanatory factors, independent variables, and determinants, may interrupt policies, programmes, projects and their intervention outcomes.

For process evaluation, evaluators are suggested to follow the following checklist (Stufflebeam, 2007):

Table 3: Systematic approach of process evaluation

What is to be done: Formation of team
How: Engaging an evaluation team member to monitor, observe, maintain a photographic record of, and provide periodic progress reports on program implementation.
Who: Policy owner

What is to be done: Preparing programme
How: Collaborating with the program's staff, maintain a record of program events, problems, costs, and allocations.
Who: Policy evaluator

What is to be done: Conducting interview
How: Periodically interviewing beneficiaries, program leaders, and staff to obtain their assessments of the program's progress.
Who: Policy researcher, Policy analyst

What is to be done: Keeping progress
How: Maintaining an up-to-date profile of the program; periodically drafting written reports on process evaluation findings and providing the draft reports to the client and agreed-upon stakeholders.
Who: Policy evaluator, Policy analyst

What is to be done: Discussing findings
How: Presenting and discussing process evaluation findings in feedback workshops.
Who: Policy researcher, Policy owner

What is to be done: Finalizing report
How: Finalizing each process evaluation report (possibly incorporated into a larger report) and associated visual aids and providing them to the client and agreed-upon stakeholders.
Who: Policy evaluator, Policy owner
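The core process-evaluation task named earlier in Table-1 is identifying discrepancies between the intended design and actual implementation. A minimal sketch of that comparison, with hypothetical activity names borrowed loosely from the Kaohsiung example, might look like this:

```python
# Sketch: compare planned activities with those actually carried out,
# the discrepancy check at the heart of process evaluation.
# Activity names are hypothetical.
def find_discrepancies(planned, actual):
    """Return activities planned but not done, and done but not planned."""
    missing = [a for a in planned if a not in actual]
    unplanned = [a for a in actual if a not in planned]
    return {"missing": missing, "unplanned": unplanned}

planned = ["staff training", "gate-keeper training", "24-hour crisis line"]
actual = ["staff training", "24-hour crisis line", "website promotion"]
print(find_discrepancies(planned, actual))
# -> {'missing': ['gate-keeper training'], 'unplanned': ['website promotion']}
```

Both lists matter to the decision makers: missing activities signal implementation defects, while unplanned ones may be side effects or improvised corrections worth documenting.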

Product Evaluation

The forth component of CIPP is product evaluation that attempts to measure, interpret and
judge an enterprise’s achievements where the main objectives are to evaluate to what extent
the needs of beneficiaries are met, what are the intended and unintended outcomes either
positive or negative. Outcomes may be short-term, intermediate or long-term basis where
evaluator emphasizes on third one.

The evaluators do it by congregating and analyzing stakeholders’ judgments of the program.


There are several ways to do this. First, the outcomes can be compared with those of a similar enterprise. Second, clients' views can be analyzed regarding whether the objectives were achieved and the investment was worthwhile; from these, evaluators can readily interpret whether poor implementation led to poor outcomes. Third, the aggregated viewpoints of groups or individuals can also serve the objectives of product evaluation (Stufflebeam and Shinkfield, 2007). In product evaluation it is an option to assess performance not only against the stated objectives but also beyond them, in order to examine unintended outcomes, which may be positive or negative. There are many ways to assess these, such as group interviews and case studies based on in-depth interviews with experienced participants. Through in-depth interviews evaluators can understand the program's effects and participants' views on the funding and the resources utilized. The expertise of the evaluators may produce high-value results, leading to a sound report usable by staff and decision makers. Performance assessment may include clear examples of which issues or services affect participants' work and well-being, the distribution of jobs, new job status, and comparisons of achievements. Stufflebeam advises comparing program achievements against a comprehensive checklist of outcomes of similar programs or services, which can help identify gaps and gauge program performance. When product evaluation is carried out against purposively set goals, investigators tend to focus on those goals and may overlook contrasting consequences. Goal-free evaluation, by contrast, assesses a program's effects against the beneficiaries' assessed needs rather than its stated goals (Scriven, 1991 in Stufflebeam and Shinkfield, 2007; Vedung, 1997). This technique is considered a unique approach to assessing an intervention's value: because investigators are not confined to a framework of what the program was meant to achieve, it surfaces unintended outcomes helpful to decision makers.

As product evaluation is done to help decide whether a program, project or service is worth continuing, repeating, extending, modifying or replacing, reporting on this process is vital. The report may be an interim one addressing the targeted needs, or an end-of-program report used for modification or for replication in other similar projects. A follow-up report is also suggested, to assess long-term outcomes. The reports typically include the results of the needs assessment, the costs incurred, the execution plan, and an analysis of results by groups and individuals. A detailed report guides decision makers on the next activities to follow, based on the various issues found in the process, and also brings accountability at all levels. However, it is warned that a premature report might have a negative effect on an ongoing project.

Evaluators can follow the guidelines below for product evaluation, particularly for evaluating the intended and unintended consequences of outcomes, i.e. impact (Stufflebeam, 2007):

Table 4: Stages of product evaluation

Formation of team: Engage the program's staff and consultants and/or an evaluation team member to maintain a directory of persons and groups served, make notations on their needs, and record the program services they received. (Actor: Policy owner)

Contextualization: Assess and judge the extent to which the served individuals and groups are consistent with the program's intended beneficiaries. (Actor: Policy researcher)

Conducting interviews: Periodically interview area stakeholders, such as community leaders, employers, school and social programs personnel, clergy, police, judges, and homeowners, to learn their perspectives on how the program is influencing the community. Include the obtained information and the evaluator's judgments in a periodically updated program profile. (Actors: Policy researcher, Data collector)

Identification of target group: Determine the extent to which the program reached an appropriate group of beneficiaries, and assess the extent to which it inappropriately provided services to a non-targeted group. (Actors: Policy researcher, Data collector)

Identifying impact: Draft an impact evaluation report (possibly incorporated into a larger report) and provide it to the client and agreed-upon stakeholders. (Actors: Policy researcher, Policy evaluator)

Feedback: Discuss impact evaluation findings in a feedback workshop. (Actors: Policy researcher, Policy owner)

Finalization: Finalize the impact evaluation report and associated visual aids and provide them to the client and agreed-upon stakeholders. (Actor: Policy owner)

CIPP and Relevance to Formative and Summative Evaluation Roles

The CIPP model is built on learning by doing and operates as an ongoing process, so that corrective measures can be taken to develop the program. The four components of the CIPP model have direct relevance to formative and summative program evaluation, as presented in tabular form below:

Table-2: CIPP and Relevance to Formative and Summative Evaluation Roles

Formative (prospective application of CIPP information to assist decision making and quality assurance):
Context: Guidance for identifying needed interventions and for choosing and ranking goals, based on assessed needs, problems, assets and opportunities.
Input: Guidance for choosing a program or other strategy, based on assessing alternatives, strategies, resource allocation and work plans.
Process: Guidance for implementing the operational plan, based on monitoring and judging program activities.
Product: Guidance for continuing, modifying, adopting or terminating the effort, based on assessing outcomes and side effects.

Summative (retrospective use of CIPP information to sum up the program's merit, worth, probity and significance):
Context: Comparison of goals and priorities to assessed needs, problems, assets and opportunities.
Input: Comparison of the program's strategy, design and budget to those of critical competitors and to the targeted needs of beneficiaries.
Process: Full description of the actual process and costs, and comparison of the designed and actual processes and costs.
Product: Comparison of outcomes and side effects to targeted needs and, as feasible, to the results of competitive programs; interpretation of results against the effort's assessed context, inputs and processes.

Source: Stufflebeam and Shinkfield, 2007

CIPP and Systematic Approach of Evaluation

Rossi and Freeman (1989) propose a systematic approach to program evaluation that begins by defining evaluation and program evaluation and explaining the diagnostic procedures of evaluation. The diagnostic procedure covers needs assessment, potential targets and risks. The next step is tailoring the program: checking ongoing projects for accountability, adjusting program procedures to increase effectiveness, designing the program, and comparing goals and objectives. Program monitoring then checks whether the program reaches the target group, whether the services are consistent, and how resources are utilized. The approach proceeds to impact assessment, with strategies such as randomized and non-randomized designs to check the effectiveness of the program; it also covers the efficiency of resources through cost-benefit and cost-effectiveness analysis. Rossi and Freeman's systematic approach and Stufflebeam's CIPP model are thus quite similar as program evaluation systems. Under the headings of context, input, process and product, CIPP analyzes needs, problems, opportunities, work plans, targeted benefits and implementation plans, as well as impact, in order to show the effectiveness of the program. In both cases the planning and procedural issues are more or less the same.
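The cost-benefit and cost-effectiveness analyses mentioned above can be illustrated with a minimal sketch. The figures and the training-program scenario below are hypothetical, invented purely for illustration, not drawn from any program cited in this paper:

```python
# Hypothetical illustration of the cost-benefit and cost-effectiveness
# ratios used in efficiency analysis; all figures are invented.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Monetized benefits divided by costs; a ratio above 1 suggests net benefit."""
    return total_benefits / total_costs

def cost_effectiveness_ratio(total_costs: float, outcome_units: float) -> float:
    """Cost per unit of a non-monetized outcome, e.g. per trainee placed in a job."""
    return total_costs / outcome_units

# A hypothetical training program: $120,000 spent, $180,000 in
# monetized benefits, 60 participants placed in jobs.
print(benefit_cost_ratio(180_000, 120_000))    # 1.5
print(cost_effectiveness_ratio(120_000, 60))   # 2000.0 (dollars per placement)
```

The distinction matters to evaluators: cost-benefit analysis requires monetizing outcomes, while cost-effectiveness analysis keeps the outcome in its natural unit, which is often more defensible for social programs.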

Application of CIPP Model

Historically, the CIPP model was initiated to evaluate educational programs, and it is still widely used in this field to assess education standards and curriculum development. However, the model has become such a popular and effective instrument that it is frequently used to evaluate programs, projects, services and other enterprises, for example a self-help housing project in the USA, the Suicide Prevention Program in Kaohsiung City, Taiwan, and the Thai–Lao Collaborating Nursing Manpower Development Project (Othaganont, 2001; Ho et al., 2011; Stufflebeam and Shinkfield, 2007).

Conclusion

The CIPP model is viewed as a systematic social approach to programme evaluation that synthesizes all the interrelated activities leading toward defined goals, in order to help administration and management assess the appropriateness of the programme, find effective mechanisms to improve the delivery of interventions, meet the assessed needs, and meet the accountability requirements of funding groups (Rossi and Freeman, 1989; Stufflebeam and Shinkfield, 2007; Vedung, 1997). It is a systematic evaluation model that analyzes the context of a program (its vision, mission and objectives), its inputs (funds, staffing plans, competing actions), its process (the implementation plan) and its intended outcomes, in order to assist decision makers in deciding about the requirements of the program, which ultimately brings about the effectiveness of the intervention. However, evaluators should bear in mind all the possible methodological weaknesses, particularly the known pitfalls, so that these do not affect the evaluation report; as researchers suggest, strong methodological integrity is critical to measuring the programme and its outcomes (Hatry and Newcomer, 2004).

References

Babbie, E. (2007). The Practice of Social Research. 11th ed. Belmont: Thomson Higher
Education.

Chacón-Moscoso, S., Anguera-Argilaga, M. T., Pérez-Gil, J. A., & Holgado-Tello, F. P. (2002). A Mutual Catalytic Model of Formative Evaluation: The Interdependent Roles of Evaluators and Local Programme Practitioners. Evaluation, 8(4), 413-432. (http://evi.sagepub.com/content/8/4/413)

Chotchakornpant, K. (2013). Class Lecture on Program Evaluation. Delivered at the Graduate School of Public Administration, National Institute of Development Administration, Bangkok.

Eddy, R. M., & Berry, T. (2009). The Evaluator's Role in Recommending Program Closure: A Model for Decision Making and Professional Responsibility. American Journal of Evaluation, 30(3), 363-376.

Filella-Guin, G., & Blanch-Plana, A. (2002). Imprisonment and career development: An evaluation of a guidance programme for job finding. Journal of Career Development, 29(1), 55-68.

Hatry, H. P. and Newcomer, K. E. (2004). Pitfalls of Evaluation. In J. S. Wholey, H. P. Hatry and K. E. Newcomer (Eds.), Handbook of Practical Program Evaluation (2nd edn). San Francisco: John Wiley & Sons, Inc.

Ho, W. W., Chen, W. J., Ho, C. K., Lee, M. B., Chen, C. C., & Chou, F. H. C. (2011). Evaluation of the suicide prevention program in Kaohsiung city, Taiwan, using the CIPP evaluation model. Community Mental Health Journal, 47(5), 542-550.

Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards. Thousand Oaks, CA: Corwin Press.

Jones, B., Hopkins, G., Wherry, S. A., Lueck, C. J., Das, C. P. and Dugdale, P. (2016). Evaluation of a regional Australian nurse-led Parkinson's service using the context, input, process, and product evaluation model. Clinical Nurse Specialist, 30(5), pp. 264-270.

Karatas, H. and Fer, S., 2009. Evaluation of English curriculum at Yildiz Technical
University using CIPP model. Egitim ve Bilim, 34(153), p.47.

Karim, M. R. (2014). Stufflebeam's Content, Input, Process and Product Model: An Analysis and Interpretation in Public Policy Making. Unpublished document submitted to the National Institute of Development Administration, Thailand.

Khalid, M., Ashraf, M. and Rehman, C. A. (2012). Exploring the link between Kirkpatrick (KP) and context, input, process and product (CIPP) training evaluation models, and its effect on training evaluation in public organizations of Pakistan. African Journal of Business Management, 6(1), pp. 274-279.

Newcomer, K.E. (2015), Carol H. Weiss, Evaluation Research: Methods for studying
Programs and Policies, in SJ Balla, M Lodge & EC Page (eds), The Oxford Handbook of
Classics in Public Policy and Administration, Oxford, Oxford University Press.

Othaganont, P. (2001). Evaluation of the Thai–Lao Collaborating Nursing Manpower Development Project using the Context Input Process Product model. Nursing & Health Sciences, 3(2), 63-68.

Rossi, P. H. & Freeman, H. E. (1989). Evaluation: A Systematic Approach. Sage.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach. Sage.

Schulberg, H. C., & Baker, F. (1968). Program evaluation models and the implementation of
research findings. American Journal of Public Health and the Nations Health, 58(7), 1248-
1255. (available at http://ajph.aphapublications.org/doi/pdf/10.2105/AJPH.58.7.1248)

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of Program Evaluation:
Theories of Practice. Sage.

Stufflebeam, D. L. (1971). The Relevance of the CIPP Evaluation Model for Educational
Accountability. Paper presented at the Annual meeting of the American Association of School
Administrators (Atlantic City, N.J., February 24, 1971) (available at
http://eric.ed.gov/?id=ED062385)

Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In Evaluation models (pp. 117-141). Springer Netherlands. (available at http://files.eric.ed.gov/fulltext/ED048333.pdf)

Stufflebeam, D. L. (2003). The CIPP model for evaluation. In International handbook of educational evaluation (pp. 31-62). Springer Netherlands. (available at http://files.eric.ed.gov/fulltext/ED062385.pdf)

Stufflebeam, D. L. (2007). CIPP Evaluation Model Checklist. Western Michigan University: The Evaluation Centre. Retrieved June 2, 2009. (available at http://pep.pps.unnes.ac.id/wp-content/uploads/2013/01/CIPP-CHECKLIST.pdf)

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation Theory, Models, and Applications.
San Francisco: John Wiley & Sons, Inc.

Stufflebeam, D. L. (2000). The CIPP model for evaluation. In Evaluation models (pp. 279-317). Springer, Dordrecht.

Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (Eds.). (2000). Evaluation models: Viewpoints on educational and human services evaluation (Vol. 49). Springer.

Tokmak, H.S., Baturay, H.M. and Fadde, P., 2013. Applying the context, input, process,
product evaluation model for evaluation, research, and redesign of an online master’s
program. The International Review of Research in Open and Distributed Learning, 14(3),
pp.273-293.

Trochim, W. M. (1989). An introduction to concept mapping for planning and evaluation. Evaluation and Program Planning, 12(1), 1-16.

Tseng, K.H., Diez, C.R., Lou, S.J., Tsai, H.L. and Tsai, T.S., 2010. Using the Context, Input,
Process and Product model to assess an engineering curriculum. World Transactions on
Engineering and Technology Education, 8(3), pp.256-261.

Vedung, E. (1997). Public Policy and Program Evaluation. New Brunswick (U.S.A.) and London (UK): Transaction Books.

Wholey, J.S.; Hatry, H.P. and Newcomer, K. E. (2004). Handbook of Practical Program
Evaluation (2nd edn). San Francisco: John Wiley & Sons, Inc.

Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program Evaluation: Alternative Approaches and Practical Guidelines. New York: Longman. Available at https://web.utk.edu/~cdavis80/EP521/readings/Worthen1.pdf

Zhang, G., Zeller, N., Griffith, R., Metcalf, D., Williams, J., Shea, C., & Misulis, K. (2011).
Using the Context, Input, Process, and Product Evaluation Model (CIPP) as a comprehensive
framework to guide the planning, implementation, and assessment of service-learning
programs. Journal of Higher Education Outreach and Engagement, 15(4), 57-84.

Appendix-1: Explanatory Factors in Process Evaluation

A. Historical background of the intervention
1. Direction of proposed change
2. Political support
3. Size of proposed change
4. Level of attention
5. Symbolic politics
6. Participation of affected interests

B. Intervention design
1. Clarity (linguistic obscurity, several options for action)
2. Technical complexity
3. Validity of intervention theory

C. Implementation
1. National agencies: comprehension, capability, willingness (public choice,
mismatch, capture)
2. Formal intermediates
3. Street-level bureaucracy: coping strategies, capture, mismatch
4. Addressee participation

D. Addressee response
1. Comprehension, capability, willingness
2. Formative moments
3. Zealots
4. Camouflage
5. Resistance, free riders

E. Other government interventions, other government agencies

F. Issue networks and other environment

1. Support of sovereigns after formal instigation of mandate
2. Support of other actors external to formal administration
3. Mass media
4. Changes in the target area

Source: Vedung, E. (1997). Public Policy and Program Evaluation. New Brunswick (U.S.A.) and London (UK): Transaction Books, p. 212.

