Project Monitoring and Evaluation - MAPM 709
School of Commerce
Supported Distance Education Program
Monitoring and evaluation can help organizations extract relevant information from past and
ongoing activities that can be used as the basis for programmatic improvement,
reorientation and future planning. Without effective planning, monitoring and evaluation, it
would be impossible to judge whether work is going in the right direction, whether progress and
success can be claimed, and how future efforts might be improved. The purpose of this
course is to promote a common understanding and reliable practice of monitoring and
evaluation (M&E) for a project/program. It familiarizes learners with various project
monitoring and evaluation systems and tools that focus on results in international
development. The course also offers learners both a conceptual framework and practical skill
development. The module covers the following topics: Introduction to Monitoring and
Evaluation; Frameworks and Indicators for M&E; Conducting Baselines and Collecting Data;
Evaluation and Impact Assessment; and finally, the Project Cycle of M&E.
Dear learner! Please read the learning objectives of each unit carefully before you proceed to
study the content. As you read each unit, focus on each topic and do not proceed to the next
topic before you are fully aware of all the points discussed. Underlining or highlighting
important points can enhance your understanding.
As a distance learner, you will be evaluated through Tutor Marked Assignments (TMAs) and
Examination. TMAs are provided along with the Module. They should be sent to the tutor on
time to be marked and included in your final grade.
Indeed, this Module is self-contained for understanding the course “Project Monitoring and
Evaluation”. However, for further reading and deeper understanding, it is advisable to use the
text and reference materials listed at the end of the Module.
Course Objectives:
After successfully completing this course, you will be able to:
Unit 1
Introduction to Monitoring and Evaluation
Introduction
Hello dear learner! This is the first unit of the module titled ‘Introduction to Monitoring and
Evaluation’. Monitoring and Evaluation is the systematic collection and analysis of information
to enable managers and key stakeholders to make informed decisions.
Learning Objectives:
Good RBM is an ongoing process. This means that there is constant feedback, learning and
improving. Existing plans are regularly modified based on the lessons learned through
monitoring and evaluation, and future plans are developed based on these lessons.
Monitoring is also an ongoing process. The lessons from monitoring are discussed
periodically and used to inform actions and decisions. Evaluations should be done for
programmatic improvements while the programme is still ongoing, and should also inform the
formulation of future programmes and projects.
RBM is concerned with learning, risk management and accountability. Learning not only
helps improve results from existing programmes and projects, but also enhances the capacity
of the organization and individuals to make better decisions in the future and improves the
formulation of future programmes and projects. Since there are no perfect plans, it is
essential that managers, staff and stakeholders learn from the successes and failures of each
programme or project.
There are many risks and opportunities involved in pursuing development results. RBM
systems and tools should help promote awareness of these risks and opportunities, and
provide managers, staff, stakeholders and partners with the tools to mitigate risks or pursue
opportunities.
RBM practices and systems are most effective when they are accompanied by clear
accountability arrangements and appropriate incentives that promote desired behaviour. In
other words, RBM should not be seen simply in terms of developing systems and tools to
plan, monitor and evaluate results. It must also include effective measures for promoting a
culture of results orientation and ensuring that persons are accountable for both the results
achieved and their actions and behavior.
The main objectives of good planning, monitoring and evaluation—that is, RBM— are to:
Support substantive accountability to governments, organizations, beneficiaries,
donors, other partners and stakeholders;
Prompt corrective action;
Ensure informed decision making;
Promote risk management;
Enhance organizational and individual learning.
Fig.1.2 summarizes key monitoring questions as they relate to the log frame’s objectives.
Note that they focus more on the lower-level objectives – inputs, activities and (to a certain
extent) outputs. This is because the outcomes and goal usually involve changes (typically in
knowledge, attitudes and practice/behaviors) that are more challenging to measure and
require a longer time frame and the more focused assessment provided by evaluations.
As we will discuss later, there are various processes and tools to assist with the different
types of monitoring, which generally involve obtaining, analyzing and reporting on
monitoring data. Specific processes and tools may vary according to monitoring need, but
there are some overall best practices, which are summarized in the following box.
Monitoring best practices
Monitoring data should be well-focused to specific audiences and uses (only what is
necessary and sufficient).
Monitoring should be systematic, based upon predetermined indicators and
assumptions.
Monitoring should also look for unanticipated changes within the project/programme
and its context, including any changes in project/programme assumptions/risks; this
information should be used to adjust project/programme implementation plans.
Monitoring needs to be timely, so information can be readily used to inform
project/programme implementation.
Whenever possible, monitoring should be participatory, involving key stakeholders –
this can not only reduce costs but can build understanding and ownership.
Monitoring information is not only for project/programme management but should be
shared when possible with beneficiaries, donors and any other relevant stakeholders.
Fig 1.3 summarizes key evaluation questions as they relate to the logframe’s objectives, which
tend to focus more on how things have been performed and what difference has been made.
It is best to involve key stakeholders as much as possible in the evaluation process. This
includes National Society staff and volunteers, community members, local authorities,
partners, donors, etc. Participation helps to ensure different perspectives are taken into
account, and it reinforces learning from and ownership of the evaluation findings.
There is a range of evaluation types, which can be categorized in a variety of ways. Ultimately,
the approach and methods used in an evaluation are determined by the audience and purpose
of the evaluation. Table 2 summarizes key evaluation types according to three general
approaches.
Approach 1: Evaluation timing
Midterm evaluations are formative in purpose and occur midway through implementation.
For secretariat-funded projects/ programmes that run for longer than 24 months, some
type of midterm assessment, evaluation or review is required. Typically, this does not need
to be independent or external, but may be according to specific assessment needs.
Final evaluations are summative in purpose and are conducted (often externally) at the
completion of project/programme implementation to assess how well the project/
programme achieved its intended objectives. All secretariat-funded projects/programmes
should have some form of final assessment, whether it is internal or external.
Ex-post evaluations are conducted some time after implementation to assess long-term
impact and sustainability.
Approach 2: Who conducts the evaluation
Internal or self-evaluations are conducted by those responsible for implementing a
project/programme. They can be less expensive than external evaluations and help build
staff capacity and ownership. However, they may lack credibility with certain stakeholders,
such as donors, as they are perceived as more subjective (biased or one-sided). These tend
to be focused on learning lessons rather than demonstrating accountability.
Participatory evaluations are conducted with the beneficiaries and other key stakeholders,
and can be empowering, building their capacity, ownership and support.
Joint evaluations are conducted collaboratively by more than one implementing partner,
and can help build consensus at different levels, credibility and joint support.
Approach 3: Evaluation Technicality or Methodology
Real-time evaluations (RTEs) are undertaken during project/programme implementation to
provide immediate feedback for modifications to improve ongoing implementation.
Emphasis is on immediate lesson learning over impact evaluation or accountability. RTEs
are particularly useful during emergency operations, and are required in the first three
months of secretariat emergency operations that meet any of the following criteria: more
than nine months in length; plan to reach 100,000 people or more; the emergency appeal is
greater than 10,000,000 Swiss francs; more than ten National Societies are operational
with staff in the field.
Meta-evaluations are used to assess the evaluation process itself. Some key uses of meta-
evaluations include: taking inventory of evaluations to inform the selection of future
evaluations; combining evaluation results; checking compliance with evaluation policy and
good practices; and assessing how well evaluations are disseminated and utilized for
organizational learning and change.
Thematic evaluations focus on one theme, such as gender or environment, typically across
a number of projects, programmes or the whole organization.
Impact evaluations focus on the effect of a project/ programme, rather than on its
management and delivery. Therefore, they typically occur after project/programme
completion during a final evaluation or an ex-post evaluation. However, impact may be
measured during project/ programme implementation during longer projects/ programmes
and when feasible.
Activity 1
Answer the following questions.
Table 1.3 provides some key terms and their generally accepted definitions. This is how the
terms will be used throughout the material. These, combined with the additional information
provided, should give a good working knowledge of current practice for M&E.
Monitoring gives information on where a policy, program or project is at any given time (or
over time) relative to respective targets and outcomes. Monitoring focuses in particular on
efficiency and the use of resources. While monitoring provides records of activities and
results, and signals problems to be remedied along the way, it is descriptive and may not be
able to explain why a particular problem has arisen, or why a particular outcome has occurred
or failed to occur.
Impact Assessment is an aspect of evaluation that focuses on ultimate benefits. It sets out to
assess what has happened as a result of the intervention and what may have happened
without it. Where possible, impact assessment tries to distinguish changes that can be
attributed to the program from those caused by other external factors, as well as examining
unintended changes alongside those intended.
Until recently, M&E has primarily met donor needs for proving or legitimizing the purpose of
the program by demonstrating the effective use of resources. The LEGITIMIZATION function
demonstrates whether reforms are having the desired effect in order to be accountable to
clients, beneficiaries, development partners and taxpayers for the use of resources. This is
M&E in its legitimization function – PROVING.
There is a growing awareness of the need for practitioners to conduct their own evaluation
activities in order to increase understanding of development results, which in turn leads to
increased learning and improvement within their organizations. This LEARNING function
enhances organizational and development learning to increase the understanding of why
particular interventions have been more or less successful.
In addition to the benefits gained from undertaking M&E, there are other benefits to be
derived from the way in which M&E activities are undertaken. Using a strong participatory
approach to M&E, with the active engagement of government officials, helps to build,
strengthen and embed local M&E capability and oversight processes. This helps to build a
credible ongoing evaluation capacity in country.
There is no one generic project/programme cycle and associated M&E activities. This figure is
only a representation meant to convey the relationships of generic M&E activities within a
project/programme cycle.
The listed planning, monitoring, evaluation and reporting (PMER) activities will be discussed
in more detail later in this module. For now, the following provides a brief summary of the
PMER activities.
Baseline and end line studies are not evaluations themselves, but an important part of
assessing change. They usually contribute to project/programme evaluation (e.g. a final or
impact evaluation), but can also contribute to monitoring changes on longer-term
projects/programmes. The benchmark data from a baseline is used for comparison later in
the project/programme and/or at its end (end line study) to help determine what difference
the project/programme has made.
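The baseline-to-end-line comparison can be sketched in a few lines of Python; the indicator names and figures below are hypothetical, purely for illustration.

```python
# Hypothetical baseline and end line values (percentages) for two indicators.
baseline = {"households_with_safe_water_pct": 42.0, "child_vaccination_pct": 55.0}
endline = {"households_with_safe_water_pct": 68.0, "child_vaccination_pct": 71.0}

def change_since_baseline(baseline, endline):
    """Percentage-point change for every indicator measured in both studies."""
    return {k: round(endline[k] - baseline[k], 1) for k in baseline if k in endline}

print(change_since_baseline(baseline, endline))
# {'households_with_safe_water_pct': 26.0, 'child_vaccination_pct': 16.0}
```

The benchmark values are collected once at the baseline and held fixed, so that any later measurement can be compared against the same starting point.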
Recognizing their differences, it is also important to remember that both monitoring and
evaluation are integrally linked; monitoring typically provides data for evaluation, and
elements of evaluation (assessment) occur when monitoring. For example, monitoring may
tell us that 100 community facilitators were trained (what happened), but it may also include
post-training tests (assessments) on how well they were trained. Evaluation may use this
monitoring information to assess any difference the training made towards the overall
objective or change the training was trying to produce.
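That division of labour can be illustrated with a tiny sketch (the records below are invented): monitoring counts what happened, while the post-training test scores provide assessment data that an evaluation can later draw on.

```python
# Invented monitoring records for a community-facilitator training.
trainings = [
    {"facilitator": "A", "post_test_score": 85},
    {"facilitator": "B", "post_test_score": 60},
    {"facilitator": "C", "post_test_score": 92},
]

# Monitoring answers "what happened": how many facilitators were trained.
trained_count = len(trainings)

# Assessment data answers "how well": average post-training test score.
avg_score = sum(t["post_test_score"] for t in trainings) / trained_count

print(trained_count, round(avg_score, 1))  # 3 79.0
```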
A review is a structured opportunity for reflection to identify key issues and concerns, and
make informed decisions for effective project/programme implementation. While monitoring
is ongoing, reviews are less frequent but not as involved as evaluations. They are useful to
share information and collectively involve stakeholders in decision-making. They may be
conducted at different levels within the project/programme structure (e.g. at the community
level and at headquarters) and at different times and frequencies. Reviews can also be
conducted across projects or sectors. It is best to plan and structure regular reviews
throughout the project/programme implementation.
Table 1.5: The key differences among monitoring & reviews, evaluations and audits

Why?
  Monitoring & Reviews: Check progress, inform decisions and remedial action, update project plans, support accountability.
  Evaluations: Assess progress and worth, identify lessons and recommendations for longer-term planning and organizational learning; provide accountability.
  Audits: Ensure compliance and provide assurance and accountability.

When?
  Monitoring & Reviews: Ongoing during the project/programme.
  Evaluations: Periodic and after the project/programme.
  Audits: According to (donor) requirement.

Who?
  Monitoring & Reviews: Internal, involving project/programme implementers.
  Evaluations: Can be internal or external to the organization.
  Audits: Typically external to the project/programme, but internal or external to the organization.

Link to logical hierarchy?
  Monitoring & Reviews: Focus on inputs, activities, outputs and shorter-term outcomes.
  Evaluations: Focus on outcomes and the overall goal.
  Audits: Focus on inputs, activities and outputs.
International standards and best practices help to protect stakeholders and to ensure that
M&E is accountable to and credible with them. The following is a list of key standards and
practices for ethical and accountable M&E:
Minimizing bias helps to increase accuracy and precision. Accuracy means that the data
measures what it is intended to measure – for example, a survey meant to capture changes in
knowledge should measure what respondents actually know, not merely whether they
attended a training.
Similarly, precision means that data measurement can be repeated accurately and consistently
over time and by different people. For instance, if we use a survey to measure people’s
attitudes for a baseline study, the same survey should be administered in the same way
during an end line study two years later for precision.
As much as we would like to eliminate bias and error in our measurements and information
reporting, no research is completely without bias. Nevertheless, there are precautions that
can be taken, and the first is to be familiar with the major types of bias we encounter in our
work:
a. Selection bias results from poor selection of the sample population to measure/ study.
Also called design bias or sample error, it occurs when the people, place or time period
measured is not representative of the larger population or condition being studied. It is a
very important concept to understand because there is a tendency to study the most
successful and/or convenient sites or populations to reach (which are often the same).
For example, if data collection is done during a convenient time of the day, during the
dry season or targets communities easily accessible near paved roads, it may not
accurately represent the conditions being studied for the whole population.
b. Measurement bias results from poor data measurement – either due to a fault in the
data measurement instrument or the data collector. Sometimes the direct measurement
may be done incorrectly, or the attitudes of the interviewer may influence how questions
are asked and responses are recorded. For instance, household occupancy in a disaster
response operation may be calculated incorrectly, or survey questions may be written in
a way that biases the response, e.g. “Why do you like this project?” (rather than “What
do you think of this project?”).
c. Processing error results from the poor management of data – miscoded data, incorrect
data entry, incorrect computer programming and inadequate checking. This source of
error can be reduced through careful data management and verification.
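The selection bias described in (a) can be illustrated with entirely made-up numbers, showing how a convenience sample of road-accessible sites overstates an indicator:

```python
# Made-up village data: sites near paved roads happen to have better coverage.
villages = [
    {"near_road": True, "coverage_pct": 80},
    {"near_road": True, "coverage_pct": 75},
    {"near_road": False, "coverage_pct": 40},
    {"near_road": False, "coverage_pct": 35},
    {"near_road": False, "coverage_pct": 30},
]

def mean_coverage(sites):
    """Average coverage across the given sites."""
    return sum(v["coverage_pct"] for v in sites) / len(sites)

population_mean = mean_coverage(villages)  # true average across all villages
convenient_mean = mean_coverage([v for v in villages if v["near_road"]])

print(population_mean, convenient_mean)  # 52.0 77.5
```

Surveying only the accessible villages would report 77.5 per cent coverage against a true population figure of 52 per cent.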
It is difficult to fully cover the topic of bias and error and how to minimize them. However,
many of the precautions for bias and error are topics in the next section of this module. For
instance, triangulating (combining) sources and methods in data collection can help reduce
error due to selection and measurement bias. Data management systems can be designed to
verify data accuracy and completeness, such as cross-checking figures with other data
sources or computer double-entry and post-data entry verification when possible. A
participatory approach to data analysis can help to include different perspectives and reduce
analytical bias. Also, stakeholders should have the opportunity to review data products for
accuracy.
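As one illustration of such a verification step, computer double entry can be checked programmatically; the record IDs and fields below are hypothetical:

```python
# Hypothetical household records keyed in twice from the same paper forms.
entry_one = {"HH001": {"members": 5, "income": 1200}, "HH002": {"members": 3, "income": 800}}
entry_two = {"HH001": {"members": 5, "income": 1200}, "HH002": {"members": 8, "income": 800}}

def find_mismatches(first, second):
    """Return (record_id, field) pairs where the two data entries disagree."""
    issues = []
    for rec_id, fields in first.items():
        for field, value in fields.items():
            if second.get(rec_id, {}).get(field) != value:
                issues.append((rec_id, field))
    return issues

print(find_mismatches(entry_one, entry_two))  # [('HH002', 'members')]
```

Flagged records are then re-checked against the original forms before the data is analysed.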
To that end, an important principle is to ensure that M&E is considered alongside program
design and assessment, and that an M&E system and plan are put in place which clearly
articulate how evaluation will occur throughout the project management cycle. This material
offers strategies and tactics for the practical implementation of effective M&E activities that
help address these challenges.
Activity 2
Answer the following questions.
1. Monitoring and evaluation are closely related to planning. In particular, planning needs to
ensure that planned initiatives are evaluation-ready. Explain the main benefits that make
planning worthwhile.
2. Discuss practical guidance on how these norms and principles can be applied throughout
the evaluation process in order to meet the required quality standards and its intended
role.
Building and sustaining an M&E system is not easy – it requires commitment, time,
continuous effort, resources and ideally a champion to promote and prioritize the
importance of M&E. But it is possible and there is evidence from current practice that
efficient and effective M&E can be undertaken.
There are a number of key terms to understand and be able to use for M&E work.
Familiarization with the concepts, the strengths and weaknesses takes time, but is a
worthwhile investment;
M&E are distinct yet interdependent entities that tell us if we are on the right track,
doing the right things, for the right groups of people in the best way possible;
Once an M&E system is in place the challenge is to sustain it. In this respect M&E
systems are a continuous work in progress;
There are challenges to designing and implementing effective M&E but current practice
provides strategies and tactics for addressing those challenges.
Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.
Case Analysis
1. Good planning combined with effective monitoring and evaluation can play a major role
in enhancing the effectiveness of development programmes and projects. Explain the
inter-linkages and dependencies between planning, monitoring and evaluation.
2. Like monitoring and evaluation, inspection, audit, review and research functions are
oversight activities, but they each have a distinct focus and role and should not be
confused with monitoring and evaluation. Differentiate them.
Unit 2
Frameworks and Indicators for Monitoring and Evaluation
Introduction
Hello dear learner! This is the second unit of the module titled ‘Frameworks and Indicators
for Monitoring and Evaluation’. Successful projects are usually well designed, focused on
their purpose with clearly articulated aims, objectives and actions. The same is true for the
successful assessment of programs and projects. It is important to have a clear framework
and plan of action for Monitoring and Evaluation activities that is incorporated into the
overall project plans. This unit looks at how Monitoring and Evaluation can be effectively
integrated into project planning through the use of tried and tested approaches and the
development of key indicators.
Learning Objectives:
1. Identify the frameworks and systems for the planning and management of projects;
2. Differentiate between the logical framework approach (LFA) and the associated Log
Frame (LF);
3. Describe the basic concept behind Results-oriented approaches;
4. Depict how the logic models and frameworks improve the quality of Monitoring and
Evaluation processes;
5. List the main types of indicators and targets that are used in evaluation work;
6. Analyze the use of comparable and core indicators.
The Log Frame helps to clarify the objectives of any project, program, or policy and improve
the quality of M&E design. It aids in the identification of the expected causal links – the
‘program logic’ - in the following results chain: inputs, processes, outputs, outcomes, and
impact. It leads to the identification of performance indicators at each stage in this chain,
looks at the evidence needed to verify these indicators as well as the assumptions that
underlie them and the risks which might impede the attainment of results.
The Logical Framework (Rosenberg & Posner, 1979) was developed for the United States
Agency for International Development as a tool to help conceptualize a project and analyze
the assumptions behind it. Since the development of the Logical Framework, it has been
adopted, with various adaptations (GTZ, 1983), by a large number of bilateral and
international development organizations. The Logical Framework has proven extremely
valuable for project design, implementation, monitoring, and evaluation.
The Log Frame is so named because of the logic processes that underpin its creation and
format. This logic is explained and demonstrated through something called the program logic
model. A logic model may also be called a theory of change, program action model, model of
change, conceptual map, outcome map or program logic. It is a way of thinking about how the
various components of a project relate to each other to achieve impact and meet goals. The
model is illustrated in Figure 2.1. This shows that specified inputs are used in a project to
produce or undertake a series of activities which in turn deliver things such as advisory
services, training, and public awareness campaigns as part of programs and projects.
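The chain in Figure 2.1 can be sketched as a simple data structure (the entries are illustrative, not drawn from any particular project):

```python
# Illustrative results chain, read from inputs through to impact.
logic_model = {
    "inputs": ["funding", "staff", "equipment"],
    "activities": ["run training workshops", "prepare awareness materials"],
    "outputs": ["advisory services delivered", "officials trained", "campaign launched"],
    "outcomes": ["improved practices among those trained"],
    "impact": ["progress towards the project goal"],
}

# Each level is expected to lead to the next one in the chain.
RESULTS_CHAIN = ["inputs", "activities", "outputs", "outcomes", "impact"]
for level in RESULTS_CHAIN:
    print(f"{level}: {', '.join(logic_model[level])}")
```

Laying the chain out this way makes the expected causal links explicit, which is exactly what the Log Frame matrix later formalizes.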
As Figure 2.3 shows, monitoring work focuses on the progress and tracking of inputs,
implementation of activities and production of outputs, whereas evaluation tends to take
place at specific points/stages in a project and permits an assessment of progress over a
longer period of time. The focus is on tracking changes in relation to outcomes (with
reference to objectives) and impact, in terms of the project goals. Also, the LF, when
presented in a table-like matrix format, can be a useful way of capturing both the content of a
project together with the key components of the M&E plan.
Table 2.1 summarizes a project and its key M&E features in a systematic way, showing:
what a project intends to achieve;
what it intends to do to achieve this and how;
what the key assumptions are in doing this; and
how the inputs, activities, outputs, outcomes and impact will be monitored and
evaluated.
How will the logic models and frameworks improve the quality of M&E processes?
Using a tried and tested form of LF will not only encourage a clarity of purpose and practice
for project implementation but will also provide the same for the nature and form of project
M&E to be undertaken. Training is often required to promote the effective use of LFs.
However, if used appropriately they provide an opportunity and vehicle for engaging a range
of partners and other stakeholders in a participatory approach to M&E and communicating
intent to a wider audience. There are strengths and weaknesses in any approach. Table 2.2
summarizes those associated with Log Frames.
However, the results-based approach highlights two aspects of M&E activity that differ from
standard LFs:
a) The focus on measuring ‘results’ throughout a project which are described and linked by
a causal impact chain; and
b) How impact is measured and attributed throughout the impact chain.
This results-based impact chain model can also be translated into a matrix similar to the Log
Frame, for project planning and management as is illustrated below.
The fundamental challenge for the program manager is to develop appropriate performance
indicators which measure project performance. These indicators measure the things that
projects do, what they produce, the changes they bring about and what happens as a result
of these changes.
In order to choose indicators, decisions must be made about what to measure. Having the
right indicators underpins effective project implementation and good M&E practice.
What is an indicator?
To measure something it is important to have a unit or variable ‘in which’ or ‘by which’ a
measurement is made, i.e. an indicator. For example, in business enabling environment (BEE)
work, if the aim is to make registering a business easier, then changes in the time taken and
the costs of registering are useful indicators of whether and how the intervention has made a
difference.
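Continuing that example with invented before-and-after figures, the two indicators might be computed as percentage reductions:

```python
# Invented measurements before and after a business registration reform.
before = {"days_to_register": 30, "cost_usd": 250}
after = {"days_to_register": 12, "cost_usd": 110}

def pct_reduction(before_value, after_value):
    """Reduction as a percentage of the starting value."""
    return round(100 * (before_value - after_value) / before_value, 1)

print(pct_reduction(before["days_to_register"], after["days_to_register"]))  # 60.0
print(pct_reduction(before["cost_usd"], after["cost_usd"]))  # 56.0
```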
Process Indicators:
M&E is inevitably focused on results, and so what has been achieved tends to be the priority.
However, the process of how results are achieved is often as important as the results
themselves. For example, measuring the changes in attitudes and commitment of the front-line
officers when reforming business registration procedures may give insight into why
businesses are still reluctant to register despite decreases in the time and cost of doing so.
Process-related aspects in evaluation can be more difficult to measure as it is harder to
predict when they will occur and who will be involved. Also, processes can be experienced
and perceived differently by different stakeholders.
Cross-cutting indicators:
The activities of and results arising from development interventions can be experienced and
perceived differently by different stakeholders. Successful M&E takes this into account.
Indicators must adequately reflect and capture the views and experiences of different
stakeholders. In considering indicators for different stakeholders, it is important to
consider and include those who may lose out as a result of the intervention as well as those
who will benefit.
Sometimes there is insufficient data to develop targets at the early stages of a project, and it
would be a fundamental mistake to make up unrealistic targets at that point. Therefore, it is
entirely acceptable to present indicators without targets in an early LF. The important thing is
that the LF includes indicators that measure the elements of change that are likely to happen.
Once approval has been given and the intervention is underway indicators can be checked
with partners and stakeholders and targets can be constructed and agreed.
Others believe that more appropriate indicators are developed through a participative
process of development with intervention partners and stakeholders. This is likely to achieve
greater ownership of the results of the intervention. The insight of a local view may bring the
added benefits of a greater commitment to collecting the required data, understanding of
the importance of accuracy and timely collection and help to build local evaluation capability
and capacity as noted in previous section.
Ideally, both views can be incorporated. One way of achieving this is to have a set of core or
common or comparable indicators that have been developed by the experts to allow for
cross-project and/or country comparison, and then customized indicators developed through
local participative processes of analysis and design. The definitions of each of these
indicators are given below.
Outputs are closely related to project deliverables. They include recommendations and
amendments to laws and regulations, trainings, and consultations, all of which can be counted.
In the longer term, outcomes can be viewed from both the government (public welfare) and
the enterprise perspective and are typically seen in terms of reduced steps, time and cost of
gaining the registration, license or permit, or complying with the regulatory procedures. They
can also capture reduced risk through the reduction in delays and reduction in corruption.
This in turn leads to quicker and cheaper registration and increased levels of compliance with
regulatory systems.
The impact of business regulation reforms is the contribution to economic growth in the
formal economy via the improved business enabling environment. Indicators include the
aggregate cost saving enjoyed by businesses through the improved regulatory environment,
productive private sector investments (i.e., foreign direct investments, gross fixed capital
formation) and the number of formal enterprises and jobs (formalization).
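As a sketch of the aggregate cost-saving indicator, with entirely hypothetical figures:

```python
# Hypothetical reform figures: how much the business community saves per year.
registrations_per_year = 10_000
saving_per_registration_usd = 140  # fees plus time cost avoided per registration

# Aggregate private sector saving attributable to the improved environment.
aggregate_saving_usd = registrations_per_year * saving_per_registration_usd
print(aggregate_saving_usd)  # 1400000
```

In practice the per-registration saving would itself be estimated from measured changes in fees, steps and waiting time, not assumed as here.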
In addition to these core indicators, there are additional indicators that might be relevant to
specific types of programs, especially at the outcome and impact levels. For
instance, different industries are usually regulated in different ways: the
chemical industry will involve different legislation and regulations than, say, the
garment sector. Hence industry-specific reforms may include a suite of regulatory reforms in
reference to a particular industry/sector. Additional indicators will need to capture the
outcomes and impact on the industry itself and associated increases in productivity, growth
(for example via exports) and investment.
Using core indicators has distinct advantages. They provide an objective and comparable
basis for assessing performance and therefore provide a solid foundation for management
decisions. The comparable dimension means that core indicators can be used for
benchmarking and for facilitating learning within the donor institution and with external
stakeholders.
However, there are also challenges and limitations to using core indicators. One of the main
arguments is ‘our situation is different’ and that core indicators do not address country-
specific objectives. They are seen as a very ‘top down approach’ imposed on field offices and
projects and do not promote local stakeholder ownership in projects or their evaluation.
A major issue for some programs is that core indicators, especially for outputs and outcomes,
typically use counting techniques. For example, an outcome for a business regulatory reform
program is the number of revised laws passed. An issue arises when this type of indicator is
used to compare across countries. In one country a major piece of law may need adjustment
to reduce cost and time in business licensing procedures. This could be counted as ‘1’ as an
output indicator. In a neighboring
country, the legal framework for business regulations could look quite different, and the
reform intervention in this case has required multiple small legislative changes. But, does this
then compare like with like? What is the magnitude, or ‘quality’ of the indicator? In this
respect, core indicators will only tell some of the story. They are important for developing
benchmarks and for donor oversight of reform interventions. However, they must be
contextualized and complemented by additional customized (or bespoke) indicators and
other monitoring information. This will be discussed in more depth in the forthcoming
sections.
DB (Doing Business) indicators are extremely important, useful and powerful. However, both their
strengths and limitations must be understood in order for them to be used most
appropriately and to effectively add value to M&E work. Ideally the DB indicators should be
triangulated with primary data and also qualitative indicators and methods to capture
perceptions and experiences of diverse stakeholders as well as the procedures associated
with reforms.
Activity 2
Answer the following questions.
1. What is the significance of Target setting as part of M&E planning? Do targets change?
Explain it.
2. What are important considerations for a monitoring and evaluation plan?
The building blocks of a fit-for-purpose M&E for a project consist of a series of logical
steps to demonstrate that the proposed or enacted reform has a means of measurement
known as indicators that are integrated into the planning and management cycle.
Clarity regarding the purpose and use of an indicator will contribute to the potential for
benchmarking, comparison and cross-checking (or triangulation) of processes and results.
The Logic model and its associated frameworks is a tried and tested mechanism for
thinking through and presenting an overview of a project and the attendant M&E and IA
process, activities and timescale.
Indicators are a critical component of effective M&E.
Indicators are required for each aspect (monitoring, evaluation and impact) and at all
levels of a project (inputs, outputs, outcomes and impact).
There are several types of indicators – quantitative and qualitative, direct and indirect,
activity and process, and those representing the diversity of stakeholders – so it is likely
that a mix will be required.
Measuring change is costly. However, it is still necessary to ensure that there are
sufficient and relevant indicators to measure the breadth of change and to provide cross-
checking or triangulation.
The creation of universal impact indicators is being explored with concepts such as
private sector savings and aggregate cost savings.
There is a wealth of resources (in print and on-line) to help develop skills and insight. Key
texts and references are listed in the Handbook.
Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.
1. Consider any project which may be hypothetical or real and demonstrate by using a
logical framework model.
2. Briefly present the elements of a Project Profile and explain how the information in it
is presented.
3. Consider any project which may be hypothetical or real and develop alternative Project
profiles.
4. In M&E planning, one of the things that managers have to work out is a set of
indicators. Understandably, questions often arise regarding what indicators are, their
importance and what to consider when choosing them. Explain about indicators, their
types, their importance and eventually, how to select appropriate indicators.
5. Indicators
What is an indicator?
An indicator is a variable that is normally used as a benchmark for measuring program
or project outputs. It is “that thing” that shows that an undertaking has had the
desired impact. It is on the basis of indicators that evidence can be built on the impact
of any undertaking. Most often indicators are quantitative in nature; in some cases,
however, they are qualitative.
Indicators are often confused with other project elements such as objectives or
targets – understandably so. Unlike targets or results, which specify the level of
achievement expected, indicators specify what will be measured. Indicators serve
distinct purposes at each phase of a project:
1. At the initial phase of a project, indicators are important for the purposes of
defining how the intervention will be measured. Through the indicators,
managers are able to pre-determine how effectiveness will be evaluated in a
precise and clear manner.
2. During project implementation, indicators serve the purpose of aiding program
managers assess project progress and highlight areas for possible improvement.
In this case, when the indicators are measured against project goals, managers
can be able to measure progress towards goals and inform the need for
corrective measures against potential catastrophes.
3. At the evaluation phase, indicators provide the basis for which the evaluators will
assess the project impact. Without indicators, evaluation becomes an arduous
task.
Types of indicators
The three widely acknowledged types of indicators are process indicators, outcome
indicators and impact indicators.
1. Process indicators: are those indicators that are used to measure project
processes or activities. For example, in a Safe Water project, this could be “the
number of chlorine dispensers installed at water points” or “the number of
households that have received training on chlorination of water.”
2. Outcome Indicators: Are indicators that measure project outcomes. Outcomes
are the medium-term effects of a project. For example, in the case of a Safe Water project,
outcome indicators could be “the proportion of households using chlorinated
drinking water” or “the percentage of children suffering from diarrhea.”
3. Impact Indicators: Are indicators that measure the long term impacts of a project,
also known as the project impact. In the case of the Safe Water project, it could
be, “the prevalence of under 5 mortality.”
Examples of Indicators
The following are some indicators for a climate change adaptation project at community
level which focuses on farmers.
Process indicators
1. No. of farmers supplied with drought-resistant crops
2. No. of community awareness meetings conducted
3. No. of wells/dams constructed
4. No. of farmers enrolled in crop insurance
5. No. of irrigation systems constructed
Outcome indicators
1. Proportion of food secure households
2. Percentage of malnourished children under-5
Impact indicators
1. Employment rates of the region
2. Prevalence of under-5 mortality
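Indicator sets like these are usually recorded in a structured M&E plan alongside a unit, baseline and target. The following Python sketch shows one possible way to organize them; the units, baseline and target values are hypothetical illustrations, not figures from the module.

```python
# A minimal, illustrative structure for indicators in an M&E plan.
# Units, baselines and targets below are invented for the example.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str        # what is measured
    kind: str        # "process", "outcome" or "impact"
    unit: str        # counting unit or percentage
    baseline: float  # value before the intervention
    target: float    # value expected at project end

safe_water_indicators = [
    Indicator("Chlorine dispensers installed at water points", "process", "count", 0, 120),
    Indicator("Households using chlorinated drinking water", "outcome", "%", 15.0, 60.0),
    Indicator("Children under 5 suffering from diarrhea", "outcome", "%", 22.0, 10.0),
    Indicator("Under-5 mortality prevalence", "impact", "per 1,000", 48.0, 35.0),
]

# Group indicator names by type for a summary report.
by_kind = {}
for ind in safe_water_indicators:
    by_kind.setdefault(ind.kind, []).append(ind.name)

print(by_kind["outcome"])
```

Recording baseline and target with each indicator makes the later ‘before and after’ comparison straightforward.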
Hello dear learner! This is the third unit of the module titled ‘Baselines and Data for
Monitoring and Evaluation’. Effective monitoring and evaluation requires the collection of
baseline data for selected Indicators. These should be updated as the project progresses. The
major challenge is that the different types of activity that typically make up a project are
usually coupled with the variability, limited availability and poor quality of available data.
The process of collecting primary data on a routine basis and upgrading the quality of
existing data is often constrained by the costs of both time and finances. Data collection and
analysis require substantial financial resources, technical skills and time, all of which are
typically in short supply in many situations. There is a need to carefully manage which
indicators are measured, the type of data required to assess progress, the availability of this
data, how it will be collected, the frequency and format of monitoring activities (collection,
reporting, workshops, reviews, meetings) and who participates. This unit will look at the
ways of establishing baselines, doing surveys, sourcing and collecting data.
Learning Objectives:
For example, in a project that aims to reform the regulatory procedures for import and
export, an initial assessment of the current procedures and processes must be completed.
This is also the case for business registration, local level licensing, sectoral licensing,
inspections or tax regime reform. There may be a variety of perspectives on what the
situation is and what changes need to happen. A second measurement should occur when
results can or should be expected (e.g after 6 months) following the implementation of the
streamlined process. This measurement is intended to determine whether the changes made
have actually resulted in improvements.
It is worth noting that many performance indicators may display a “J-curve” effect (showing
a decrease prior to an increase) where for example the number of companies registered
initially decreases (because of the weeding out of “dead” companies) or financial
performance deteriorates before improving. Careful tracking of indicators from the early
stages of the reform intervention will allow the capture of the real baseline data. Project
teams will therefore need to ensure that performance is measured from the very inception of
the reform initiative to guarantee that performance targets are met. In order to determine
whether a reform process has been successful, it is necessary to conduct an evaluation by
essentially taking a ‘before’ and ‘after’ snapshot of performance. This aspect of evaluation is
discussed in more detail in the forthcoming section.
As discussed in the previous section, it is vital to include data on both quantitative and qualitative
indicators aiming to capture the starting points on facts, processes and attitudes. In this
section, we explore the use of a range of primary data collection methods including focus
groups, surveys and one-to-one interviews. It is recognized that comprehensive enterprise
surveys are costly and logistically demanding to undertake.
Regulatory baselines
A regulatory baseline, or regulatory mapping exercise, collects data on the current regulatory
system (which could be for registration, licensing, inspections, taxation, or any other
business regulation). This type of baseline is similar to what is captured in the Doing Business
surveys. However, it will typically not capture the level of detail required by a program team,
especially if the program is focused at the sub-national level, at sector or industry level, or
from the perspective of MSMEs. A thorough regulatory baseline should therefore map out
the regulatory procedure in detail. This will provide the starting point for a rigorous ‘Before
and After’ assessment and is therefore a crucial part of M&E.
Key components of the regulatory baseline
A legal assessment of official regulations and procedure to compile an inventory of
current relevant laws and regulations.
A detailed integrated analysis or mapping of the current official framework and
processes for regulatory procedures, including the official cost of the procedures and
the number of steps, based on information and observation from the implementing
regulatory agencies.
Regulatory process mappings can capture the process for different procedures or for the
same procedure but different types or sizes of firm. This task may be done within the
program team, or with specialized assistance, for example a combination of international and
local consultants.
Performance baselines
In addition to designating a baseline for the regulatory procedures, it is also important to
gather baseline data on current business regulation performance. For typical regulatory
reform interventions, this could include performance indicators such as (but not limited to):
the number and rate of businesses registered; the number and rate of licenses or permits
issued; the number of inspections conducted during a designated time period; the rate of
compliance (with any annual return requirements); and various ratios of the number of
tax-registered firms to the amount of tax collected.
The data records will need to be comparable given the range and diversity of business
regulations and their application. In the case of business licenses for example, firms of
different sizes and engaging in different types of business are likely to apply for different
numbers and types of licenses which may have different procedures and requirements. It will
be important to clarify the number of business activities subject to licensing in a particular
country. Following this, it may be appropriate to compile an aggregate performance indicator
which works across these different categories: i.e. the percentage of businesses whose
license applications are not processed within the legally mandated maximum time periods for
each license.
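The aggregate performance indicator described above – the percentage of license applications not processed within the legally mandated maximum time for each license category – can be sketched as a simple calculation. The categories, legal time limits and application records below are hypothetical.

```python
# Hypothetical legal maximum processing times (days) per license category.
legal_max_days = {"trade": 10, "construction": 30, "food_safety": 14}

# Hypothetical application records: (category, actual processing days).
applications = [
    ("trade", 8), ("trade", 15),            # one on time, one late
    ("construction", 25), ("construction", 40),
    ("food_safety", 12), ("food_safety", 20),
]

# Count applications processed later than the legal maximum for their category.
late = sum(1 for cat, days in applications if days > legal_max_days[cat])
pct_late = 100.0 * late / len(applications)
print(f"{pct_late:.1f}% of applications exceeded the legal time limit")  # 50.0%
```

Because the legal limit is applied per category, the indicator remains comparable even where licenses have very different procedures.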
It is worth noting that the ease of compiling business registration data for example will be
highly dependent on the record keeping of the regulatory agencies. If there is limited
computerization, this may require trawling through paper–based registries. If local records
are inadequate, some simple low-cost surveys of local firms could be used to calculate proxy
indicators. This task could be carried out by the program team, a local consultancy or
business graduates could be hired and supervised by international survey experts.
Enterprise baselines
While the regulatory baseline and DB indicators capture the legal structure of business
regulations, they do not capture the perception and experience of businesses subject to
regulation. These are customer-satisfaction indicators. An enterprise baseline is
complementary to a regulatory baseline and will provide first-hand accounts of the
challenges facing entrepreneurs in firms of different sizes and from different sectors which
may not be captured in existing national studies. Data on the experience of processes and
also perceptions can be collected directly from a sample of firms. This is typically referred to
as a Business Climate Survey (BCS) or enterprise survey, and is often used to specifically
capture the perceptions and experience of MSMEs.
Appropriate surveys are costly and logistically not easy to do. But economizing on this could
be a false economy. A sound business climate survey can be a useful, if not a critical,
instrument for strengthening the business reform agenda. The higher cost can be justified by
the multiple use of the survey i.e., beyond being a baseline for M&E purposes.
Benefits of an enterprise survey.
Provides official cost of the procedures and the number of steps involved in the
process.
Monitors not only progress of the project with regard to its impact on the business
climates, but can be made available for the public and the use of other development
partners;
Produces facts for a private-public dialogue and media briefings and feeds them into
the reform process.
An alternative method is using a technique called ‘recall’ through qualitative research with
stakeholders. For a business regulatory reform program for example, a sample of businesses
and local authorities could be asked to recall their experiences of the regulatory procedure
and associated costs. Recall is potentially valuable but often an unreliable way to estimate
conditions prior to the start of a program. However, research evidence suggests that while
estimates from recall are frequently biased, the direction and sometimes the magnitude of
the bias is often predictable so that useable estimates can be obtained. The utility of recall
can often be enhanced if two or more independent estimates can be triangulated.
There is also an issue of neutrality. If the implementing government is also responsible for
provision of data there may be a strong case for relying as far as possible on data from
credible international sources which are independent from government. This reference or
comparison will enhance the neutrality and credibility of the assessment. An added
dimension is that a country’s efforts to improve these indicators will send the right signals to
the outside world.
The local capacity for collecting, storing and analyzing data may also be limited. Many BEE
reform programs are therefore tasked with collecting this data directly and, increasingly,
with working with national organizations to develop this primary data.
The key data collection tools for M&E are listed in Table 3.1 with the main features of each
tool listed alongside. This list is not comprehensive, nor is it intended to be. Some of these
tools and approaches are complementary; some are substitutes. Some have broad
applicability, while others are quite narrow in their uses. The choice of which is appropriate
for any given context will depend on a range of considerations. These include the uses for
which M&E is intended, the main stakeholders who have an interest in the M&E findings, the
speed with which the information is needed, and the cost. Different tools/instruments have
strengths and weaknesses as methods of collecting different types of data and in their use
with different target groups.
Preparing baselines for a project is a significant task that should be started as early as
possible;
A baseline is an investment in good quality M&E and potentially the sustainability of a
project;
A good baseline maximizes the use of secondary data in the interest of cost, neutrality
and the potential for comparison;
A good baseline recognizes that the challenges of collecting primary data can be better
managed if there is clarity about what indicators need to be measured and how this will
improve the quality of M&E and IA;
Good baselines can be put to multiple use – for engaging stakeholders, communicating
with a variety of audiences and building donor co-operation and/or harmonization;
There are multiple sources of data – each with their own strengths and limitations. On-
line sources are likely to be more current.
Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.
Case Analysis
The M&E plan summarizes data collection methods and tools, but these still need to be
prepared and ready for use. Sometimes methods/tools will need to be newly developed but,
more often, they can be adapted from elsewhere. Illustrate the different data collection
methods for M&E from your experience.
The following summarizes key data collection methods and tools used in monitoring and
evaluation (M&E). This list is not complete, as tools and techniques are continually
emerging and evolving in the M&E field.
Case study. A detailed description of individuals, communities, organizations, events,
programmes, time periods or a story. These studies are particularly useful in evaluating
complex situations and exploring qualitative impact. On its own, a case study only helps to
illustrate findings and draw comparisons (commonalities); only when combined (triangulated)
with other case studies or methods can one draw conclusions about key principles.
Checklist. A list of items used for validating or inspecting whether procedures/steps have
been followed, or the presence of examined behaviors. Checklists allow for systematic
review that can be useful in setting benchmark standards and establishing periodic
measures of improvement.
Community book. A community-maintained document of a project belonging to a
community. It can include written records, pictures, drawings, songs or whatever
community members feel is appropriate. Where communities have low literacy rates, a
memory team is identified whose responsibility it is to relate the written record to the
rest of the community in keeping with their oral traditions.
Community interviews/meeting. A form of public meeting open to all community
members. Interaction is between the participants and the interviewer, who presides
over the meeting and asks questions following a prepared interview guide.
Direct observation. A record of what observers see and hear at a specified site, using a
detailed observation form. Observation may be of physical surroundings, activities or
processes. Observation is a good technique for collecting data on behavioural patterns
and physical conditions. An observation guide is often used to reliably look for consistent
criteria, behaviors, or patterns.
Document review. A review of documents (secondary data) can provide cost-effective
and timely baseline information and a historical perspective of the project/programme. It
includes written documentation (e.g. project records and reports, administrative
databases, training materials, correspondence, legislation and policy documents) as well
as videos, electronic data or photos.
Focus group discussion. Focused discussion with a small group (usually eight to 12
people) of participants to record attitudes, perceptions and beliefs relevant to the issues
being examined. A moderator introduces the topic and uses a prepared interview guide
to lead the discussion and extract conversation, opinions and reactions.
Introduction
Hello dear learner! This is the fourth unit of the module titled ‘Monitoring, Evaluation and
Impact Assessment’. Monitoring and evaluation are complementary and yet distinct aspects
of assessing the result of a project. The function of monitoring is largely descriptive and its
role is to provide data and evidence that underpins any evaluative judgments. As noted
earlier, monitoring is ongoing, providing information on where a policy, program or project is
at any given time (and over time) relative to its respective targets and outcomes. The
function and role of evaluation is to build upon monitoring data, bring together additional
information and examine whether or not the project results have been achieved. This unit is
about evaluation – the what, the who, the when and the how questions. It looks at whether
projects have achieved their outcomes (the project ‘purpose’ in logic model terms) and what
has been their impact (meeting the project ‘goal’ in logic model terms). It addresses how to
implement good evaluation practices with the use of particular analytical techniques.
Learning Objectives:
Evaluations can be categorized in several different ways according to when they take place,
where they focus and hence what processes they use. The logic model allows for a
systematic and diagnostic review of BEE interventions and links M&E indicators and
processes to stages of the program cycle. The core evaluation criteria can also be linked to
the LF as shown by Figure 4.1. The intention is to assess:
The extent of compliance and appropriateness of the development partners’ Project
objectives and strategy with its overall goals and mandate;
The relevance of the development partners’ strategic approach and planned operations
for the planned Project interventions, the management of projects and programs being
delivered;
The effectiveness of the project activities or the services or technical assistance (TA)
provided, and
The sustainability of project or investment climate improvements achieved via the
services or TA provided.
Usually project evaluation is undertaken in line with donor reporting requirements and
typically takes place at designated stages in the program cycle (often termed mid-term or
project progress review), or immediately after the program intervention is completed (post-
program evaluation or completion reporting). Covering all of the core criteria in all
evaluations may be an ideal but is not always practical. The evaluation may be conducted at
too early a stage to assess impact or sustainability in the longer term.
However, in any evaluation it should always be possible to assess some degree of relevance,
effectiveness and efficiency as minimum criteria. The precise protocols and practices of
when, what and who is involved in undertaking evaluation and in particular assessing the
impact of interventions, varies between development partners and organizations. For the
purpose of this material, the approach for the planning and practice of evaluation is
separated into two distinct but interrelated types of activity, differentiated by their timing,
focus and process.
For example, a BEE reform intervention will typically provide various elements of technical
assistance to the government in order to achieve specific outcomes (e.g., new enacted
legislation leading to an improved investment climate), which in turn would lead to impact
(i.e., investment flows, economic growth and employment, and poverty alleviation). The
review and impact evaluations looked at different aspects of the ‘results achieved’ as shown
in Table 4.3.
However, evaluations (especially end of project and post-program impact assessment) are
activities that are typically undertaken by independent consultants. They bring specialist
technical expertise and a sense of objectivity to the evaluation, which are two important
criteria for meeting the quality standards noted above. The consultants may come from the
The choice of who undertakes the evaluation of a project and how they are selected and
commissioned will depend upon the nature and scale of the project being assessed. The
balance and roles of those internal and external to the project and the practicalities of
planning for commissioning and managing evaluation consultants are discussed further in
forthcoming Section.
Will who does the evaluation affect diversity and/or inclusion issues?
The previous section noted the importance of ensuring that any evaluation work makes
provision for capturing issues of diversity and is as inclusive as possible. Explicit steps need to be
undertaken to ensure that this happens throughout the process of designing and
implementing the evaluation approach. Consideration should be given to the questions,
which indicators are selected, which target groups are sampled, what research tools are
used, who undertakes the research and when and where research takes place. These
decisions will all influence the degree to which the diversity of stakeholders will be captured
and the level of inclusiveness achieved.
The approach (Table 4.5) is based on the logic model. It does not present a new methodology
or set of indicators but rather emphasizes three elements of impact assessment.
First, it recommends that impact assessment is brought to the fore in any
project/program planning process and that discussions involve consultation with a wide
group of stakeholders.
Secondly, it recommends that any ‘cause and effect relationships’ that are assumed to
underpin the proposed project intervention are examined and checked with key
stakeholders as part of an ex ante proposal. It is at this stage that project designers need
to consider impact for a diverse range of groups and in particular how project and
interventions are likely to impact on disadvantaged groups. The use of analytical
tools such as causal chain analysis can support this.
Program design:
- Determination of policy options that address constraints on the private sector and project
- Selection of impact indicators – social, economic, institutional, environmental
- Conduct causal chain analysis, assess impact significance
- Develop scenarios
Program implementation:
- Establish monitoring system and ongoing monitoring
- Focus groups and panels
- Point of delivery surveys, score cards
- Phone surveys
- Mid-term assessment
Program review – ex post evaluation:
- Output-to-purpose review or purpose-to-goal review
- Comparison of actual impacts and baseline
- Evaluation of implementation and performance
- Determine quality of ex-ante assessment
Table 4.7: The strengths and weaknesses of different data collection tools

Criteria                                                | Surveys | Rapid appraisal | Participant observation | Case studies | Focus groups
Coverage – scale of applicability                       | High    | Medium          | Low                     | Low          | Low
Representative                                          | High    | Medium          | Low                     | Low          | Low
Ease of quantification                                  | High    | Medium          | Low                     | Low          | Low
Ability to isolate/measure non-project causes of change | High    | Low             | Low                     | Low          | Low
Speed of delivery                                       | Low     | High            | Medium                  | High         | High
Expense of design and delivery                          | High    | Medium          | Medium                  | Low          | Medium
For example, where the changes in the time, duration and cost of regulatory compliance are
of interest, it is valuable to survey a large representative sample of businesses
experiencing these regulations. The focus is to capture experiences of compliance in
consistent, measurable terms such as frequency, time and cost.
In a situation that affects several parties with different interests, representatives of all
parties, as well as some neutral respondents, should be interviewed. This provides a
triangulation effect that largely helps to verify information, cut through conflicting
evidence, and reveal insights in a cost-effective way.
Performance scoring
Some organizations use scoring systems as an integral part of the review process to rate
aspects of performance; for example, the likelihood that the outputs and outcomes of the
project will succeed (or have succeeded, depending on when the scoring is done). Annual
scoring can provide important data for accountability, learning and decision making.
Scoring systems are particularly useful for ‘process-oriented’ project interventions, such as
regulatory governance or PPD initiatives. For example, PPD forums have been asked to assign
a score from one to five to monitor government progress on project proposals. This can be
presented visually, as illustrated in figure 4.3.
Another useful tool – the evaluation wheel - has been developed to rate, analyze and present
performance on 12 aspects of activities (see figure 4.3). By plotting scores for each of these
aspects along the spoke of the wheel, the ‘shape’ of performance for each dimension of the
work can be observed and discussed. Each aspect on the wheel has associated indicators for
measurement and a scoring system (from 0 = not satisfied to 5 = very satisfied) enabling the
cross checking of data on similar aspects of the wheel.
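Tallying a scoring exercise such as the evaluation wheel is straightforward. In the sketch below, each aspect is rated 0–5 by several raters and an average per aspect is computed; the aspect names and scores are invented for illustration.

```python
# Hypothetical rater scores (0 = not satisfied, 5 = very satisfied)
# for a few aspects of an evaluation wheel.
scores = {
    "Relevance":      [4, 5, 3],
    "Effectiveness":  [3, 3, 4],
    "Sustainability": [2, 3, 2],
}

# Average per aspect; these would be plotted along the wheel's spokes.
averages = {aspect: sum(vals) / len(vals) for aspect, vals in scores.items()}
for aspect, avg in averages.items():
    print(f"{aspect}: {avg:.2f}")
```

Plotting the averages along the spokes gives the ‘shape’ of performance that the text describes.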
This raises the question of what standards to adopt as a reference point. The standard will
sometimes be predetermined and will in other cases depend either on the terms of reference
given to the evaluation team or the evaluator’s own professional judgment. In its simple
form, CBA is carried out using only financial costs and financial benefits. For example, a
simple cost benefit ratio for a road scheme would measure the cost of building the road, and
compare this to the economic benefit of improving transport links. It would not measure
the cost of environmental damage, nor the benefits of lower congestion or of new
business activity attracted by improved transport links. The CBA analysis depends on the
timeframe of the costs and benefits being examined.
Costs are either one-off, or may be ongoing.
It is important to build this effect of time into the analysis by calculating the net present value
including a discounted rate over time to reflect the opportunity cost of using resources. CBA
of a project or program can become an extremely complex exercise if all of the variables are
considered, especially where the non-financial variables are many and difficult to quantify. A
more sophisticated approach to building a cost benefit model is to try to put a financial value
on intangible costs and benefits. This can be highly subjective.
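The discounting step described above can be sketched as a net present value (NPV) calculation. The project cash flows and discount rate below are hypothetical, chosen only to show the mechanics.

```python
# Net present value of a stream of yearly net benefits (benefits - costs),
# discounted to reflect the opportunity cost of using resources over time.
def npv(rate, cash_flows):
    # cash_flows[0] occurs now (year 0), cash_flows[1] in one year, etc.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical reform project: a large up-front cost, then yearly savings.
flows = [-100_000, 30_000, 40_000, 40_000, 40_000]
print(round(npv(0.10, flows), 2))  # positive NPV at a 10% discount rate
```

A positive NPV at the chosen discount rate suggests the benefits justify the costs; the result is sensitive to the rate assumed.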
A different form of cost benefit quantification exercise can be undertaken using the results
from an enterprise survey to estimate the saved costs to the average business, and from this
extrapolating the total savings to the economy as a whole; in effect, this estimates the
economic impact.
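The extrapolation from a survey sample to the whole economy is a simple multiplication, sketched below with invented survey figures and an invented count of affected firms.

```python
# Hypothetical enterprise survey: yearly saving per surveyed firm (USD)
# attributed to the regulatory reform.
surveyed_savings = [120.0, 80.0, 150.0, 50.0]
avg_saving = sum(surveyed_savings) / len(surveyed_savings)

# Hypothetical number of formal firms affected by the reform.
registered_firms = 200_000

# Extrapolate the average saving to the economy as a whole.
aggregate_saving = avg_saving * registered_firms
print(f"Estimated aggregate private sector saving: ${aggregate_saving:,.0f}")
```

The credibility of such an estimate rests on how representative the sample is, which is why the text stresses sound survey design.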
Undertaking CBA as part of project evaluation can be useful but it is important to note that
this technique has both advantages and limitations.
Advantages:
- A powerful, widely-used tool for estimating the efficiency of programs and projects.
- It can be used to help look at the ex-post impact of an intervention: did the investment generate the benefits (savings or returns) predicted or expected?
- It can be a useful tool for ex ante assessment when deciding whether to go forward with a project: does it look as if it will generate sufficient benefits to justify going ahead?
- Where costs or benefits are paid or received over time, it is possible to calculate the time it will take for the benefits to repay the costs.

Limitations:
- CBA can only be carried out reliably using financial costs and financial benefits. If intangible items are included within the analysis, an estimated value is required for these. This inevitably brings an element of subjectivity into the process.
- It is fairly technical, requiring adequate financial and human resources.
- Requisite data for cost-benefit calculations may not be available, and projected results may be highly dependent on assumptions made.
- Results must be interpreted with care, particularly in projects where benefits are difficult to quantify.
Project cycle management (PCM) already offers basic instruments but requires
supplementary tools that give more emphasis to context and impact. In formulating a goal
and project purpose, planning takes a wider view of the project's context. Concrete results
and activities are then defined to fulfill the purpose and contribute to the goal. But in
contrast to planning, M&E focuses mostly on the outputs – i.e. the performance – of a project
(result level). Therefore, it should be supplemented by impact monitoring and assessment
(IMA), in order to restore the wider view of the context present during planning.
A project may trigger changes in its context through its outputs. But it is the stakeholders
who actually make the changes through social processes such as learning, adaptation,
rejection, etc. Therefore it is necessary that stakeholders are actively involved in the IMA
procedure from the beginning. Stakeholders bring their deep knowledge and perception of
the context into the analysis of problems and alternatives (Step 2). They provide a large
number of positive and negative impact hypotheses which may otherwise be overlooked by
the project team (Step 3), and they provide local indicators (Step 4). They become actively
involved in observation and data collection (Step 5), and changes in the context cannot be
assessed without them (Step 6). At the end of a project phase, stakeholders help identify new opportunities for improving the project's work.
Information Management
Participatory IMA can only be successful if it is transparent and if the information collected is
relevant to different stakeholder groups. For each group, information must be presented in
an appropriate and understandable form or media. Similarly, the means of communication
and dissemination of information are determined by the needs of each group. Finally,
information must be stored accessibly for everyone who is interested in it.
The elements of a context – i.e. people, institutions, resources, etc. – are highly interconnected, and not all elements and interrelations are known, even to insiders. Stakeholders
with their different agendas represent an additional degree of uncertainty and
unpredictability.
A problem within such a system (e.g. soil degradation) usually has complex causes and
consequences, and also a "solution" to it (e.g. soil conservation) will create multiple, positive
and negative side-effects. Consequently, a problem cannot be solved with a "repair-shop
mentality", i.e. tackling only the most obvious cause. Because the reactions of a system
cannot be precisely predicted, a project in a rural context cannot be expected to provide
simple solutions. It can only provide various "impulses", such as enhancing co-operation and
training stakeholders, introducing a new technology, etc. in order to stimulate partners to
move the context in a certain direction. And because it is not certain whether these impulses
will finally lead to the desired changes, there is a need to observe and assess the changes
constantly to decide which impulses to give next.
The following principles and examples can help in making a final selection of impact indicators. In preparing for impact assessment, some further important details need to be considered:
Ideally, all stakeholders agree on a common rating for all impact indicators. But it can also be interesting to carry out impact assessment separately for each stakeholder group, with each group's findings then communicated to the others.
It should be determined at what level the assessment will be made (household,
community, etc.). For example, if there is a great heterogeneity of household categories
(such as poor and wealthy households), changes in their context should be assessed
individually, or at least separately for each household category. If all households are
judged together at the community level, the result will be an average. This average,
however, may not reflect important changes in individual households. It would thus be
meaningless!
After a set of impact indicators has been selected, an initial observation (monitoring) that takes all of them into account produces the baseline. In the first years, monitoring and assessment will only include those indicators that are sensitive to short-term changes. Indicators sensitive to mid- or long-term changes will gradually be added after several years.
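The earlier point about community-level averages masking household-level changes can be illustrated numerically; the household categories and income changes below are hypothetical:

```python
# Hypothetical change in household income by category: a community-level
# average can hide opposite movements in poor vs. wealthy households.
changes = {
    "poor": [-10, -5, -8],      # poor households became worse off
    "wealthy": [20, 25, 18],    # wealthy households improved
}

# The community average looks positive...
all_households = [c for group in changes.values() for c in group]
community_avg = sum(all_households) / len(all_households)
print(f"Community average change: {community_avg:.1f}")

# ...but the per-category averages tell a very different story.
for group, values in changes.items():
    print(f"  {group}: {sum(values)/len(values):.1f}")
```

Here the community average is positive even though every poor household declined, which is precisely why assessment per household category is recommended.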
Triangulation
How good is the quality of the information obtained? If the budget for monitoring is low, not
all methods can be highly accurate. Therefore, the principle of triangulation is used, which
combines reliability with participation. This means that all individual perceptions which are
obtained through interviews and discussions must be cross-checked with the perceptions of
others and, if possible, compared with direct observations.
2. Photo-Monitoring
Photo-monitoring provides an overview of visible changes in the project context, which may
be predominantly related to biophysical and economic issues. But photos require
interpretation and further investigation of the background. This can be done through
interviews and discussions, as well as during participatory transect walks, depending on
which aspects need further clarification.
Development cooperation is intended to initiate changes, and at least some of them should
be visible after a couple of years. Rural development projects, for example, should enhance
household income and living standards, which would then be visible in terms of better
housing and clothing, more children going to school, better means of private and public
transport, etc. Similarly, if land and resource management has become more sustainable, it
should be evident in improved crop stands, controlled soil degradation, effective
conservation measures, etc. Photo-monitoring is a comprehensive method for documenting
all visual changes that can be used to cross-check individually perceived changes.
Several series of photos from specific locations and standpoints taken at different times over a longer period document how things change. Photo documentation can range from overview pictures (e.g. showing an entire slope, valley, farm, village, etc.) to close-up detail views.
The fact that interviews and discussions with people bring to light useful information for IMA
should not lead to the conclusion that direct observations and measurements by project staff
or outsiders are no longer necessary! Particularly biophysical and some economic aspects can
be directly observed in the field to cross-check the results of other methods. A participatory
transect walk will not only provide a detailed view of a farm or valley, critical sites of resource
degradation and areas of promising management. It will also help to establish connections
between those sites, i.e. flows of nutrients, water, sediment and energy. Thus regular transect walks, as well as farm and field visits, are recommended not only to maintain close contact with local stakeholders and their reality, but also because different indicators and parameters require different observation times. For example, pests and diseases are observed during the cropping season, production during harvest, soil degradation at the onset of a rainy season, water shortage during the dry season, etc.
Table 4.7 Principles and guiding questions provide assistance when adapting monitoring
methods
How did the context change in the eyes of different stakeholders? What did they learn from
these changes? In Step 4 (selection of impact indicators) stakeholders prepared an
assessment (fixing benchmarks and rating). Impact indicators can be grouped and placed
according to dimensions of sustainability (social / institutional, economic, ecological), in order
to visualize in which dimensions changes are moving towards or away from sustainability. All
units (e.g. kg, minutes, tons, etc.) have already been converted into a neutral numeric scale.
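One simple way to convert raw indicator units onto a neutral 0–5 scale is linear rescaling between stakeholder-fixed "worst" and "best" benchmarks; the indicators and benchmark values below are invented for illustration:

```python
# Converting raw indicator values (kg, minutes, etc.) into a neutral
# 0-5 scale using fixed benchmarks. All names and figures are hypothetical.

def to_scale(value, worst, best, top=5):
    """Linearly rescale 'value' so that worst -> 0 and best -> top."""
    fraction = (value - worst) / (best - worst)
    return max(0, min(top, round(top * fraction, 1)))

# indicator: (observed value, worst benchmark, best benchmark)
indicators = {
    "maize yield (kg/plot)": (300, 100, 500),
    "time to fetch water (minutes)": (45, 90, 15),  # fewer minutes is better
}
for name, (value, worst, best) in indicators.items():
    print(f"{name}: {to_scale(value, worst, best)}/5")
```

Because "worse" and "better" are defined by the benchmarks rather than by the unit, indicators where lower is better (such as minutes spent fetching water) rescale correctly by swapping the benchmarks.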
Changes in the context can be considered the result of social processes, i.e. interactions
between individuals or groups, such as learning, adaptation, communication, decision,
integration, etc. The project "only" tries to trigger or strengthen these processes with its
outputs. For example, any new technology must be utilized and adapted or rejected by
stakeholders; members of a society communicate their experience and learn from it; when
the biophysical environment or the economic situation changes, people adapt their
perception and react to it. The question for a project is whether the project outputs have
stimulated changes and social processes, and whether these processes are likely to help
reach development goals.
Follow-Up
At this stage, the next phase of project management begins. Assessment and the attribution
of changes will be used to make the necessary strategic adjustments in the project. At the
same time, the IMA system needs to be adapted as well. In order to achieve positive impacts:
Are there new stakeholder groups that should be involved during the next project phase
(Step 1)?
Is the analysis of the project context still relevant and representative (Step 2)?
1. What was the situation before the project? Evidence is needed for the project indicators chosen prior to, or at the beginning of, the project. Data collected at this time is normally referred to as ‘baseline’ data and acts as the starting benchmark for the evaluation work.
2. What has happened after the project? Evidence is needed on the output and outcome indicators chosen for the key target beneficiaries of your project. This evidence, when combined with the baseline, provides a basis on which direct comparisons can be made of the circumstances, experiences, attitudes and opinions of those to whom the project is directed, both before and after.
3. What has happened because of the project? Assessing whether impact has occurred due to the project requires some form of comparison of results ‘with’ vis-à-vis ‘without’ the project. This is usually achieved by assigning some form of control or comparison group.
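These three evidence questions can be sketched with hypothetical numbers (the days-to-register figures are invented):

```python
# Sketch of the three evidence questions: baseline, endline, and a
# with/without (counterfactual) comparison. All figures are hypothetical.
baseline_treatment = 40.0   # e.g. days to register a business, before the project
endline_treatment = 25.0    # treatment group, after the project
endline_comparison = 38.0   # comparison group without the project, same period

# Question 2: before/after change for the target group.
before_after_change = endline_treatment - baseline_treatment
# Question 3: with/without difference against the comparison group.
with_without_diff = endline_treatment - endline_comparison

print(f"Before/after change: {before_after_change}")    # -15.0
print(f"With/without difference: {with_without_diff}")  # -13.0
```

The gap between the two numbers illustrates the point of the counterfactual: part of the before/after improvement (2 days here) would apparently have happened without the project.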
Different evaluation approaches with their associated methodologies make provision for attribution and the counterfactual to a greater or lesser extent. Three of the main approaches to evaluation are given below, along with an assessment of the degree to which they help overcome these validation challenges.
A key element for ensuring that the approach is as robust as possible is the use of rigorous
sampling techniques in selecting relevant and representative subjects for the evaluation
exercise. Where possible the target groups should be selected randomly. For example, if a
business simplification intervention is trying to improve the operating conditions for
construction businesses in city ‘A’ then a sample of existing construction businesses who
have been operating in city ‘A’ would be selected for the impact evaluation rather than
printing businesses or construction businesses just starting in city ‘B’.
Post Project Judgment Evaluation for BEE Reform
Strengths: it is low cost compared to other approaches.
Limitations: this approach relies on program
A key element for ensuring that this approach is as robust as possible is the use of rigorous
sampling techniques. Ideally the target groups for the evaluation should be selected
randomly and within the parameters of the specific stakeholder population. The target groups selected for before-and-after assessment (BAA) must be:
Relevant to the project being examined: they must come from those individuals and
groups who are key stakeholders for the intervention activity being evaluated.
Taking these sampling factors into account and establishing a relevant and representative set
of individuals or groups will also help to determine the total numbers to be included in the
evaluation group. Alternatively, control and treatment groups can be identified if the simplified procedure is being rolled out as a pilot. It should be noted that the ethical and political considerations of undertaking this type of study make it challenging.
Both groups should be matched on the basis of either a few observed characteristics or a
larger number of characteristics that are known or believed to influence program outcomes.
In practice, it is rarely possible to construct a 100% perfectly matched control group, or even
to measure all possible relevant characteristics. Nevertheless, matching can be achieved for
key characteristics and this is widely regarded as a rigorous methodology when evidence is
available to show that treatment and control groups are similar enough to produce a close
approximation to the perfect match.
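A minimal sketch of matching a comparison group to treated units on a few observed characteristics, using nearest-neighbour matching without replacement; the firm sizes and ages below are hypothetical:

```python
# Nearest-neighbour matching of comparison firms to treated firms on two
# observed characteristics: (employees, firm age in years). Hypothetical data.
treated = [(12, 5), (40, 10)]
candidates = [(10, 4), (45, 12), (100, 30), (13, 6)]

def distance(a, b):
    """Euclidean distance between two characteristic vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

matches = []
for firm in treated:
    # Pick the closest remaining candidate (matching without replacement).
    best = min(candidates, key=lambda c: distance(firm, c))
    matches.append((firm, best))
    candidates.remove(best)

print(matches)
```

Real matching exercises typically standardize the characteristics first (so that employee counts do not dominate ages) or use propensity scores, but the principle of pairing each treated unit with its most similar untreated unit is the same.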
Quasi Experimental designs for BEE Reform
3. Experimental Designs
Bias can occur for a host of reasons and take many different forms. For example, sampling
bias occurs in the selection of target groups when only those who have offices within a short
distance of the one stop shop are included. As noted earlier in this section, practical attempts
are made to mitigate this bias by the hiring of external experts who are not connected with
the project and have the technical expertise to ensure that appropriate methodology design
and sampling is conducted. However some would argue that the only robust way of tackling
bias is by using experimental designs in evaluations. Randomization is a key feature of
experimental approaches. This is considered the most rigorous of the evaluation
methodologies, the ‘gold standard’ in evaluation. This is especially the case when we are
trying to estimate the effect of an intervention on a complex concept of the BEE.
In a randomized experiment, the researcher randomly assigns subjects to the group ‘exposed’ to the intervention (the “treatment group”) and to the not-exposed group (the “control group”).
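Random assignment itself is mechanically simple; a minimal sketch (the firm identifiers are hypothetical, and the seed is fixed only so the split is reproducible):

```python
# Random assignment of eligible firms to treatment and control groups.
# Firm IDs are hypothetical; the seed is fixed for reproducibility.
import random

firms = [f"firm-{i:02d}" for i in range(1, 21)]  # 20 eligible firms
random.seed(42)
random.shuffle(firms)

treatment, control = firms[:10], firms[10:]
print("Treatment:", treatment)
print("Control:", control)
```

Because chance alone decides group membership, observed and unobserved characteristics are balanced in expectation across the two groups, which is what makes this design the 'gold standard' for attribution.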
Table 4.9: Summary of key characteristics for different evaluation approaches for impact

Evaluation activity | Post Program Judgment | Before and After | Quasi-Experimental | Experimental
Post project assessment | V | V | V | V
Before project assessment | X | V | V | V
Use of target groups | V | V | V | V
Use of control groups | X | X | V | V
Use of randomly selected groups | X | X | X | V
Level of technical skills needed to design | Low | Medium | High | Very High
Cost of undertaking | Low | Medium | High | Very High

(V = feature present; X = feature absent)
Practice in M&E for projects is developing rapidly, with new techniques and tools emerging all the time. Measurement, quantification and evidence-based policy making are becoming increasingly dominant features in the approach of many countries. In summary, the issues of monitoring, evaluation and assessing impact for projects are a hive of development and debate. This material presents a resource that brings together examples from current practice in order to help raise awareness, engage interest and improve good practice across different projects.
Activity 2
Answer the following questions.
1. Discuss the factors affecting the quality of M&E information.
2. The measurement of impact is challenging, can be costly and is widely debated. Briefly
discuss some of the challenges of measuring impact.
Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.
Case Analysis
Project/programme M&E steps are guides to planning for and implementing an M&E system
for the systematic, timely and effective collection, analysis and use of project/programme
information. They are interconnected and should be viewed as part of a mutually supportive
M&E system. Develop your own six key steps for a project/programme M&E system, adapting the approaches implemented in various projects.
Activities
Activity 1
1. M&E plans are becoming standard practice – and with good reason. M&E plans serve
as critical cross-checks of the log frames, ensuring that they are realistic to field
realities. Another benefit is that they help to transfer critical knowledge to new staff and senior management, which is particularly important for projects/programmes lasting many years. A final point to remember is that it can be much more time-consuming and costly to address poor-quality data than to plan for its reliable collection and use.
2. Data analysis is not something that happens behind closed doors among statisticians,
nor should it be done by one person, e.g. the project/programme manager, the night
before a reporting deadline. Much data analysis does not require complicated
techniques and when multiple perspectives are included, greater participation can
help cross-check data accuracy and improve critical reflection, learning and utilization
of information. A problem, or solution, can look different from the perspective of a
headquarters’ office versus project/programme staff in the field versus community
members. Stakeholder involvement in analysis at all levels helps ensure M&E will be
accepted and regarded as credible. It can also help build ownership for the follow-up
and utilization of findings, conclusions and recommendations.
Activity 2
Learning Objectives:
These are all vital to getting a ‘feel’ for the nature and scope of the project, the resources involved, and whether there is any interest in or commitment to M&E by the various stakeholders of the project. This information provides the context in which M&E will be designed.
Impact assessment is not usually a program team’s responsibility per se but one that is
undertaken by external consultants and/or evaluation specialists within the organization.
However the program team is responsible for ensuring that their monitoring systems and
evaluation findings provide evidence for impact assessment, and therefore they need to be
aware of what and how impact assessment is undertaken.
The following section walks through each of the six preparation aspects.
1. Questions: Identify the key questions to be asked and answered by the M&E.
Usually the easiest way of establishing key questions is to look at the project Log Frame or
the equivalent project planning document. For example:
Monitoring questions:
1. How many procedures does it take to register a business currently and then after
reforms?
2. How many and which government authorities need to be engaged in the reform
efforts?
3. How many and which government officials need to be trained to undertake the
change needed by the reform?
Evaluation questions:
1. Have laws/regulations changed because of reform work?
2. Has the cost of registration for each process changed under reform?
Identifying the key questions to be answered in M&E is discussed in the previous Section.
The PM quick checklist
1. Does this project have a log frame?
2. What is the learning from previous Projects of this type?
3. What are the key questions I need to answer in my M&E?
4. What will I have to do to integrate the program management with the M&E cycle?
Good practice suggests that it is vital to make sure that informed decisions about the
methodology and approach are taken at the earliest stage of the project design.
The PM quick checklist
1. Can I confidently select the best M&E approach and methodology?
Quasi-experimental designs
Non-experimental designs
2. What has been learned from previous designs?
3. Can I create a robust baseline from existing sources or do I need primary data?
4. Do I know who and how to sample?
5. Do I know who to talk to for advice and guidance?
4. Data collection: Select tools and instruments for data collection and analysis
At this stage, a quick audit will show what information is available through existing
documentation. Plans about what needs to be generated through project data collection and
how best to do this can be agreed. Table 5.3 presents a simple audit sheet for doing this.
7. Resources: Identify people and other resources for undertaking the M&E.
Working through steps 1–5 will result in a clear perspective on what form and level of skills and experience will be needed for undertaking the proposed M&E work. Note that resources for dissemination of the findings and experiences are not always put in place, and there is no point in having developed all of the above if there is no opportunity to showcase the success.
The PM quick checklist
1. Which of the internal M&E team will be involved with working on this project?
2. Does there need to be any capability building undertaken for this to take place?
3. How will findings and learning be disseminated?
4. What tasks need to be undertaken by an external consultant – local or international?
5. Where will the funds come from?
The issue of cost is a valid and important concern for M&E and the Principles for Evaluation of
Development Assistance require the efficient undertaking of M&E as well as efficient project
delivery. The overall budget for and scope of M&E activities for any given project must bear
some relationship to the scale and scope of the aid intervention being assessed. Larger, more complex projects addressing large populations of businesses and/or people will usually have more extensive, and hence more expensive, M&E systems. Similarly, an innovative project may warrant more effort and resources for M&E because of having to develop new approaches.
Likewise a pilot type of activity may involve more intensive M&E work over a shorter period
of time in order to assess whether or not it should be ‘rolled out’ more widely.
If the methods, tools, and staff options chosen exceed the available budget then this will
need to be reviewed: either different, more restrictive choices have to be made on the methods and tools to be used, or more resources need to be negotiated. The budget should be
benchmarked in three ways against:
the costs of other similar M&E activities;
the M&E of similar projects; and
the ‘rules of thumb’ i.e., an upper limit of 5% of the overall project budget, except for
experimental or more substantive projects where a guide of nearer 10% is usually given.
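These rules of thumb can be checked mechanically; the budget figures below are hypothetical:

```python
# Checking an M&E budget against the rules of thumb cited above:
# an upper limit of 5% of the project budget, or nearer 10% for
# experimental or more substantive projects. Figures are hypothetical.

def me_budget_check(project_budget, me_budget, experimental=False):
    """Return the M&E share of the budget and whether it is within the guide."""
    limit = 0.10 if experimental else 0.05
    share = me_budget / project_budget
    return share, share <= limit

share, within = me_budget_check(1_000_000, 45_000)
print(f"M&E share: {share:.1%}, within guide: {within}")
```

The same budget would also pass under the 10 per cent guide for an experimental design, so the check mainly matters for conventional projects near the 5 per cent ceiling.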
Traditionally, M&E budgets have been what might be termed ‘outline budgets’, primarily concerned with evaluation activities and focused on covering the costs of end-of-project evaluation and inputs from external consultants. The increasing focus on ‘proving’ development results and
the development of more detailed and sophisticated M&E practices means there is an
imperative to put together more detailed M&E budgets and plans.
Activity 1
Answer the following questions.
1. How much money should be allocated for M&E?
2. Define the approach to be used to monitor the project costs.
What are the key tasks for implementing the M&E plan?
The project manager has specific responsibilities for implementation. These are likely to
include:
Briefing of internal PM/M&E officers on overall plan and their key role in monitoring and
evaluation work.
Selection and briefing of external consultants for periodic evaluation work.
Ensuring any baseline survey work is initiated.
Recording data for quasi-experimental methodologies and large-scale surveys can require
specialist tools and expertise. Typically a statistical package is required to store and handle
data.
Data needs to be analyzed for different groups, compared between groups and over time
periods. External expertise may be required for the analysis of data, both in terms of
guidance as to what tools should be used and related to this, how data should be recorded
and stored as well as undertaking the actual analysis once the data has been collected.
First stage baseline and mapping work. If a project involves undertaking a baseline or
mapping exercise then the findings from this work need to be analyzed and reported
quickly because they form an integral base from which the project proceeds and will
often determine what tasks will be progressed and which will not.
Pilot phases or pilot work. A project may involve undertaking a pilot phase, where
something will be tested out with a group or a particular locality before the project is
‘rolled out’ further. Again it is important that the analysis of M&E data from this pilot is
undertaken thoroughly and quickly, as the findings from this are needed to inform the
progression of the project.
Mid-term or periodic evaluative reviews - key findings from periodic evaluation work
usually from the midterm timeframe of the project onwards need to be analyzed and
reported in a timely manner as they illustrate whether the outputs of the project are
being achieved or not and whether process issues are progressing. The findings from
these mid-term evaluations inform the ongoing validity of the M&E plan for assessing
outcomes and impact for the project. If initial findings show that the project is not
achieving and or is achieving in an unexpected way then the M&E plan may need to be
reviewed and updated for the end of project evaluation activities. This analysis of
project/program results is based on objectives and indicators, results hypotheses and
results chains, data and information obtained from the results oriented monitoring.
End of project evaluation. This is usually the most substantive analysis as it is bringing all
of the above together, as well as undertaking end of project evaluation data collection
analysis and reporting. This is the key time of activity for M&E work if findings are to be
processed and reported in a timely manner after the end of the project. Therefore
resources need to have been in place and tasks managed well during this period. This
evaluation will always involve external people – colleagues from the central evaluation
The Beneficiaries - those whose lives were to be made better by the project. Is the market
now a better place for doing business? They could be the private sector and the enterprises
themselves, or be reached through their chambers and trade associations.
The Implementers – those who are involved in managing and implementing the day-to-day
activities that have been under reform. Can targets now be met more effectively and
efficiently? They would be primarily government officers, compliance agency staff and
business support agencies to a lesser extent.
Other Interested parties – what do the findings tell other groups about the project? Is this a
good place to invest in? Is setting up a business straightforward? How long does it take to
register a business now? The findings may be of interest to researchers and businesses.
Activity 2
Answer the following questions.
1. Reporting can be costly in both time and resources and should not become an end in
itself, but serve a well-planned purpose. Therefore, it is critical to anticipate and carefully
plan for reporting. Briefly discuss key reporting criteria to help ensure its usability.
2. An essential condition for well-formulated recommendations and action planning is a clear understanding and use of data analysis terms. Differentiate among terms such as findings, conclusions, recommendations and actions.
3. As with the reporting formats themselves, how reporting information is disseminated will
largely depend on the user and purpose of information. There are several media to share
information but describe some of them.
4. The overall purpose of the M&E system is to provide useful information. Therefore,
information utilization should not be an afterthought, but a central planning
consideration. There are many factors that determine the use of information. Identify and
describe stakeholder informational needs.
M&E should be fully integrated into project cycle and project management systems from
the start.
PMs must have an integral role in designing and planning M&E. PMs may not be
responsible for all M&E tasks.
Identify the key questions to be asked and answered by the M&E early in the process.
Milestones and operational plans should be developed in a participatory way with
representatives of the partner organizations;
Effective communication can build support for the process of change, accelerate
acceptance and contribute to the sustainability of a reform.
Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.
1. An M&E plan is a table that builds upon a project/programme’s log frame to detail key M&E requirements for each indicator and assumption. The M&E plan is sometimes called by different names by various users, such as an “indicator planning matrix” or a “data collection
plan”. While the names (and formats) may vary, the overall function remains the same –
to detail the M&E requirements for each indicator and assumption. Develop your own
M&E activity planning table for a particular project.
2. Most M&E reporting will be undertaken through the organization’s project management
systems. However, the timing of reporting should be planned. Develop a Reporting
schedule for any virtual project.
3. Different stakeholders provide guidance for project/programme reporting format. The
purpose of the reporting format is to emphasize key information to inform
project/programme management for quality performance and accountability. As a
stakeholder of a project, propose any project/programme management report format.
4. It is important that report formats and content are appropriate for their intended users.
How information is presented during the reporting stage can play a key role in how well
it is understood and put to use. Briefly discuss some practical tips to help make your
written reports more effective.
Activities
Activity 1
1. There is no set formula for determining the budget for a project/programme’s M&E
system. During initial planning, it can be difficult to determine this until more careful
attention is given to specific M&E functions. However, an industry standard is that
between 3 and 10 per cent of a project/programme’s budget be allocated to M&E. A
general rule of thumb is that the M&E budget should not be so small as to compromise
the accuracy and credibility of results, but neither should it divert project/programme
resources to the extent that programming is impaired. Sometimes certain M&E
functions, especially monitoring, are included as part of the project/programme’s
activities. Other functions, such as independent evaluations, should be specifically
budgeted.
2. Monitoring the project costs enables an assessment of whether the project is
operating within the approved budget. One of the most common methods of
monitoring project costs is simply to compare the amount spent on producing a deliverable at a point in time with the budgeted spend at the same point. This
however makes the implicit assumption that production of the deliverable is in line
with the schedule. This potential distortion can be overcome through the use of a technique called Earned Value, which takes a three-way view of planned achievement and cost against actual achievement and cost. This technique is discussed in the Project
Cost Management. The Earned Value approach may provide the Project Office and
Project Manager advance warning that an individual deliverable may not be produced
within the expected budget or that the project as a whole may not deliver within an
agreed budget. Typically, there will be a pre-defined tolerance, within which there is
no need to escalate or formally report on a budget variation. An alternative approach is to adopt a process whereby the producer of each deliverable periodically updates the estimate of the time and/or effort required to complete the activity (often referred to as the ‘estimate to complete’).
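The three-way Earned Value view can be sketched as follows; all figures are hypothetical:

```python
# Minimal Earned Value sketch. All figures are hypothetical.
# PV = planned value (budgeted cost of work scheduled)
# EV = earned value  (budgeted cost of work actually performed)
# AC = actual cost   (actual cost of work performed)
pv, ev, ac = 50_000.0, 40_000.0, 45_000.0

cost_variance = ev - ac        # negative -> over budget
schedule_variance = ev - pv    # negative -> behind schedule
cpi = ev / ac                  # cost performance index (<1 is unfavourable)
spi = ev / pv                  # schedule performance index (<1 is unfavourable)

print(f"CV={cost_variance:.0f}  SV={schedule_variance:.0f}  "
      f"CPI={cpi:.2f}  SPI={spi:.2f}")
```

In this example the project has earned less value than planned and spent more than that value cost in the budget, so both indices fall below 1: exactly the advance warning the text describes.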
Activity 2