Project Monitoring and Evaluation - MAPM 709

This document provides an introduction to a course on project monitoring and evaluation. It discusses key concepts in monitoring and evaluation including results-based management, the differences between monitoring and evaluation, and the importance of monitoring and evaluation in the project cycle. It also covers establishing baselines, challenges of monitoring and evaluation, and standards and ethics.

Addis Ababa University

School of Commerce
Supported Distance Education Program

Project Monitoring and Evaluation Module

(MA Degree in Project Management)


April, 2016
Addis Ababa, Ethiopia



Module Introduction
Hello dear learner! Welcome to the course “Project Monitoring and Evaluation”. The course requires 150 total study hours. For a better understanding of the course, it is mandatory that you have preliminary knowledge of “Project Cycle Management”. Good planning, monitoring and evaluation enhance an organization’s contribution by establishing clear links between past, present and future initiatives and results. Monitoring and evaluation can be effective tools to enhance the quality of project planning and management. Monitoring helps project managers and staff understand whether projects are progressing on schedule and ensures that project inputs, activities, outputs and external factors proceed as planned. Evaluation helps planners and managers assess the extent to which projects have achieved the objectives set forth in the project documents.

Monitoring and evaluation can help an organization extract relevant information from past and ongoing activities that can be used as the basis for programmatic improvement, reorientation and future planning. Without effective planning, monitoring and evaluation, it would be impossible to judge whether work is going in the right direction, whether progress and success can be claimed, and how future efforts might be improved. The purpose of this course is to promote a common understanding and reliable practice of monitoring and evaluation (M&E) for a project/program. It familiarizes learners with various project monitoring and evaluation systems and tools that focus on results in international development. The course also offers learners both a conceptual framework and practical skill development. The module covers the following topics: Introduction to Monitoring and Evaluation; Frameworks and Indicators for M&E; Conducting Baselines and Collecting Data; Evaluation and Impact Assessment; and finally, The Project Cycle of M&E.

Dear learner! Please read the learning objectives of each unit carefully before you proceed to study the content. As you read each unit, give attention to every topic and do not proceed to the next topic before you are fully aware of all the points discussed. Underlining or highlighting important points can enhance your understanding.



To help you evaluate how much you have understood each topic and relate what you have learned to real-world situations, several Activities and Self Assessment Questions (SAQs) are given in each unit. Answers to both are provided at the end of each unit. Please attempt them before you look at the feedback; this approach helps you gain a complete understanding of the subject matter. If you find a question difficult to answer, please refer back to the topic related to that question.

As a distance learner, you will be evaluated through Tutor Marked Assignments (TMAs) and
Examination. TMAs are provided along with the Module. They should be sent to the tutor on
time to be marked and included in your final grade.

Indeed, this module is self-contained for understanding the course “Project Monitoring and Evaluation”. However, for further reading and understanding, it is advisable to use the texts and reference materials listed at the end of the module.

Course Objectives:
After successfully completing this course, you will be able to:

1. Describe the basic thoughts behind Results-based management (RBM);
2. Differentiate between Project Monitoring and Evaluation;
3. Identify the frameworks and systems for the planning and management of projects;
4. Establish baselines against which change can be measured;
5. Identify alternative Evaluation techniques;
6. Describe the objectives and approaches of assessing impact;
7. Apply the Project Cycle of Monitoring and Evaluation.



Table of Contents

Course Introduction
Table of Contents
ACRONYMS
1 Introduction to Monitoring and Evaluation
  1.1 Results-based management (RBM)
  1.2 What is Monitoring?
  1.3 What is Evaluation?
  1.4 Key terms and concepts in Monitoring and Evaluation
  1.5 What is the purpose of Monitoring and Evaluation?
  1.6 Why is Monitoring and Evaluation important?
  1.7 Monitoring and Evaluation and the Project/Programme cycle
  1.8 Baseline and end line studies
  1.9 Comparing Monitoring, Evaluation, Reviews and Audits
  1.10 Monitoring and Evaluation Standards and Ethics
  1.11 Minimize bias and error
  1.12 The challenges of Monitoring and Evaluation
  Summary
  Self Assessment Questions-1
  Answer Key to Activities and Self Assessment Questions
2 Frameworks and Indicators for Monitoring and Evaluation
  2.1 The Logical Framework approach
  2.2 Results-oriented approaches
  2.3 Understanding indicators
  2.4 Selecting indicators and setting targets
  2.5 Using comparable and core indicators
  Summary
  Self Assessment Questions-2
  Answer Key to Activities and Self Assessment Questions
3 Baselines and Data for Monitoring and Evaluation
  3.1 Establishing baselines
  3.2 Accessing and using secondary data
  3.3 Collecting and using primary data
  Summary
  Self Assessment Questions-3
  Answer Key to Activities and Self Assessment Questions
4 Monitoring, Evaluation and Impact Assessment
  4.1 Planning an evaluation
  4.2 Evaluation techniques
  4.3 Impact Monitoring & Assessment
  4.4 Forthcoming developments in M&E
  Summary
  Self Assessment Questions-4
  Answer Key to Activities and Self Assessment Questions
5 The Project Cycle of Monitoring and Evaluation
  5.1 Introduction
  5.2 Agreeing the starting point
  5.3 Identifying the approach and securing a budget
  5.4 Implementing the M&E Plan
  5.5 Analyze M&E Findings
  5.6 Communicating M&E Findings
  Summary
  Self Assessment Questions-5
  Answer Key to Activities and Self Assessment Questions
Bibliography



ACRONYMS
BAA: Before and After Assessments
BE: Business Environment
BEE: Business Enabling Environment
BEEP: Business Environment and Enterprise Surveys
CBA: Cost-Benefit Analysis
DB: Doing Business
DFID: Department for International Development
FG: Focus Group
GTZ: Deutsche Gesellschaft für Technische Zusammenarbeit
IA: Impact Assessment
IFC: International Finance Corporation
IIAA: Integrated Impact Assessment Approach
LF: Logical Framework or Log Frame
LFA: Logical Framework Approach
M&E: Monitoring and Evaluation
MSME: Micro, Small and Medium Enterprise
OECD: Organization for Economic Co-operation and Development
OECD/DAC: OECD Development Assistance Committee
OVI: Objectively Verifiable Indicators
PCM: Project/Program Cycle Management
PED: Planning and Evaluation Department
PLM: Project/Program Logic Model
PMER: Planning, Monitoring, Evaluation and Reporting
PPD: Public Private Dialogue
PPJ: Post Project/Program Judgment
PSD: Private Sector Development
PSS: Private Sector Savings
QED: Quasi-Experimental Design
QQT: Quantity, Quality and Time
RBM: Results-Based Management
RTE: Real-Time Evaluation
SoV: Sources of Verification



Session 1
This session consists of two units.

Unit 1: Introduction to Monitoring and Evaluation
Unit 2: Frameworks and Indicators for Monitoring and Evaluation

Unit 1
Introduction to Monitoring and Evaluation
Introduction
Hello dear learner! This is the first unit of the module, titled ‘Introduction to Monitoring and Evaluation’. Monitoring and evaluation is the systematic collection and analysis of information to enable managers and key stakeholders to make informed decisions, uphold existing practices, policies and principles, and improve the performance of their projects. Monitoring and evaluation is about feedback from implementation; its ultimate purpose is change for the better. This unit provides basic information and practical guidelines on project monitoring and evaluation in order to enhance your understanding of the subject.

Learning Objectives:

At the end of this unit lesson, you will be able to:

1. Explain the basic outlook behind Results-based management (RBM);
2. Differentiate between Project Monitoring and Evaluation;
3. Define a number of widely recognized concepts and terms in the actual practice of Monitoring and Evaluation;
4. State the purposes of Monitoring and Evaluation;
5. Identify Monitoring and Evaluation Standards and Ethics;
6. Recognize the challenges of Monitoring and Evaluation.

1.1. Results-based management (RBM)


RBM is an approach to project/programme management based on clearly defined results, and the methodologies and tools to measure and achieve them. RBM supports better performance and greater accountability by applying a clear, logical framework to plan, manage and measure an intervention with a focus on the results you want to achieve. By identifying the intended results of a project/programme in advance, and how their progress can be measured, we can better manage a project/programme and determine whether a difference has genuinely been made for the people concerned. Monitoring and evaluation (M&E) is a critical part of RBM. It forms the basis for clear and accurate reporting on the results achieved by an intervention (project or programme). In this way, reporting is no longer a headache but becomes an opportunity for critical analysis and organizational learning, informing decision-making and impact assessment.

Fig. 1.1: Results-Based Management Cycle

Good RBM is an ongoing process. This means that there is constant feedback, learning and
improving. Existing plans are regularly modified based on the lessons learned through
monitoring and evaluation, and future plans are developed based on these lessons.

Monitoring is also an ongoing process. The lessons from monitoring are discussed periodically and used to inform actions and decisions. Evaluations should be done for programmatic improvements while the programme is still ongoing, and should also inform the planning of new programmes. This ongoing process of doing, learning and improving is what is referred to as the RBM life-cycle approach, depicted in Figure 1.1.

RBM is concerned with learning, risk management and accountability. Learning not only
helps improve results from existing programmes and projects, but also enhances the capacity
of the organization and individuals to make better decisions in the future and improves the
formulation of future programmes and projects. Since there are no perfect plans, it is
essential that managers, staff and stakeholders learn from the successes and failures of each
programme or project.

There are many risks and opportunities involved in pursuing development results. RBM
systems and tools should help promote awareness of these risks and opportunities, and
provide managers, staff, stakeholders and partners with the tools to mitigate risks or pursue
opportunities.

RBM practices and systems are most effective when they are accompanied by clear
accountability arrangements and appropriate incentives that promote desired behaviour. In
other words, RBM should not be seen simply in terms of developing systems and tools to
plan, monitor and evaluate results. It must also include effective measures for promoting a
culture of results orientation and ensuring that persons are accountable for both the results
achieved and their actions and behavior.

The main objectives of good planning, monitoring and evaluation—that is, RBM— are to:
 Support substantive accountability to governments, organizations, beneficiaries,
donors, other partners and stakeholders;
 Prompt corrective action;
 Ensure informed decision making;
 Promote risk management;
 Enhance organizational and individual learning.

1.2. What is Monitoring?



Monitoring is the routine collection and analysis of information to track progress against set
plans and check compliance to established standards. It helps identify trends and patterns,
adapt strategies and inform decisions for project/programme management.
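As a rough illustration of this routine tracking, the sketch below compares delivered quantities against planned targets and flags activities that fall behind a chosen threshold. The activity names, figures and the 90% threshold are hypothetical examples, not from the module.

```python
# Illustrative sketch: routine monitoring compares actual delivery
# against planned targets and flags items that are off track.
# All activity names, figures and the threshold are hypothetical.

def monitor(planned, actual, threshold=0.9):
    """Return {activity: (status, percent of target delivered)}."""
    report = {}
    for activity, target in planned.items():
        delivered = actual.get(activity, 0)
        ratio = delivered / target if target else 0.0
        status = "on track" if ratio >= threshold else "off track"
        report[activity] = (status, round(ratio * 100, 1))
    return report

planned = {"training sessions": 20, "wells constructed": 10}
actual = {"training sessions": 19, "wells constructed": 6}

for activity, (status, pct) in monitor(planned, actual).items():
    print(f"{activity}: {pct}% of target -> {status}")
```

A report like this answers the monitoring question “did we deliver?” but, as the text notes, not the evaluation question of why an activity fell behind.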

Fig.1.2. Monitoring questions and the log frame

Fig. 1.2 summarizes key monitoring questions as they relate to the log frame’s objectives. Note that they focus more on the lower-level objectives – inputs, activities and (to a certain extent) outputs. This is because the outcomes and goal usually involve more challenging changes to measure (typically in knowledge, attitudes and practices/behaviours), requiring a longer time frame and the more focused assessment provided by evaluations.

A project/programme usually monitors a variety of things according to its specific informational needs. Table 1.1 provides a summary of the different types of monitoring commonly found in a project/programme monitoring system. It is important to remember that these monitoring types often occur simultaneously as part of an overall monitoring system.

Table 1.1 Common Types of Monitoring



Results monitoring tracks effects and impacts. This is where monitoring merges with evaluation to determine whether the project/programme is on target towards its intended results (outputs, outcomes, impact) and whether there may be any unintended impact (positive or negative). For example, a psychosocial project may monitor that its community activities achieve the outputs that contribute to community resilience and the ability to recover from a disaster.
Process (activity) monitoring tracks the use of inputs and resources, the progress of
activities and the delivery of outputs. It examines how activities are delivered – the
efficiency in time and resources. It is often conducted in conjunction with compliance
monitoring and feeds into the evaluation of impact. For example, a water and sanitation
project may monitor that targeted households receive septic systems according to
schedule.
Compliance monitoring ensures compliance with donor regulations and expected results,
grant and contract requirements, local governmental regulations and laws, and ethical
standards. For example, a shelter project may monitor that shelters adhere to agreed
national and international safety standards in construction.
Context (situation) monitoring tracks the setting in which the project/programme
operates, especially as it affects identified risks and assumptions, but also any unexpected
considerations that may arise. It includes the field as well as the larger political,
institutional, funding, and policy context that affect the project/programme. For example, a
project in a conflict-prone area may monitor potential fighting that could not only affect
project success but endanger project staff and volunteers.
Beneficiary monitoring tracks beneficiary perceptions of a project/programme. It includes
beneficiary satisfaction or complaints with the project/programme, including their
participation, treatment, access to resources and their overall experience of change.
Sometimes referred to as beneficiary contact monitoring (BCM), it often includes a
stakeholder complaints and feedback mechanism (see Section 2.2.8). It should take
account of different population groups (see Section 1.9), as well as the perceptions of
indirect beneficiaries (e.g. community members not directly receiving a good or service).
For example, a cash-for-work programme assisting community members after a natural
disaster may monitor how they feel about the selection of programme participants, the
payment of participants and the contribution the programme is making to the community
(e.g. are these equitable?).
Financial monitoring accounts for costs by input and activity within predefined categories of expenditure. It is often conducted in conjunction with compliance and process monitoring. For example, a livelihoods project implementing a series of micro-enterprises may monitor the money awarded and repaid, and ensure implementation is according to the budget and time frame.

Organizational monitoring tracks the sustainability, institutional development and capacity
building in the project/programme and with its partners. It is often done in conjunction with
the monitoring processes of the larger, implementing organization. For example, a National
Society’s headquarters may use organizational monitoring to track communication and
collaboration in project implementation among its branches and chapters.

As we will discuss later, there are various processes and tools to assist with the different
types of monitoring, which generally involve obtaining, analyzing and reporting on
monitoring data. Specific processes and tools may vary according to monitoring need, but
there are some overall best practices, which are summarized in the following box.
Monitoring best practices
 Monitoring data should be well-focused to specific audiences and uses (only what is
necessary and sufficient).
 Monitoring should be systematic, based upon predetermined indicators and
assumptions.
 Monitoring should also look for unanticipated changes with the project/ programme
and its context, including any changes in project/programme assumptions/risks; this
information should be used to adjust project/programme implementation plans.
 Monitoring needs to be timely, so information can be readily used to inform
project/programme implementation.
 Whenever possible, monitoring should be participatory, involving key stakeholders –
this can not only reduce costs but can build understanding and ownership.
 Monitoring information is not only for project/programme management but should be
shared when possible with beneficiaries, donors and any other relevant stakeholders.

1.3. What is Evaluation?


This material adopts the OECD/DAC definition of evaluation as “an assessment, as systematic
and objective as possible, of an ongoing or completed project, programme or policy, its
design, implementation and results. The aim is to determine the relevance and fulfillment of
objectives, developmental efficiency, effectiveness, impact and sustainability.” An evaluation
should provide information that is credible and useful, enabling the incorporation of lessons
learned into the decision-making process of both recipients and fund suppliers.



Evaluations involve identifying and reflecting upon the effects of what has been done, and
judging their worth. Their findings allow project/programme managers, beneficiaries,
partners, donors and other project/programme stakeholders to learn from the experience
and improve future interventions.

Fig 1.3: Evaluation questions and the log frame

Fig. 1.3 summarizes key evaluation questions as they relate to the log frame’s objectives. Evaluation questions tend to focus more on how things have been performed and what difference has been made.

It is best to involve key stakeholders as much as possible in the evaluation process. This
includes National Society staff and volunteers, community members, local authorities,
partners, donors, etc. Participation helps to ensure different perspectives are taken into
account, and it reinforces learning from and ownership of the evaluation findings.

There is a range of evaluation types, which can be categorized in a variety of ways. Ultimately, the approach and method used in an evaluation are determined by its audience and purpose. Table 1.2 summarizes key evaluation types according to three general categories. It is important to remember that the categories and types of evaluation are not mutually exclusive and are often used in combination. For instance, a final external evaluation is a type of summative evaluation and may use participatory approaches.

Table 1.2: Summary of Major Evaluation Types


Approach 1: Evaluation Timing
Formative evaluations occur during project/programme implementation to improve
performance and assess compliance.

Summative evaluations occur at the end of project/programme implementation to assess effectiveness and impact.

Midterm evaluations are formative in purpose and occur midway through implementation.
For secretariat-funded projects/ programmes that run for longer than 24 months, some
type of midterm assessment, evaluation or review is required. Typically, this does not need
to be independent or external, but may be according to specific assessment needs.

Final evaluations are summative in purpose and are conducted (often externally) at the
completion of project/programme implementation to assess how well the project/
programme achieved its intended objectives. All secretariat-funded projects/programmes
should have some form of final assessment, whether it is internal or external.

Ex-post evaluations are conducted some time after implementation to assess long-term
impact and sustainability.
Approach 2: Who conducts the evaluation
Internal or self-evaluations are conducted by those responsible for implementing a
project/programme. They can be less expensive than external evaluations and help build
staff capacity and ownership. However, they may lack credibility with certain stakeholders,
such as donors, as they are perceived as more subjective (biased or one-sided). These tend
to be focused on learning lessons rather than demonstrating accountability.

External or independent evaluations are conducted by evaluator(s) outside of the implementing team, lending them a degree of objectivity and often technical expertise. These tend to focus on accountability. Secretariat-funded interventions exceeding 1,000,000 Swiss francs require an independent final evaluation; if undertaken by the project/programme management, it should be reviewed by the secretariat’s planning and evaluation department (PED), or by some other independent quality assurance mechanism approved by the PED.

Participatory evaluations are conducted with the beneficiaries and other key stakeholders,
and can be empowering, building their capacity, ownership and support.

Joint evaluations are conducted collaboratively by more than one implementing partner, and can help build consensus at different levels, credibility and joint support.
Approach 3: Evaluation Technicality or Methodology
Real-time evaluations (RTEs) are undertaken during project/programme implementation to
provide immediate feedback for modifications to improve ongoing implementation.
Emphasis is on immediate lesson learning over impact evaluation or accountability. RTEs
are particularly useful during emergency operations, and are required in the first three
months of secretariat emergency operations that meet any of the following criteria: more
than nine months in length; plan to reach 100,000 people or more; the emergency appeal is
greater than 10,000,000 Swiss francs; more than ten National Societies are operational
with staff in the field.

Meta-evaluations are used to assess the evaluation process itself. Key uses of meta-evaluations include: taking inventory of evaluations to inform the selection of future evaluations; combining evaluation results; checking compliance with evaluation policy and good practices; and assessing how well evaluations are disseminated and utilized for organizational learning and change.

Thematic evaluations focus on one theme, such as gender or environment, typically across
a number of projects, programmes or the whole organization.

Cluster/sector evaluations focus on a set of related activities, projects or programmes, typically across sites and implemented by multiple organizations (e.g. National Societies, the United Nations and NGOs).

Impact evaluations focus on the effect of a project/programme rather than on its management and delivery. Therefore, they typically occur after project/programme completion, during a final evaluation or an ex-post evaluation. However, for longer projects/programmes, impact may also be measured during implementation when feasible.

Activity 1
Answer the following questions.



1. Compare and contrast monitoring and evaluation.
2. Differentiate among the different types of monitoring.
3. Differentiate among the various types of evaluation.

1.4. Key terms and concepts in Monitoring and Evaluation


What are the key terms for M&E?
When discussing the actual practice of M&E, there are a number of widely recognized concepts and terms. These terms have precise meanings, yet they are often used much more loosely in everyday language. Terminology and definitions are open to variation and debate, and their specific use can vary from one development organization to another.

Table 1.3 provides some key terms and their generally accepted definitions, which is how the terms will be used throughout this material. These, combined with the additional information provided, should give a good working knowledge of current M&E practice.

Table 1.3: Key M&E Terminology

Inputs – The resources that will be used – including people, money, expertise, technology and information – to deliver the activities/tasks of the project/program. It is usual to monitor inputs and activities, providing information for analysis and ultimately data for an evaluation.

Activities or tasks – The actions taken or the work performed as part of an intervention; for example, the provision of technical advice, training sessions, or the facilitation of meetings or events. Activities utilize inputs, such as funds, technical assistance and other types of resources, to produce specific outputs. Essentially, activities or tasks are what the project will ‘do’.

Outputs – The immediate results derived from the activities of the project. These outputs might be experienced directly by those targeted by the intervention (e.g. training advice) or indirectly through outputs like reports, mapping of a situation, etc.

Outcomes – The short-term and medium-term results of an intervention’s outputs, usually requiring the collective effort of partners. Outcomes represent changes in conditions that occur between the completion of outputs and the achievement of impact. Reductions in the number of procedures or the cost of registering a business are outcomes of a business simplification project. It is usual to evaluate outcomes, providing information for analysis and ultimately data for impact assessment.

Impacts – Positive and negative, long-term results/benefits for identifiable population groups produced by an intervention, directly or indirectly, intended or unintended. In the case of BEE interventions, impact would include changes such as higher productivity, greater income and investment levels, and economic growth.

Impact Assessment – Seeks to capture impacts that have occurred and, ideally, to differentiate those changes that are attributable to the project/intervention from other external factors. It can take place throughout the project/program, but usually towards or after the end, and is undertaken by those not involved in the project implementation.

Baselines – A set of factors or indicators used to describe the situation prior to an intervention, acting as a reference point against which progress can be assessed or comparisons made. These are sometimes referred to as benchmarks.

Indicators (or performance indicators, or key performance indicators, KPIs) – A quantitative and/or qualitative variable that allows the measurement and verification of changes produced by an intervention relative to what was planned. A typical outcome indicator for business simplification is the ‘change in the number of procedures needed to register a business’.

Targets – Indicators are the means by which change will be measured; targets are the definite ends or amounts to be achieved. A target is an explicit statement of the desired and measurable results expected for an indicator at a specified point in time. Targets should be expressed in terms of quantity, quality and time.

Milestones – Significant points in the lifetime of a project; particular points by which specified progress should have been made.
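The relationship among a baseline, an indicator and a target can be illustrated with a short sketch. The `Indicator` class and the business-registration figures below are invented for illustration only; they are not part of the module.

```python
# Illustrative sketch: an indicator measured against a baseline and a
# target, as in the terminology above. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float   # value describing the situation before the intervention
    target: float     # desired value at a specified point in time
    current: float    # latest measured value

    def change(self) -> float:
        """Change achieved so far, relative to the baseline."""
        return self.current - self.baseline

    def progress(self) -> float:
        """Share of the planned baseline-to-target change achieved (0..1)."""
        planned = self.target - self.baseline
        return self.change() / planned if planned else 0.0

# Hypothetical outcome indicator for a business simplification project:
procedures = Indicator("procedures to register a business",
                       baseline=12, target=6, current=9)
print(f"change: {procedures.change()}")          # -3 procedures
print(f"progress: {procedures.progress():.0%}")  # 50% of planned reduction
```

Here the planned change is a reduction from 12 to 6 procedures; having reached 9, the project has achieved half of the planned change. Without the baseline, neither the change nor the progress toward the target could be computed.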

1.5. What is the purpose of Monitoring and Evaluation?


Monitoring and evaluation provides government officials, development managers, the private sector and civil society with better means for learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders. Although evaluation is distinguished from monitoring, the two are in fact interdependent (Table 1.4). Monitoring presents what has been delivered, and evaluation answers the question “what has happened as a result of the intervention?” Impact evaluation is a particular aspect of evaluation, focusing on the ultimate benefits of an intervention.
Table 1.4: What are Monitoring, Evaluation and Impact Evaluation?

Monitoring – the regular, systematic collection and analysis of information to track the progress of program implementation against pre-set targets and objectives. It asks: “Did we deliver?” Monitoring:
 Clarifies program objectives.
 Links activities and their resources to objectives.
 Translates objectives into performance indicators and sets targets.
 Routinely collects data on these indicators and compares actual results with targets.
 Reports progress to managers and alerts them to problems.

Evaluation – an objective assessment of an ongoing or recently completed project, program or policy, and of its design, implementation and results. It asks: “What has happened as a result?” Evaluation:
 Analyzes why intended results were or were not achieved.
 Assesses the specific causal contributions of activities to results.
 Examines the implementation process.
 Explores unintended results.
 Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement.

Impact assessment – assesses what has happened as a result of the intervention, and what may have happened without it, from a future point in time. It asks: “Have we made a difference and achieved our goal?” Impact assessment:
 Seeks to capture and isolate the outcomes that are attributable to (or caused by) the program.
 Reviews all foregoing M&E activities, processes, reports and analysis.
 Provides an in-depth understanding of the various causal relationships and the mechanisms through which they operate.
 May seek to synthesize, compare and contrast a range of interventions in a region, time frame or sector.

Monitoring gives information on where a policy, program or project is at any given time (or
over time) relative to respective targets and outcomes. Monitoring focuses in particular on
efficiency and the use of resources. While monitoring provides records of activities and
results, and signals problems to be remedied along the way, it is descriptive and may not be
able to explain why a particular problem has arisen, or why a particular outcome has occurred
or failed to occur.
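To make the idea of tracking progress against pre-set targets concrete, the following is a minimal illustrative sketch (not taken from this module; the indicator names and figures are invented) of how routine monitoring data might be compared with targets so that managers are alerted to problems:

```python
# Hypothetical indicators and figures, for illustration only.
targets = {"households_reached": 500, "facilitators_trained": 100, "wells_built": 20}
actuals = {"households_reached": 430, "facilitators_trained": 100, "wells_built": 12}

def flag_shortfalls(targets, actuals, threshold=0.9):
    """Return indicators whose achievement rate falls below the threshold."""
    flags = {}
    for name, target in targets.items():
        rate = actuals[name] / target
        if rate < threshold:
            flags[name] = round(rate, 2)
    return flags

# Alert managers to the indicators that are off track.
print(flag_shortfalls(targets, actuals))  # {'households_reached': 0.86, 'wells_built': 0.6}
```

Monitoring of this kind is descriptive: it shows which targets are behind, but, as noted above, it cannot by itself explain why the shortfall occurred.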

Evaluation deals with questions of cause and effect. It assesses or estimates the value,
worth or impact of an intervention, and is typically done on a periodic basis – perhaps annually
or at the end of a phase of a project or program. Evaluation looks at the relevance,
effectiveness, efficiency and sustainability of an intervention. It provides evidence of why
targets and outcomes are or are not being achieved and addresses issues of causality.

Impact Assessment is an aspect of evaluation that focuses on ultimate benefits. It sets out to
assess what has happened as a result of the intervention and what may have happened
without it. Where possible, impact assessment tries to distinguish changes that can be
attributed to the program from those driven by other external factors, as well as examining
unintended changes alongside those intended.

1.6. Why is Monitoring and Evaluation important?


Why should we undertake M&E?
Monitoring and evaluating program performance enables the improved management of the
outputs and outcomes while encouraging the allocation of effort and resources in the
direction where it will have the greatest impact. M&E can play a crucial role in keeping
projects on track, creating the basis for reassessing priorities and creating an evidence base
for current and future projects through the systematic collection and analysis of information
on the implementation of a project.

Until recently, M&E has primarily met donor needs for proving or legitimizing the purpose of
the program by demonstrating the effective use of resources. The LEGITIMIZATION function
demonstrates whether reforms are having the desired effect in order to be accountable to
clients, beneficiaries, development partners and taxpayers for the use of resources. M&E as a
legitimization function – PROVING.

From an impact perspective, it is often necessary to ‘prove impact’ in order to make resource
allocation decisions and to ensure the most effective use of limited resources towards the
goal of increasing prosperity in the developing world. Consequently, there is a need for rigor
in the means of assessing results that can help reveal causality i.e., have programs resulted in
sustainable gains in welfare? Have they reinforced the development of efficient and
transparent markets? Have they increased economic growth and reduced poverty?
Answering these questions is extremely challenging, since outcomes are open to the influence
of a wide range of factors. However, efforts are being made to adopt more rigorous practices,
including the use of systematic, quantitative approaches and analysis.

There is a growing awareness of the need for practitioners to conduct their own evaluation
activities in order to increase understanding of development results, which in turn leads to
increased learning and improvement within their organization. This LEARNING function
enhances organizational and development learning to increase the understanding of why
particular interventions have been more or less successful.

Additionally, this understanding informs decision making and potentially improves
performance. M&E as a learning function – IMPROVING.

In addition to the benefits gained from undertaking M&E, there are other benefits to be
derived from the way in which M&E activities are undertaken. Using a strong participatory
approach to M&E, with the active engagement of government officials, helps to build,
strengthen and embed local M&E capability and oversight processes. This helps to build a
credible ongoing evaluation capacity in country.

A well-functioning M&E system is a critical part of good project/programme management and
accountability. Timely and reliable M&E provides information to:

1. Support project/programme implementation with accurate, evidence-based reporting
that informs management and decision-making to guide and improve
project/programme performance.
2. Contribute to organizational learning and knowledge sharing by reflecting upon and
sharing experiences and lessons so that we can gain the full benefit from what we do
and how we do it.
3. Uphold accountability and compliance by demonstrating whether or not our work has
been carried out as agreed and in compliance with established standards and with any
other donor requirements.
4. Provide opportunities for stakeholder feedback, especially beneficiaries, to provide input
into and perceptions of our work, modeling openness to criticism, and willingness to
learn from experiences and to adapt to changing needs.
5. Promote and celebrate our work by highlighting our accomplishments and achievements,
building morale and contributing to resource mobilization.

1.7. Monitoring and Evaluation and the Project/ Programme cycle


Fig 1.4 provides an overview of the usual stages and key activities in project/programme
planning, monitoring, evaluation and reporting (PMER). We write “usual” stages because
there is no one generic project/programme cycle, as each project/programme ultimately
varies according to the local context and need. This is especially true of emergency
operations, for which project/programme implementation may begin immediately, before
the typical assessment and planning of a longer-term development initiative.

Fig 1.4: Key M&E activities in the project/programme cycle

There is no one generic project/programme cycle and associated M&E activities. This figure is
only a representation meant to convey the relationships of generic M&E activities within a
project/programme cycle.

The listed PMER activities will be discussed in more detail later in this module. For now, the
following provides a brief summary of the PMER activities.

1. Initial needs assessment. This is done to determine whether a project/programme is
needed and, if so, to inform its planning.
2. Logframe and indicators. This involves the operational design of the project/programme
and its objectives, indicators, means of verification and assumptions.
3. M&E planning. This is the practical planning for the project/programme to monitor and
evaluate the logframe’s objectives and indicators.
4. Baseline study. This is the measurement of the initial conditions (appropriate indicators)
before the start of a project/programme.
5. Midterm evaluation and/or reviews. These are important reflection events to assess and
inform ongoing project/programme implementation.
6. Final evaluation. This occurs after project/programme completion to assess how well the
project/programme achieved its intended objectives and what difference this has made.
7. Dissemination and use of lessons. This informs ongoing programming. However,
reporting, reflection and learning should occur throughout the whole
project/programme cycle, which is why these have been placed in the centre of the
diagram.

1.8. Baseline and end line studies


A baseline study (sometimes just called “baseline”) is an analysis describing the initial
conditions (appropriate indicators) before the start of a project/programme, against which
progress can be assessed or comparisons made. An end line study is a measure made at the
completion of a project/programme (usually as part of its final evaluation), to compare with
baseline conditions and assess change. We discuss baseline and endline studies together
because if a baseline study is conducted, it is usually followed by another similar study later in
the project/programme (e.g. an end line study) for comparison of data to determine impact.

Baseline and end line studies are not evaluations themselves, but an important part of
assessing change. They usually contribute to project/programme evaluation (e.g. a final or
impact evaluation), but can also contribute to monitoring changes on longer-term
projects/programmes. The benchmark data from a baseline is used for comparison later in
the project/programme and/or at its end (end line study) to help determine what difference
the project/programme has made towards its objectives. This is helpful for measuring impact,
which can be challenging.

1.9. Comparing Monitoring, Evaluation, Reviews and Audits


The main differences between monitoring and evaluation are their timing and focus of
assessment. Monitoring is ongoing and tends to focus on what is happening. On the other
hand, evaluations are conducted at specific points in time to assess how well the intervention
performed and what difference it made. Monitoring data is typically used by managers for ongoing
project/programme implementation, tracking outputs, budgets, compliance with procedures,
etc. Evaluations may also inform implementation (e.g. a midterm evaluation), but they are
less frequent and examine larger changes (outcomes) that require more methodological
rigor in analysis, such as the impact and relevance of an intervention.

Recognizing their differences, it is also important to remember that both monitoring and
evaluation are integrally linked; monitoring typically provides data for evaluation, and
elements of evaluation (assessment) occur when monitoring. For example, monitoring may
tell us that 100 community facilitators were trained (what happened), but it may also include
post-training tests (assessments) on how well they were trained. Evaluation may use this
monitoring information to assess any difference the training made towards the overall
objective or change the training was trying to produce.

A review is a structured opportunity for reflection to identify key issues and concerns, and
make informed decisions for effective project/programme implementation. While monitoring
is ongoing, reviews are less frequent but not as involved as evaluations. They are useful to
share information and collectively involve stakeholders in decision-making. They may be
conducted at different levels within the project/programme structure (e.g. at the community
level and at headquarters) and at different times and frequencies. Reviews can also be
conducted across projects or sectors. It is best to plan and structure regular reviews
throughout the project/programme implementation.

An audit is an assessment to verify compliance with established rules, regulations, procedures
or mandates. Audits can be distinguished from an evaluation in that emphasis is on assurance
and compliance with requirements, rather than a judgement of worth. Financial audits
provide assurance on financial records and practices, whereas performance audits focus on
the three E’s – efficiency, economy and effectiveness of project/programme activities. Audits
can be internal or external.

Table 1.5: The key differences among monitoring, evaluations and audits
Why?
 Monitoring and reviews: check progress, inform decisions and remedial action, update
project plans, support accountability.
 Evaluations: assess progress and worth, identify lessons and recommendations for
longer-term planning and organizational learning; provide accountability.
 Audits: ensure compliance and provide assurance and accountability.
When?
 Monitoring and reviews: ongoing during the project/programme.
 Evaluations: periodic and after the project/programme.
 Audits: according to (donor) requirement.
Who?
 Monitoring and reviews: internal, involving project/programme implementers.
 Evaluations: can be internal or external to the organization.
 Audits: typically external to the project/programme, but internal or external to the
organization.
Link to logical hierarchy
 Monitoring and reviews: focus on inputs, activities, outputs and shorter-term outcomes.
 Evaluations: focus on outcomes and the overall goal.
 Audits: focus on inputs, activities and outputs.

1.10. Monitoring and Evaluation Standards and Ethics


Monitoring and Evaluation involves collecting, analyzing and communicating information
about people –therefore, it is especially important that M&E is conducted in an ethical and
legal manner, with particular regard for the welfare of those involved in and affected by it.

International standards and best practices help to protect stakeholders and to ensure that
M&E is accountable to and credible with them. The following is a list of key standards and
practices for ethical and accountable M&E:

1. M&E should uphold the principles and standards of the concerned organizations.
2. M&E should respect the customs, culture and dignity of human subjects - This includes
differences due to religion, gender, disability, age, sexual orientation and ethnicity.
Cultural sensitivity is especially important when collecting data on sensitive topics (e.g.
domestic violence or contraceptive usage), from vulnerable and marginalized groups
(e.g. internally displaced people or minorities), and following psychosocial trauma (e.g.
natural disaster or conflict).
3. M&E practices should uphold the principle of “do no harm”. Data collectors and those
disseminating M&E reports should be respectful that certain information can endanger
or embarrass respondents. “Under this circumstance, evaluators should seek to
maximize the benefits and reduce any unnecessary harm that might occur, provided this
will not compromise the integrity of the evaluation findings”. Participants in data
collection have the legal and ethical responsibility to report any evidence of criminal
activity or wrongdoing that may harm others (e.g. alleged sexual abuse).
4. When feasible and appropriate, M&E should be participatory. Stakeholder consultation
and involvement in M&E increases the legitimacy and utility of M&E information, as well
as overall cooperation and support for and ownership of the process.
5. M&E systems should ensure that stakeholders can provide comment and voice any
complaints about the work. This also includes a process for reviewing and responding to
concerns/grievances.

1.11. Minimize bias and error


M&E helps uphold accountability, and should therefore be accountable itself. This means that
the M&E process should be accurate, reliable and credible with stakeholders. Consequently,
an important consideration when doing M&E is that of bias. Bias occurs when the accuracy
and precision of a measurement is threatened by the experience, perceptions and assumptions
of the researcher, or by the tools and approaches used for measurement and analysis.

Minimizing bias helps to increase accuracy and precision. Accuracy means that the data
measures what it is intended to measure. For example, if you are trying to measure
knowledge change following a training session, you would not just measure how many
people were trained but also include some type of test of any knowledge change.

Similarly, precision means that data measurement can be repeated accurately and consistently
over time and by different people. For instance, if we use a survey to measure people’s
attitudes for a baseline study, two years later the same survey should be administered during
an end line study in the same way for precision.

As much as we would like to eliminate bias and error in our measurements and information
reporting, no research is completely without bias. Nevertheless, there are precautions that
can be taken, and the first is to be familiar with the major types of bias we encounter in our
work:
a. Selection bias results from poor selection of the sample population to measure/ study.
Also called design bias or sample error, it occurs when the people, place or time period
measured is not representative of the larger population or condition being studied. It is a
very important concept to understand because there is a tendency to study the most
successful and/or convenient sites or populations to reach (which are often the same).
For example, if data collection is done during a convenient time of the day, during the
dry season or targets communities easily accessible near paved roads, it may not
accurately represent the conditions being studied for the whole population.
b. Measurement bias results from poor data measurement – either due to a fault in the
data measurement instrument or the data collector. Sometimes the direct measurement
may be done incorrectly, or the attitudes of the interviewer may influence how questions
are asked and responses are recorded. For instance, household occupancy in a disaster
response operation may be calculated incorrectly, or survey questions may be written in
a way that biases the response, e.g. “Why do you like this project?” (Rather than “What
do you think of this project?”).
c. Processing error results from the poor management of data – miscoded data, incorrect
data entry, incorrect computer programming and inadequate checking. This source of
error is particularly common with the entry of quantitative (statistical) data, for which
specific practices and checks have been developed.
d. Analytical bias results from the poor analysis of collected data. Different approaches to
data analysis generate varying results e.g. the statistical methods employed, or how the
data is separated and interpreted. A good practice to help reduce analytical bias is to
carefully identify the rationale for the data analysis methods.

It is difficult to fully cover the topic of bias and error and how to minimize them. However,
many of the precautions for bias and error are topics in the next section of this module. For
instance, triangulating (combining) sources and methods in data collection can help reduce
error due to selection and measurement bias. Data management systems can be designed to
verify data accuracy and completeness, such as cross-checking figures with other data
sources or computer double-entry and post-data entry verification when possible. A
participatory approach to data analysis can help to include different perspectives and reduce
analytical bias. Also, stakeholders should have the opportunity to review data products for
accuracy.
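As an illustration of the double-entry verification mentioned above (a sketch with invented field names and records, not a prescribed tool), the two passes of data entry can be compared record by record and mismatches flagged for checking against the source forms:

```python
# Hypothetical records, for illustration only: the same paper forms keyed
# in twice by different operators.
entry_one = [
    {"household_id": 1, "members": 5, "income": 1200},
    {"household_id": 2, "members": 3, "income": 800},
]
entry_two = [
    {"household_id": 1, "members": 5, "income": 1200},
    {"household_id": 2, "members": 4, "income": 800},  # keying error in "members"
]

def find_mismatches(first_pass, second_pass):
    """Compare the two passes field by field; return discrepancies to recheck."""
    mismatches = []
    for rec_a, rec_b in zip(first_pass, second_pass):
        for field in rec_a:
            if rec_a[field] != rec_b[field]:
                mismatches.append((rec_a["household_id"], field,
                                   rec_a[field], rec_b[field]))
    return mismatches

print(find_mismatches(entry_one, entry_two))  # [(2, 'members', 3, 4)]
```

Only the flagged records need to be rechecked against the paper forms, which makes this a cheap guard against processing error.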

1.12. The challenges of Monitoring and Evaluation


There are many misconceptions and myths surrounding M&E namely: it’s difficult, it’s
expensive, it requires high level skills, it is time and resource intensive, it only comes at the
end of a project and it is someone else’s responsibility. There is often a sense of frustration
because expectations of M&E activities appear to outstrip resources and skill sets. This might
relate to the context within which M&E is designed, who is responsible for designing the
processes and who is responsible for the analysis. Certainly, evaluating programs is complex,
not least because:
1. It is not always easy from the outset to be clear about what constitutes ‘success.’ For
example, the reduction in the absolute number of regulatory procedures may be less
relevant than reductions in costs and processing times or the number of steps for each
regulation or compliance procedure. Similarly, while ‘time’ taken to comply may fall due
to a reform intervention, it is still feasible that ‘cost’ may increase. This raises questions
over how to value, compare and balance the outcomes of interventions. It will depend
on the context of the reform.
2. The impacts often emerge long after the intervention is completed, and are often several
degrees removed from the ‘inputs’ of a program or intervention. Most assessments and
evaluations are conducted at best within six months of the end of a program, which may
in itself only be of a few years’ duration – that is often insufficient time to embed changed
attitudes and roles within institutions.
3. Business environments can be affected, positively or negatively, by a host of external
factors beyond the influence of projects, such as changing world prices of input factors,
trade reform, health problems in the labor force, fiscal and monetary policy etc.
4. Interventions not typically labeled as such, including education improvements, civil service
reform, service delivery improvement, and political reform, can all contribute to increased
economic development, and the impact of these reforms is hard to distinguish.
5. The ‘burden’ of business regulations and regulatory compliance will differ according to
the size of the business, location of the business, and also the sector/activity of the
business. These present complex issues for sample determination and size, ensuring
accuracy and also the aggregation of findings.
6. As for Doing Business indicators, it should be noted that these may not in fact be the
primary consideration of government/private sector stakeholders or the focus of the project.
Table 1.6: The Challenges of M&E
Contextual challenges
Complexity:
 Different stakeholders and development partners have different requirements;
 Requirements change during the life cycle of a program;
 Different donor reporting requirements.
Data availability:
 Baselines not conducted;
 Limited availability of local, especially current, data;
 Limited disaggregation of data;
 Lack of sample frames.
Attitudes and commitment:
 Where there are multiple stakeholders it is difficult to engage collective commitment;
 Stakeholders may be suspicious about how and why information will be used, especially
if progress is slow or limited.
Diversity and inclusion:
 Recognizing issues of diversity and inclusion explicitly.

Design and analysis challenges
Counterfactuals:
 How to measure what the outcome would have been if the reform measure had not
been implemented.
Causality and attribution:
 How to account for complex impact relationships between program activities, outputs
and use of outputs by partner organizations, and eventually their impact on enterprises;
 How to isolate individual reform measures in embedded programs or multi-donor
settings.
Timeframes:
 Time lags and long gestation periods between activities, outputs and outcomes.
Diversity and inclusion:
 Capturing issues of diversity;
 Ensuring inclusion in the evaluation process.

Practical challenges
Cost:
 Finding funds to undertake robust M&E throughout the program and not just at the end;
 Ensuring the M&E budget is in proportion to the scale of the intervention.
Skills and abilities:
 Coping with a low level of local/internal evaluation skills and experience;
 Utilizing an appropriate mix of local and external resources;
 Building local capability and capacity for ongoing evaluation activities and oversight.

How can these challenges be addressed?


While there are challenges in designing and undertaking M&E, there are also proven
strategies and tactics that can mitigate these challenges and point to ways of overcoming
anticipated challenges. Certainly, for programs, the scope, scale and timeframes of the
interventions are complex, as are the sets of stakeholders and processes involved. Therefore,
as a general rule:

 Firstly, it is important to define realistic expectations for assessments of project
interventions and recognize that learning will come from innovation and practice rather
than thinking and theorizing alone.

 Secondly, recognize that ‘one size does not fit all’ and that selection of the most
appropriate approach, methodology, techniques and tools is required.
 Thirdly, recognize that discussions about progress towards goals and debates about what
are appropriate indicators can be an instructive part of the planning process.

To that end, an important principle is to ensure that M&E is considered alongside program
design and assessment, and that an M&E system and plan is put in place which clearly
articulates how evaluation will occur throughout the project management cycle. This material
offers strategies and tactics for practical implementation of effective M&E activities that help
address these challenges.

Activity 2
Answer the following questions.
1. Monitoring and evaluation are closely related to planning. In particular, planning needs to
ensure that planned initiatives are evaluation-ready. Explain the main benefits that make
planning worthwhile.
2. Discuss practical guidance on how these norms and principles can be applied throughout
the evaluation process in order to meet the required quality standards and its intended
role.

Summary

Building and sustaining an M&E system is not easy – it requires commitment, time,
continuous effort, resources and ideally a champion to promote and prioritize the
importance of M&E. But it is possible and there is evidence from current practice that
efficient and effective M&E can be undertaken.
 There are a number of key terms to understand and be able to use for M&E work.
Familiarization with the concepts, the strengths and weaknesses takes time, but is a
worthwhile investment;
 M&E are distinct yet interdependent entities that tell us if we are on the right track,
doing the right things, for the right groups of people in the best way possible;
 Once an M&E system is in place the challenge is to sustain it. In this respect M&E
systems are a continuous work in progress;
 There are challenges to designing and implementing effective M&E but current practice
provides strategies and tactics for addressing those challenges.

Self Assessment Questions-1

Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.

Case Analysis
1. Good planning combined with effective monitoring and evaluation can play a major role
in enhancing the effectiveness of development programmes and projects. Explain the
inter-linkages and dependencies between planning, monitoring and evaluation.
2. Like monitoring and evaluation, inspection, audit, review and research functions are
oversight activities, but they each have a distinct focus and role and should not be
confused with monitoring and evaluation. Differentiate them.

Answer Key to Activities and Self Assessment Questions

Activities

Activity 1

1. Monitoring is the periodic oversight of the implementation of an activity which seeks to
establish the extent to which input deliveries, work schedules, other required actions and
targeted outputs are proceeding according to plan, so that timely action can be taken to
correct deficiencies detected, whereas Evaluation is a process which attempts to
determine as systematically and objectively as possible the relevance, effectiveness,
efficiency and impact of activities in the light of specified objectives. It is a learning and
action-oriented management tool and organizational process for improving current
activities and future planning, programming and decision-making.
Frequency
 Monitoring: periodic, regular. Evaluation: episodic.
Main action
 Monitoring: keeping track/oversight. Evaluation: assessment.
Basic purpose
 Monitoring: improve efficiency, adjust work plan. Evaluation: improve effectiveness,
impact, future programming.
Focus
 Monitoring: inputs, outputs, processes, outcomes, work plans. Evaluation: effectiveness,
relevance, impact, cost-effectiveness.
Information sources
 Monitoring: routine or sentinel systems, field observation, progress reports, rapid
assessments. Evaluation: the same, plus surveys and studies.
Undertaken by
 Monitoring: programme managers, community workers, community (beneficiaries),
supervisors, funders. Evaluation: programme managers, supervisors, funders, external
evaluators, community (beneficiaries).
Reporting to
 Monitoring: programme managers, community workers, community (beneficiaries),
supervisors, funders. Evaluation: programme managers, supervisors, funders,
policy-makers, beneficiaries, community (beneficiaries).

2. There are various types of monitoring commonly found in a project/programme
monitoring system. Among others, these include results monitoring, process (activity)
monitoring, compliance monitoring, context (situation) monitoring, beneficiary
monitoring, financial monitoring and organizational monitoring.
3. There is a range of evaluation types, which can be categorized in a variety of ways. Three
general categories are evaluation timing, who conducts the evaluation, and technicality
or methodology.

Activity 2

1. The Benefits of Planning
1. Planning enables us to know what should be done and when—without proper planning,
projects or programmes may be implemented at the wrong time or in the wrong manner
and result in poor outcomes.
2. Planning helps mitigate and manage crises and ensure smoother implementation—
There will always be unexpected situations in programmes and projects. However, a
proper planning exercise helps reduce the likelihood of these and prepares the team
for dealing with them when they occur. The planning process should also involve
assessing risks and assumptions and thinking through possible unintended
consequences of the activities being planned. The results of these exercises can be
very helpful in anticipating and dealing with problems. (Some planning exercises also
include scenario planning that looks at ‘what ifs’ for different situations that may
arise.)
3. Planning improves focus on priorities and leads to more efficient use of time, money
and other resources—Having a clear plan or roadmap helps focus limited resources
on priority activities, that is, the ones most likely to bring about the desired change.
Without a plan, people often get distracted by many competing demands. Similarly,
projects and programmes will often go off track and become ineffective and
inefficient.
4. Planning helps determine what success will look like—A proper plan helps individuals
and units to know whether the results achieved are those that were intended and to
assess any discrepancies. Of course, this requires effective monitoring and evaluation
of what was planned. For this reason, good planning includes a clear strategy for
monitoring and evaluation and use of the information from these processes.

2. Norms for evaluation

Evaluation should be:
 Independent—Management must not impose restrictions on the scope, content,
comments and recommendations of evaluation reports. Evaluators must be free of
conflict of interest.
 Intentional—the rationale for an evaluation and the decisions to be based on it
should be clear from the outset.
 Transparent—Meaningful consultation with stakeholders is essential for the
credibility and utility of the evaluation.
 Ethical—Evaluation should not reflect personal or sectoral interests. Evaluators
must have professional integrity, respect the rights of institutions and individuals to
provide information in confidence, and be sensitive to the beliefs and customs of
local social and cultural environments.
 Impartial—removing bias and maximizing objectivity are critical for the credibility of
the evaluation and its contribution to knowledge.
 Of high quality—all evaluations should meet minimum quality standards defined by
the Evaluation Office.
 Timely—Evaluations must be designed and completed in a timely fashion so as to
ensure the usefulness of the findings and recommendations.
 Used—Evaluation is a management discipline that seeks to provide information to
be used for evidence-based decision making. To enhance the usefulness of the
findings and recommendations, key stakeholders should be engaged in various ways
in the conduct of the evaluation.

Self Assessment Question-1


1. The inter-linkages and dependencies between planning, monitoring and evaluation
 Without proper planning and clear articulation of intended results, it is not clear what
should be monitored and how; hence monitoring cannot be done well.
 Without effective planning (clear results frameworks), the basis for evaluation is
weak; hence evaluation cannot be done well.
 Without careful monitoring, the necessary data is not collected; hence evaluation
cannot be done well.
 Monitoring is necessary, but not sufficient, for evaluation.



 Monitoring facilitates evaluation, but evaluation uses additional new data collection
and different frameworks for analysis.
 Monitoring and evaluation of a programme will often lead to changes in programme
plans. This may mean further changing or modifying data collection for monitoring
purposes.
2. Distinction between monitoring and evaluation and other oversight activities


 Monitoring can be defined as the ongoing process by which stakeholders obtain
regular feedback on the progress being made towards achieving their goals and
objectives.
 Evaluation is a rigorous and independent assessment of either completed or ongoing
activities to determine the extent to which they are achieving stated objectives and
contributing to decision making.
 Inspection is a general examination of an organizational unit, issue or practice to
ascertain the extent it adheres to normative standards, good practices or other
criteria and to make recommendations for improvement or corrective action. It is
often performed when there is a perceived risk of non-compliance.
 Audit is an assessment of the adequacy of management controls to ensure the
economical and efficient use of resources; the safeguarding of assets; the reliability
of financial and other information; the compliance with regulations, rules and
established policies; the effectiveness of risk management; and the adequacy of
organizational structures, systems and processes.
 Reviews, such as rapid assessments and peer reviews, are distinct from evaluation
and more closely associated with monitoring. They are periodic or ad hoc, often light
assessments of the performance of an initiative and do not apply the due process of
evaluation or rigor in methodology. Reviews tend to emphasize operational issues.
Unlike evaluations conducted by independent evaluators, reviews are often
conducted by those internal to the subject or the commissioning organization.
 Research is a systematic examination completed to develop or contribute to
knowledge of a particular topic. Research can often feed information into
evaluations and other assessments but does not normally inform decision making on
its own.



Unit 2
Frameworks and Indicators for Monitoring and Evaluation

Introduction
Hello dear learner! This is the second unit of the module titled ‘Frameworks and Indicators
for Monitoring and Evaluation’. Successful projects are usually well designed, focused on
their purpose with clearly articulated aims, objectives and actions. The same is true for the
successful assessment of programs and projects. It is important to have a clear framework
and plan of action for Monitoring and Evaluation activities that is incorporated into the
overall project plans. This unit looks at how Monitoring and Evaluation can be effectively
integrated into project planning through the use of tried and tested approaches and the
development of key indicators.

Learning Objectives:

At the end of this unit lesson, you will be able to:

1. Identify the frameworks and systems for the planning and management of projects;
2. Differentiate between the logical framework approach (LFA) and the associated Log
Frame (LF);
3. Describe the basic concept behind Results-oriented approaches;
4. Depict how the logic models and frameworks improve the quality of Monitoring and
Evaluation processes;
5. List the main types of indicators and targets that are used in evaluation work;
6. Analyze the use of comparable and core indicators.



2.1. The Logical Framework approach
A range of frameworks and systems exist for the planning and management of projects. A
widely used tool in the development community is the logical framework approach (LFA)
and the associated Log Frame (LF), as it is commonly termed, and the underlying program
logic model (PLM).

The Logical Framework Approach:

The Log Frame helps to clarify the objectives of any project, program, or policy and improve
the quality of M&E design. It aids in the identification of the expected causal links – the
‘program logic’ - in the following results chain: inputs, processes, outputs, outcomes, and
impact. It leads to the identification of performance indicators at each stage in this chain,
looks at the evidence needed to verify these indicators as well as the assumptions that
underlie them and the risks which might impede the attainment of results.

The Logical Framework (Rosenberg & Posner, 1979) was developed for the United States
Agency for International Development as a tool to help conceptualize a project and analyze
the assumptions behind it. Since the development of the Logical Framework, it has been
adopted, with various adaptations (GTZ, 1983), by a large number of bilateral and
international development organizations. The Logical Framework has proven extremely
valuable for project design, implementation, monitoring, and evaluation.

The Log Frame is so named because of the logic processes that underpin its creation and
format. This logic is explained and demonstrated through something called the program logic
model. A logic model may also be called a theory of change, program action model, model of
change, conceptual map, outcome map, or program logic. This is a way of thinking about how the
various components of a project relate to each other to achieve impact and meet goals. The
model is illustrated in Figure 2.1. This shows that specified inputs are used in a project to
produce or undertake a series of activities which in turn deliver things such as advisory
services, training, and public awareness campaigns as part of programs and projects.



These activities are intended to result in outputs (including coverage or “reach” across
specified beneficiary groups), such as reports, recommendations, training events, and media
coverage. In turn, these outputs are expected to yield certain outcomes in terms of changes
in knowledge, behavior and performance among beneficiaries in the target population.
Finally, it is anticipated that projects will generate development impacts including such things
as higher productivity, increased income, investment and employment. Many development
partners use some form of the logic model to design, plan and manage their programs.
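The results chain just described (inputs, activities, outputs, outcomes, impact) can be sketched as a small ordered structure. This is only an illustration: the stage names follow the text, while the helper function and example call are hypothetical.

```python
# Illustrative sketch of the program logic (results) chain described above.
# Stage names follow the text; the helper function is purely for illustration.
RESULTS_CHAIN = ["inputs", "activities", "outputs", "outcomes", "impact"]

def downstream_of(stage):
    """Return the stages a given stage is expected to lead to, in order."""
    i = RESULTS_CHAIN.index(stage)
    return RESULTS_CHAIN[i + 1:]

# Outputs are expected to yield outcomes and, ultimately, impact:
print(downstream_of("outputs"))
```

Walking the chain forward in this way mirrors how the logic model links each component of a project to the changes it is meant to produce.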

Figure 2.1: The program logic model



As was seen in the preparation of the project profile, there is a logical interrelationship
between the overall Problem, the Goal, the Specific Objective, the Expected Outputs, and the
Activities. The Logical Framework facilitates an analysis of these interrelationships and their
relationships with the surrounding environment.

Figure 2.2: Sample program logic model

How does the Log Frame help with Project Evaluation?


The LF and its PLM can provide useful frameworks and tools for evaluation work. They can be
used to demonstrate the role of monitoring, evaluation and impact assessment and the
specific points at which M&E should be undertaken in program or project implementation.

Figure 2.3: The Place of M&E in the logic model

As Figure 2.3 shows, monitoring work focuses on the progress and tracking of inputs,
implementation of activities and production of outputs, whereas evaluation tends to take
place at specific points/stages in a project and permits an assessment of progress over a
longer period of time. The focus is on tracking changes in relation to outcomes (with
reference to objectives) and impact, in terms of the project goals. Also the LF, when
presented in a table-like matrix format can be a useful way of capturing both the content of a
project together with the key components of the M&E plan.

Table 2.1 summarizes a project and its key M&E features in a systematic way, showing:
 what a project intends to achieve;
 what it intends to do to achieve this and how;
 what the key assumptions are in doing this; and
 how the inputs, activities, outputs, outcomes and impact will be monitored and
evaluated.



Table 2.1: The Logical Framework Matrix Structure

Each level of the project logic is described against three further columns: Objectively
Verifiable Indicators (OVIs), Sources of Verification (SOV), and Assumptions or Risks.

Goal/Overall Objectives: What are the wider problems which the project will help to resolve?
This is the development impact to which the project contributes, at a national and/or
sectoral level.
 OVIs: The measures for judging whether or not the goal has been achieved; measures of the
extent to which a sustainable contribution to the goal has been made.
 SOV: Sources of information and methods used to collect and report on the goal/overall
objectives.
 Assumptions/Risks: What are the external factors needed to sustain the goal achievement?
What are the risks that might prevent this sustainable achievement?

Purpose/Objective Outcome: What are the expected benefits (or dis-benefits) and to whom will
they go? What improvements or changes will the project bring about?
 OVIs: Measures by which achievements at the end of the project can be quantified,
indicating that the purpose has been achieved and that these benefits are sustainable.
 SOV: Sources of information and methods used to collect and report on achieving the
purpose.
 Assumptions/Risks: What are the assumptions, and hence risks, concerning the purpose/goal
linkage, i.e. the achievement of the project purpose contributing towards the project goal or
overall objectives?

Project Outputs: The direct, measurable results (goods and services) of the project which are
largely under project management's control.
 OVIs: Measures of the quantity and quality of outputs and the timing of their delivery.
 SOV: Sources of information and methods used to collect and report on achieving the
project outputs.
 Assumptions/Risks: What are the assumptions, and hence risks, concerning the
output/purpose linkage? What external factors outside of the control of the project will, if
not present, restrict or stop the project achieving its purpose?

Project Activities: The activities or tasks that need to be undertaken to accomplish or
deliver the identified project outputs.
 OVIs: Implementation/work program targets.
 SOV: Sources of information and methods used to collect and report on project activities.
 Assumptions/Risks: What are the assumptions/risks concerning the activity/output linkage?
What external factors are needed to achieve the project outputs?

Project Inputs: The resources needed to deliver the project activities (funds, people,
equipment, etc.).
 OVIs: Implementation/work program targets.
 SOV: Sources of information used to report on the inputs needed to produce the project
activities.
 Assumptions/Risks: What are the assumptions/risks concerning the input/activity linkage?
What external factors are needed to achieve the project activities?

The matrix includes performance indicators, sometimes called Objectively Verifiable
Indicators (OVIs), the Sources of Verification (SoV) for those OVIs, and the assumptions and
risks that could work against achieving the objectives.
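One minimal way to hold the four columns of the matrix for each level is a small record type. The field names mirror the column headings above; the sample row and its values are invented for illustration.

```python
from dataclasses import dataclass

# Minimal sketch of one row of the Log Frame matrix (as in Table 2.1).
@dataclass
class LogFrameRow:
    level: str        # e.g. Goal, Purpose/Outcome, Outputs, Activities, Inputs
    ovis: str         # Objectively Verifiable Indicators
    sov: str          # Sources of Verification
    assumptions: str  # Assumptions or risks at this level

# Hypothetical example row for a business-registration reform project:
output_row = LogFrameRow(
    level="Project Outputs",
    ovis="Mapping report covering all registration processes delivered by month 2",
    sov="Project records; copy of the report",
    assumptions="Partner agencies provide access to current procedures",
)
print(output_row.level)
```

A full logframe would then simply be a list of such rows, one per level of the project logic.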

How will the logic models and frameworks improve the quality of M&E processes?
Using a tried and tested form of LF will not only encourage a clarity of purpose and practice
for project implementation but will also provide the same for the nature and form of project
M&E to be undertaken. Training is often required to promote the effective use of LFs.
However, if used appropriately they provide an opportunity and vehicle for engaging a range
of partners and other stakeholders in a participatory approach to M&E and communicating
intent to a wider audience. There are strengths and weaknesses in any approach. Table 2.2
summarizes those associated with Log Frames.

Table 2.2: The Strengths and Weaknesses of Logical Frameworks

Strengths:
 Clarity of M&E indicators, methodology and assumptions.
 Encourages review of progress and taking corrective action.
 Encourages participative approaches by engaging partners and stakeholders in clarifying
objectives and designing activities.
 Considerable good practice and literature available.
 Assists in the preparation and management of operational plans for M&E.

Weaknesses:
 Of limited value if done in isolation.
 Assumptions of causality may be weak.
 Can be counter-productive if adhered to too rigidly.
 Sometimes difficult to accommodate the unexpected.
 Needs some training/expertise to design and use effectively.
 If not updated during implementation, can fail to reflect changing conditions.

2.2. Results-oriented approaches


Results-oriented measurement is a project planning and M&E approach developed and used
by GTZ. This approach is a variant of the LF in the sense that it is based on similar logic
and uses some of the same terminology.

Fig. 2.4 Results Chain for Monitoring and Evaluation

However the approach highlights two aspects of M&E activity that are different to standard
LFs:
a) The focus on measuring ‘results’ throughout a project which are described and linked by
a causal impact chain; and
b) How impact is measured and attributed throughout the impact chain.

What are results and impact chains?


GTZ emphasize the use of the term ‘results’ in their M&E, although they do use the LF
terminology of activities, outputs and outcomes. The use of the term results reinforces the
view that benefits can be produced throughout the implementation of a given program and
not just towards the end of the project period. The different results that are derived from the
inputs, activities, outputs, and outcomes of a project are linked through a logical process
called a causal impact chain.



Like a Log Frame, the results-based impact chain also gives attention to activities, outputs,
outcomes and impact. As Figure 2.5 shows, starting from the core problem, inputs are used to
launch activities that generate outputs. These are then utilized by target groups or
intermediaries (use of outputs), generating medium-term and long-term development results
i.e. outcomes and impacts.

This results-based impact chain model can also be translated into a matrix similar to the Log
Frame, for project planning and management as is illustrated below.

Figure 2.5: Results-based impact chain

What is the Attribution Gap?


The results-based impact chain is different in one important respect to the traditional LF
approach. It gives explicit acknowledgement of the challenges of attributing cause and effect
(or impact) to a given intervention, attempting to identify when the attribution of impact to
an intervention becomes compromised. The results based impact chain starts the process of
reflecting on the effect of an intervention from the outset and continues to conduct
evaluative review throughout, including the period that would be described as monitoring in
the LF.



Further up the impact chain, external factors that are not directly related to and/or under the
influence of projects and programs being assessed, increasingly come into play and can have
important influences on the changes that occur. At this point it is explicitly acknowledged
that observed changes in project target groups may not be directly attributable to the
project interventions and outputs. The point or level beyond which the results cannot be
directly linked to the intervention and benefits are ‘indirect’ is termed the attribution gap.
The causal impact chain links the outcomes of individual interventions to potential direct and
indirect benefits. ‘Impact’ relating to project goals tends to be seen as something that is
measured at an aggregate level i.e., the point at which there have been a series of related
interventions. The ‘attribution gap’ is contextual, depending on the complexity and scale of
the project being considered and as such can occur at different points in the causal chain.
These subtle but important differences in the way that different development partners view
and capture impact within their M&E frameworks are discussed further in the next section.
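The attribution-gap idea above can be sketched by splitting the chain at a chosen point: stages before the gap are treated as directly attributable to the intervention, stages beyond it as indirect. The stage names follow the results-based impact chain in the text; where the gap actually falls is contextual, so the cut-off used below is only an example.

```python
# Illustrative sketch of the attribution gap. The cut-off stage is contextual;
# "use of outputs" below is just an example, not a rule.
CHAIN = ["inputs", "activities", "outputs", "use of outputs", "outcomes", "impact"]

def split_at_attribution_gap(last_direct_stage):
    """Split the chain into directly attributable results and indirect benefits."""
    i = CHAIN.index(last_direct_stage) + 1
    return CHAIN[:i], CHAIN[i:]

direct, indirect = split_at_attribution_gap("use of outputs")
print(direct)
print(indirect)
```

For a larger or more complex project the same split might instead be placed earlier or later in the chain, reflecting how quickly external factors begin to dominate observed change.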
Activity 1
Answer the following questions.
1. Elaborate the logical framework approach.
2. When should monitoring activities be carried out?

2.3. Understanding indicators


Putting together a Log Frame or impact chain for a project involves identifying performance
indicators (or OVIs) which are going to help us ‘objectively verify’ whether or not our
interventions have achieved the intended activities, outputs, outcomes and impact.

The fundamental challenge for the program manager is to develop appropriate performance
indicators which measure project performance. These indicators measure the things that
projects do, what they produce, the changes they bring about and what happens as a result
of these changes.

In order to choose indicators, decisions must be made about what to measure. Having the
right indicators underpins effective project implementation and good M&E practice.



Therefore time, effort, debate and thought should be given to their identification, selection
and use.

What is an indicator?
To measure something it is important to have a unit or variable ‘in which’ or ‘by which’ a
measurement is made, i.e. an indicator. For example, in business enabling environment (BEE)
work, if the aim is to make registering a business easier, then changes in the time taken and
the costs of registering are useful indicators of whether and how the intervention has made a
difference.

What types of indicators do I need?


Firstly, there is a need to distinguish indicators for different levels of assessment, that
is, monitoring, evaluation and impact indicators. Monitoring indicators concern tracking the
progress of project implementation and primarily relate to inputs and activities. Evaluation
and impact indicators relate to measuring the results of the project: the outputs, the
outcomes and, ultimately, the impact. Each aspect of implementing a project or program has
typical types of indicators illustrating performance at each project level, as Table 2.3
shows.

Table 2.3: Typical indicators for different levels of assessment

Inputs/Activities
 Generic examples: human resources; financial resources; material resources; training.
 Specific examples: training for officers; awareness events for stakeholders; mapping
exercises.

Outputs
 Generic examples: products; recommendations/plans; studies/reports; legislation drafted.
 Specific examples: mapping reports; press releases; written inspection reports; awareness
of various audiences; training for stakeholders; legislative drafting.

Outcomes
 Generic examples: change in knowledge and/or behavior; improved practices; increased
services; legislation passed.
 Specific examples: positive client feedback; reduction in number of steps, time and cost in
a process; increasing use of mediation center/one-stop shop.

Impact
 Generic examples: increased sales; increased employment; increased profitability.
 Specific examples: increased formalization; increased exports/imports; sustainability of
mediation center/one-stop shop; % increase in municipal revenue.

Indicators, wherever possible, need to generate consistent measurements. They need to be
selected or constructed so that when different observers measure performance, they will
come to the same conclusion. Different types and aspects of interventions may require
different types of indicators or a combination of indicators.

2.4. Selecting indicators and setting targets


It is important to use both qualitative and quantitative forms of data in your M&E practice
because each can bring a different perspective to the same event or change and act as a
check on the other sources as a means of verification or refutation. Table 2.4 sets out the
main types of indicators used in evaluation work, their characteristics and use, and some
observations on each.

Table 2.4: Different types of evaluation indicators

Direct
 Characteristics and use: For observable change resulting from activities and outputs.
 Observations: May simply be a more precise and operational restatement of the objective.

Indirect (proxy)
 Characteristics and use: Useful when the objective is not directly observable, e.g.
‘competitiveness’ is not a thing as such but comprises a bundle of performance criteria
including increases in profitability, turnover, range of products and % sales.
 Observations: May be used instead of, or in addition to, direct indicators (e.g. improved
institutional capacity), or where the cost of direct measurement may be prohibitively
expensive. There must be a clear relationship between what is being measured and the
indicator being used.

Qualitative
 Characteristics and use: A way of measuring levels of participation, attitudinal change,
behavioral change, emergence of leadership, access to political processes, evidence of
consensus, e.g. business satisfaction levels, attitudes of officials, the experience of women
registering businesses.
 Observations: Special effort and attention required to get real value. It is generally
easier to measure behavior than feelings, so there is a need to observe or measure how often
things occur, e.g. a measure of confidence might be how often someone speaks and the reaction
of the listener.

Quantitative
 Characteristics and use: Can measure frequency, growth rates, prices, e.g. numbers of laws
that need reform, reduction in the cost of customs fees for exporting, or time taken to
register a business.
 Observations: Often perceived as more reliable and more useful for comparison as they are
‘countable’.

Process
 Characteristics and use: Allows measurement of how things are being done; belief that
better implementation and real problems and needs will be considered; often qualitative.
 Observations: Often subjective, as the means of verification relies on personal
perspective. An important means of addressing diversity and inclusion.

Cross-cutting
 Characteristics and use: Often used to describe indicators relating to gender, diversity
and environment.
 Observations: Will still need to be direct, indirect, quantitative or qualitative.

Formative
 Characteristics and use: Set up within a timeframe to be measured during a phase of
intervention.
 Observations: Sometimes used interchangeably with milestones.

Summative
 Characteristics and use: Used to measure performance at the end.
 Observations: Formative and summative are terms also applied to evaluations.

Process Indicators:
M&E is inevitably focused on results, so what has been achieved tends to be the priority.
However, the process by which results are achieved is often as important as the results
themselves. For example, measuring the changes in attitudes and commitment of the front-line
officers when reforming business registration procedures may give insight into why businesses
are still reluctant to register despite reductions in the time and cost of doing so.
Process-related aspects in evaluation can be more difficult to measure, as it is harder to
predict when they will occur and who will be involved. Processes can also be experienced and
perceived differently by the different stakeholders involved, and this needs to be taken into
account. However, these different perspectives can be illuminating and important to consider.
Communication is another area where process indicators are critical to measuring success. The
role of communication is increasingly recognized as important, both for achieving
developmental results and for sharing knowledge about results with others. As a result,
communication strategies are increasingly distinct and explicit components of development
projects and as such need to be evaluated.

Cross-cutting indicators:
The activities of, and results arising from, development interventions can be experienced and
perceived differently by different stakeholders. Successful M&E takes this into account.
Indicators must adequately reflect and capture the views and experiences of different
stakeholders. In considering indicators for different stakeholders, it is important to
consider and include those who may lose out as a result of the interventions, as well as
those who will benefit.

Are targets the same as indicators?


The terms indicator and target are often used synonymously but, in fact, there is a subtle
but important distinction. Indicators are the means by which change will be measured; targets
are the ends.

In the earlier business registration example, the success of a reform could be attributed to
any increase in registration, no matter how small and over any given number of days. Targets
set the amounts of change to be achieved and measured, and the timeframe within which this
will be achieved. So, in the example, successful performance will have occurred if there has
been a 5% increase in businesses registering in less than 5 days per month.
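The difference between an indicator and a target can be made concrete with a small check against the example target in the text (a 5% increase in businesses registering in less than 5 days). This is only a sketch: the baseline and observed monthly counts are invented for illustration.

```python
# Hedged sketch: the indicator is the count of businesses registering in under
# 5 days per month; the target adds the required amount of change (5%).
def target_met(baseline, observed, required_increase_pct=5.0):
    """True if observed exceeds baseline by at least the required percentage."""
    return observed >= baseline * (1 + required_increase_pct / 100)

# Hypothetical monthly counts of businesses registered in under 5 days:
print(target_met(baseline=200, observed=212))  # 6% increase: target met
print(target_met(baseline=200, observed=205))  # 2.5% increase: target not met
```

The indicator alone (the count) would treat any increase as progress; the target turns it into a pass/fail judgement by fixing the amount of change required.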



Indicators are more likely to be objective if they include elements of quantity, quality and
time (QQT). They ‘become’ targets when they incorporate all of these aspects. If we look at
some typical output and outcome indicators for a business registration simplification
program we can apply targets.

Table 2.6: Indicators and Targets

Project output indicators and targets:
 Indicator: The production of a report with full mapping of existing procedures by month 2.
Target: A report on all registration processes will be produced and delivered in hard and
electronic copy to the team leader by March 31st 20x7.
 Indicator: Number of individuals trained in technical workshops by month 10.
Target: At least 40 officers (10 from each of the 4 core partners) will have successfully
completed the three core workshops by September 30th 20x7.

Project outcome indicators and targets:
 Indicator: Number of laws/regulations changed because of reform work by month 10.
Target: At least 25% of those regulations deemed ‘redundant’ will have been cut by
September 30th 20x7.
 Indicator: Reduced cost and time of registration in each process under reform by month 22.
Target: There will have been a 50% reduction in time and a 25% reduction in cost of
registering a business in X by September 30th 20x8.

Sometimes there is insufficient data to develop targets at the early stages of a project, and
it would be a fundamental mistake to make up unrealistic targets. Therefore it is
entirely acceptable to present indicators without targets in an early LF. The important thing is
that the LF includes indicators that measure the elements of change that are likely to happen.
Once approval has been given and the intervention is underway indicators can be checked
with partners and stakeholders and targets can be constructed and agreed.

What makes a good indicator?


Having selected the type of indicators to use with your M&E it is important to check that they
make sense and work in practice. Training manuals and M&E workshops will often use the
mnemonics SMART and SPICED. This is intended as a checklist for assessing the construction
of indicators.



Table 2.7: Criteria of a good indicator
Indicators used for gathering performance information should be…… SMART
S Specific Reflect what the project intends to change and are able to assess
performance;
M Measurable Must be precisely defined; measurement and interpretation are
unambiguous. Provide objective data, independent of who is
collecting data. Be comparable across projects allowing changes to
be compared;
A Attainable Achievable by the project and sensitive to change. Feasible time and
money to collect data using chosen indicators. Available at a
reasonable cost;
R Relevant Relevant to the project in question;
T Time bound Describes when a certain change is expected;
Indicators used when collecting subjective information should be….. SPICED
S Subjective Contributors have a special position or experience that gives them
unique insights which may yield a high return on the evaluator’s time.
What may be seen by others as 'anecdotal' becomes critical data
because of the source's value.
P Participatory Indicators should be developed together with those best placed to
assess them. This means involving the ultimate beneficiaries, but it
can also mean involving local staff and other stakeholders
I Interpretable Locally defined indicators may be meaningless to other stakeholders,
so they often need to be explained.
C Cross-checked The validity of assessment needs to be cross-checked, by comparing
different indicators and progress, and by using different informants,
methods, and researchers.
E Empowering The process of setting and assessing indicators should be
empowering in itself and allow groups and individuals to reflect
critically on their changing situation
D Disaggregated There should be a deliberate effort to seek out different indicators
from a range of groups, especially men and women. This information
needs to be recorded in such a way that these differences can be
assessed over time.
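The SMART criteria above can be applied as a simple review checklist. The sketch below only records yes/no answers per criterion; judging whether an indicator actually is specific, measurable and so on remains a matter of human judgement, and the sample review is invented.

```python
# Illustrative checklist sketch for the SMART criteria in Table 2.7.
SMART_CRITERIA = ["Specific", "Measurable", "Attainable", "Relevant", "Time bound"]

def smart_gaps(answers):
    """Given {criterion: True/False}, list the criteria not yet satisfied."""
    return [c for c in SMART_CRITERIA if not answers.get(c, False)]

# Hypothetical review of a draft indicator that lacks a timeframe:
review = {"Specific": True, "Measurable": True, "Attainable": True,
          "Relevant": True, "Time bound": False}
print(smart_gaps(review))
```

Any criterion listed by the check points to a rework of the indicator, e.g. an indicator failing "Time bound" needs a date or period added before it can become a target.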



2.5. Using comparable and core indicators
Why does it matter who sets the indicators?
Who sets indicators is fundamental, not only to ownership and transparency, but also to the
effectiveness of the indicators chosen. M&E specialists may feel that M&E experts are best
placed to set indicators. In this way, they can be more confident that the indicators
constructed achieve the primary purposes of:
 Ensuring the right things are measured, relating to the goal and the target group.
 Achieving a means of comparing results, to other projects in a given place and time or in
different places and times.
 Being transparent about the basis on which performance is being measured and judged.

Others believe that more appropriate indicators are developed through a participative process
of development with intervention partners and stakeholders. This is likely to achieve greater
ownership of the results of the intervention. The insight of a local view may bring the added
benefits of a greater commitment to collecting the required data, an understanding of the
importance of accuracy and timely collection, and help to build local evaluation capability
and capacity, as noted in the previous section.

Ideally, both views can be incorporated. One way of achieving this is to have a set of core or
common or comparable indicators that have been developed by the experts to allow for
cross project and or country comparison and then customized indicators developed through
local participative processes of analysis and design. The definitions of each of those
indicators are given below.

Outputs are closely related to project deliverables. They include recommendations and
amendments to laws and regulations, trainings, and consultations which can be counted.

Outcomes capture the implementation of program recommendations. In the intermediate
term, they relate to evidence of recommendations and action plans being implemented, laws
and regulations amended and passed, organizations improving their operations, and
improved procedures.

In the longer term, outcomes can be viewed from both the government (public welfare) and
the enterprise perspective and are typically seen in terms of reduced steps, time and cost of
gaining the registration, license or permit, or complying with the regulatory procedures. They
can also capture reduced risk through the reduction in delays and reduction in corruption.
This in turn leads to quicker and cheaper registration and increased levels of compliance with
regulatory systems.

The impact of business regulation reforms is the contribution to economic growth in the
formal economy via the improved business enabling environment. Indicators include the
aggregate cost saving enjoyed by businesses through the improved regulatory environment,
productive private sector investments (i.e., foreign direct investments, gross fixed capital
formation) and the number of formal enterprises and jobs (formalization).

In addition to these core indicators, there are additional indicators that might be relevant to
specific types of programs and especially relevant at the outcome and impact levels. For
instance, different industries are usually regulated in different ways. For example, the
chemical industry will involve different legislation and regulations than say those in the
garment sector. Hence industry-specific reforms may include a suite of regulatory reforms in
reference to a particular industry/sector. Additional indicators will need to capture the
outcomes and impact on the industry itself and associated increases in productivity, growth
(for example via exports) and investment.

Table 2.8: Indicators for reform programs


Output Indicators
 Number of entities receiving advisory services;
 Number of media appearances;
 Number of laws/regulations;
 Amendments/codes drafted or contributed to the drafting;
 Number of participants in workshops, training events, seminars, conferences;



 Number of participants reporting satisfied or very satisfied with workshops, training,
seminars, conferences, etc;
 Number of procedures/policies/practices proposed for improvement or elimination;
 Number of reports (assessments, surveys, manuals) completed;
 Number of women participants in workshops, training events, seminars, conferences, etc.
Outcome Indicators
 Average number of days to comply with business regulation;
 Average official cost to comply with business regulation;
 Number of businesses completing a new/reformed procedure in a given jurisdiction;
 Number of entities that implemented recommended changes;
 Number of recommended laws/regulations/amendments/codes enacted;
 Number of recommended procedures/policies/practices that were improved/eliminated;
 Number of reforms resulting from advisory service as measured by Doing Business;
 Number of investor inquiries in targeted sectors.
Impact Indicators
 Number of formal jobs;
 Value of aggregate private sector savings from recommended changes;
 Value of investment/financing facilitated by advisory services;

What are the advantages and disadvantages of core indicators?

Using core indicators has distinct advantages. They provide an objective and comparable
basis for assessing performance and therefore provide a solid foundation for management
decisions. The comparable dimensions mean that core indicators can be used for
benchmarking and facilitating learning within the donor institution and external
stakeholders.

However, there are also challenges and limitations to using core indicators. One of the main
arguments is ‘our situation is different’ and that core indicators do not address country-
specific objectives. They are seen as a very ‘top down approach’ imposed on field offices and
projects and do not promote local stakeholder ownership in projects or their evaluation.

A major issue for some programs is that core indicators, especially for outputs and outcomes,
typically use counting techniques. For example, an outcome for a business regulatory reform
program is the number of revised laws passed. An issue arises when this type of indicator is
used comparatively, perhaps to compare progress in different countries. Does this really
compare like with like?

In one country a major piece of law may need adjustment to reduce cost and time in business
licensing procedures. This could be counted as ‘1’ as an output indicator. In a neighboring
country, the legal framework for business regulations could look quite different, and the
reform intervention in this case has required multiple small legislative changes. But, does this
then compare like with like? What is the magnitude, or ‘quality’ of the indicator? In this
respect, core indicators will only tell some of the story. They are important for developing
benchmarks and for donor oversight of reform interventions. However, they must be
contextualized and complemented by additional customized (or bespoke) indicators and
other monitoring information. This will be discussed in more depth in the forthcoming
sections.

Are core indicators the same as ‘comparable’ indicators?


With the stronger orientation of monitoring systems towards impact and development
results, there has been a strong push by some organizations within the donor community to
develop internationally comparable evaluation indicators.

Doing Business (DB) indicators are extremely important, useful and powerful. However, both their
strengths and limitations must be understood in order for them to be used most
appropriately and to effectively add value to M&E work. Ideally the DB indicators should be
triangulated with primary data and also qualitative indicators and methods to capture
perceptions and experiences of diverse stakeholders as well as the procedures associated
with reforms.
Activity 2
Answer the following questions.
1. What is the significance of Target setting as part of M&E planning? Do targets change?
Explain it.
2. What are important considerations for a monitoring and evaluation plan?



Summary

 The building blocks of a fit-for-purpose M&E for a project consist of a series of logical
steps to demonstrate that the proposed or enacted reform has a means of measurement
known as indicators that are integrated into the planning and management cycle.
 Clarity regarding the purpose and use of an indicator will contribute to the potential for
benchmarking, comparison and cross-checking (or triangulation) of processes and results.
 The Logic model and its associated frameworks are a tried and tested mechanism for
thinking through and presenting an overview of a project and the attendant M&E and IA
process, activities and timescale.
 Indicators are a critical component of effective M&E.
 Indicators are required for each aspect (monitoring, evaluation and impact) and at all
levels of a project (inputs, outputs, outcomes and impact).
 There are several types of indicators – quantitative and qualitative, direct and indirect,
activity and process, and those representing the diversity of stakeholders – so it is likely
that a mix will be required.
 Measuring change is costly. However, it is still necessary to ensure that there are
sufficient and relevant indicators to measure the breadth of change and to provide cross-
checking or triangulation.
 The creation of universal impact indicators is being explored with concepts such as
private sector savings and aggregate cost savings.
 There is a wealth of resources (in print and on-line) to help develop skills and insight. Key
texts and references are listed in the Handbook.



Self Assessment Questions-2

Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.

1. Consider any project which may be hypothetical or real and demonstrate by using a
logical framework model.
2. Briefly present the elements of a Project Profile and explain the information
presented in each element.
3. Consider any project which may be hypothetical or real and develop alternative Project
profiles.
4. In M&E planning, one of the things that managers have to work out is a set of
indicators. Understandably, questions often arise regarding what indicators are, their
importance and what to consider when choosing them. Explain about indicators, their
types, their importance and eventually, how to select appropriate indicators.



Answer Key to Activities and Self Assessment Questions

Activities

Activity 1

1. Logical Framework /Logframe Approach is a methodology for planning, managing
and evaluating programmes and projects, using tools to enhance participation and
transparency and to improve orientation towards objectives.

2. Circumstances for monitoring activities



 Monitoring activities should be conducted at key moments during the intervention
that will facilitate an assessment of progress towards the objectives and goal.
 Programmes ideally involve continuous monitoring – or routine collection of data
and information that will allow them to gauge if activities are being implemented
according to expectations, and if barriers or challenges need to be addressed.
 With a series of trainings for example, key monitoring moments should be set after a
certain number of trainings.
 With an awareness-raising campaign, key monitoring moments should be set after
each aspect of planning and implementing the campaign (e.g. determining exposure
to information disseminated through the media after key periods).

Activity 2

1. Target setting is a critical part of M&E planning and responsible project/programme
management. In order to determine variance (the percentage of target reached), it is
necessary to not only measure the indicator but identify beforehand a target for that
indicator. Project/programme teams may hesitate to set targets, afraid that they may
not accomplish them, or sometimes it is just difficult to predict targets. However,
target setting helps to keep the project/programme’s expected results realistic, to plan
resources, track and report progress (variance) against these targets, and to inform
decision-making and uphold accountability. Data collected during project/programme
M&E often leads to reassessing and adjusting targets accordingly. Certainly, such
changes should follow any proper procedures and approval.
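The variance calculation described above can be sketched in a few lines of Python. The target and measured value below are invented for illustration, not taken from the module:

```python
def variance(actual, target):
    """Percentage of the target reached by the measured (actual) value."""
    if target == 0:
        raise ValueError("target must be non-zero")
    return 100.0 * actual / target

# Illustrative figures: the target was 70% of households using
# chlorinated water; the mid-term measurement found 56%.
target = 70
actual = 56
pct = variance(actual, target)
print(f"{pct:.0f}% of target reached")  # -> 80% of target reached
```

Tracking this figure per indicator across reporting periods is what allows progress (variance) to be reported against targets, as the answer notes.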
2. Considerations for a monitoring and evaluation plan



 Resources: how much money and time will be needed to conduct the activities?
 Capacity: Does the programme/project have internal capacity to carry out the
proposed monitoring and evaluation activities, including analysis of data collected,
or will outside expertise be needed?
 Feasibility: Are the proposed activities realistic? Can they be implemented?
 Timeline: Is the proposed timeline realistic for conducting the proposed activities?
 Ethics: What are the ethical considerations and challenges involved with
implementing the proposed activities, and is there a plan in place for addressing
those considerations?

Self Assessment Questions-2

1 Logical Framework (Project Planning Matrix - PPM)


Project Title: Institutional development for crop production        Country: Ethiopia
Estimated Duration of Project: 18 months        Date PPM prepared: September 9, 20x9

Goal: Increase the domestic supply and exports of good quality crop from Ethiopia.
   Objectively Verifiable Indicators:
   1. National production and exports of Coffee and two other priority crops will
      increase by 10% between July 2009 - 2012.
   Means/Source of Verification:
   1. Ministry of Agriculture national production statistics.
   2. Ministry of Trade export statistics.
   Important Assumptions:
   1. Market prices will remain favorable.
   2. Satisfactory marketing infrastructure will be in place.

Specific Objective: Improve the specific production and marketing services available to
crop producers in Ethiopia.
   Objectively Verifiable Indicators:
   1. Annual increases in the number of farmers in Ethiopia growing fruit on a
      commercial scale.
   2. Improved institutional structure for services in credit, technical assistance,
      research, nurseries, and distribution of farm inputs.
   Means/Source of Verification:
   1. Ministry of Agriculture annual survey of farmers.
   2. Comparison of organizational charts and number of employees in key divisions of
      the Ministry of Agriculture each year: 2009, 2010, 2011, 2012.
   3. Annual budgets of the Ministry of Agriculture.
   Important Assumptions:
   1. Agricultural policy will be modified in favor of crops.
   2. Crop farmers will have access to credit and technical assistance.

Expected Outputs:
   1. Improved planting material available.
   2. Established research.
   3. Tech-packs for Coffee and other fruit.
   4. Effective mechanism for production and distribution of planting material.
   5. Well-trained MOA staff.
   6. Effective system for distribution of farm inputs and planting material.
   Objectively Verifiable Indicators:
   1. Number of farmers receiving improved planting material.
   2. New research structure and full staff in operation.
   3. One tech-pack published each year 2010-2012.
   4. Same as #1.
   5. Noticeable increase in the productivity of MOA staff in research and at nurseries.
   6. Three farmer organizations with input supply centers and planting material.
   Means/Source of Verification:
   1. Interviews with farmers.
   2. Ministry of Agriculture budget and annual reports.
   3. Published documents.
   4. Interviews with farmers.
   5. Periodic evaluations of staff members.
   6. Annual reports of each farmer organization documenting volume of sales through
      input outlets.
   Important Assumptions:
   1. MOA must prioritize crops and facilitate imports of plant material.
   2. MOA to restructure research/extension divisions.
   3. MOA to hire graphic arts specialist.
   4. Extension agents will coordinate closely with farmer organizations.
   5. Additional necessary staff will be hired.
   6. Complementary project to strengthen farmer organizations financed.

Activities:
   1. Import/reproduce improved varieties of crops.
   2. Research & validation of production/post-harvest.
   3. Prepare/distribute tech-packs.
   4. Establish pest/disease-free nurseries.
   5. Train MOA staff in proper techniques for production of planting material.
   6. Develop distribution program through farmer organizations for farm inputs and
      planting materials.
   Inputs/Costs:
   1. Cost of materials & transportation - $3,000.
   2. Cost of inputs - $6,000; technical assistance - $20,000.
   3. Publications - $20,000.
   4. Equipment - $45,000; materials - $75,000.
   5. Technical assistance - $25,000; per diem - $8,000; materials - $7,000.
   6. Training - $9,000; travel costs - $6,000; materials - $5,000.
   TOTAL: $229,000.
   Means/Source of Verification:
   1. Vouchers.
   2. Vouchers, contracts.
   3. Vouchers, contracts.
   4. Vouchers.
   5. Contracts, vouchers.
   6. Vouchers.
   Important Assumptions:
   1. Planting material can be imported.
   2. Adequate MOA staff will be assigned to research.
   3. Sufficient resources to hire consultants and editing service.
   4. Full support from MOA; allocation of land and staff.
   5. Active participation in training of MOA staff.
   6. Full-time managers working in three farmer organizations.

3. Project Profile Model #1

Title: It relates to the specific objective but is more general.

Definition of underlying problem: It is a summary of the problems found in the upper
portion of the problem tree.

Goal: It relates to the overall strategy and is the same for all projects falling within the
same strategy. It is defined considering all the objectives in the higher levels of the
objective tree.

Specific objective: It is taken from the top objective.

Expected outputs: They are obtained from the lower levels of the objective tree.

Activities: They are a logical extension of the expected outputs. They are the specific
actions which have a cost element and must be implemented to achieve the desired
outputs.

Estimate of costs: By analyzing each one of the activities it is possible to identify the
goods and equipment, finance, and manpower necessary to implement each activity.
(Project finance does not include costs of goods, materials, or personnel but only those
funds used as cash.) Manpower inputs are quantified as man-months and their value can
be estimated. Given this breakdown of needs, a preliminary rough estimate of total costs
can be made.

Expected duration: Based on an analysis of the activities and a realistic assumption of the
time required to effectively execute all of them.

Executing agency: It is usually the institution or agent most interested in or most capable
of executing the project.
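The "Estimate of costs" step (goods and equipment, cash finance, and man-months of manpower) lends itself to a simple aggregation. A minimal sketch, assuming an illustrative person-month rate and invented activity figures:

```python
# Assumed valuation rate for manpower, for illustration only.
RATE_PER_MAN_MONTH = 1_500  # US$

# Invented activity breakdown: goods & equipment, cash finance, man-months.
activities = [
    {"name": "training",  "goods": 5_000,  "finance": 9_000,  "man_months": 6},
    {"name": "nurseries", "goods": 45_000, "finance": 75_000, "man_months": 24},
]

# Preliminary rough estimate: sum goods, cash finance, and valued manpower.
total = sum(
    a["goods"] + a["finance"] + a["man_months"] * RATE_PER_MAN_MONTH
    for a in activities
)
print(f"Preliminary rough estimate: US$ {total:,}")
```

The point of the sketch is only that each activity's needs are costed separately and then summed into a first, rough total.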

4. Hypothetical Project Profile

Project Profile Sample #1

Title: Institutional development for Apple production.

Definition of underlying problem: Due to the absence of disease-resistant planting
material, poor cultural practices brought about by the lack of a governmental policy in
favor of commercial apple production, and poor institutional services, apple production in
Ethiopia is low.

Goal: Increase the domestic supply and exports of good quality fruit from Ethiopia.

Specific objective: Improve the production and marketing services available to Apple
producers in Ethiopia.

Expected outputs:
   1. Improved planting material introduced for Apple and other fruits;
   2. Research program established to maintain the quality of planting material;
   3. Technical packages developed for the production, post-harvest handling, and
      marketing of Apple and other fruits;
   4. Effective mechanisms established for the production and distribution of planting
      material;
   5. Staff in the planting material production unit trained;
   6. Effective system implemented for the distribution of farm inputs.

Activities:
   1. Import and reproduce improved varieties of planting material of selected fruits;
   2. Research proper production, post-harvest handling and marketing techniques and
      initiate validation activities;
   3. Prepare and reproduce technical packages for distribution to extension agents and
      farmers;
   4. Establish pest- and disease-free nurseries for planting material;
   5. Train Ministry of Agriculture agronomists and research staff in proper techniques
      for the production of planting material;
   6. Set up the organizational structure through farmers' organizations for the
      distribution of farm inputs and planting material.

Expected duration: This project will have a duration of three (3) years.

Estimate of costs (US$):
   Import of plant materials        $ 3,000
   Preparation of tech-packs        $ 40,000
   Establishment of nursery         $ 120,000
   Training                         $ 20,000
   Technical assistance             $ 25,000
   Miscellaneous                    $ 21,000
   Total                            $ 229,000

Implementing agency: Ministry of Agriculture

Notes:
   1. As pointed out earlier, the nine elements included here represent the minimum
      information that should be included in a project profile. Some persons prefer to
      include other elements, such as Justification and Strategy.
   2. Often, there are special conditions which might justify the execution of the project.
      These might include such things as changing market trends, positive or negative
      ecological conditions, good leadership potential, or availability of complementary
      support. Under Justification one should identify those items which emphasize the
      importance of the project.
   3. Strategy is the description of how the project implementors are going to achieve
      the expected outputs identified in the project profile. In the description of strategy
      one should answer such questions as: Who is going to do what? When? and How?
      The activities are an essential part of the strategy.

Project Profile Sample #2

Title: Improving the productivity and quality of Avocado in Ethiopia.

Definition of problem and/or justification: Avocados are presently (20x8) produced on a
very small scale due to disease problems and market uncertainty. Production is scattered
throughout the country. Farmers tend to let their plants grow naturally with little or no
use of chemicals to control pests and disease problems. Irrigation and windbreaks are
generally not good during production. Access to agricultural credit is difficult for small
farmers, and little or no facilities or equipment are available for proper postharvest
handling of fruit.

Goal: Increase the domestic supply and exports of good quality fruit from Ethiopia.

Specific objectives:
   1. Improve production/postharvest practices of selected fruit (Avocado) farmers.
   2. Facilitate access to agricultural credit and the necessary infrastructure for the
      production of good quality Avocado.

Expected outputs:
   1. A minimum of 50 fruit farmers trained in the proper methods and techniques of
      Avocado production.
   2. An effective mechanism established for small farmers to access credit from the
      Agricultural Transformation Agency.
   3. At least 10 irrigated papaya farms in operation.
   4. Adequate postharvest handling facilities and equipment operating in major
      production zones.

Activities:
   1. Training of farmers in proper production and postharvest handling practices for
      Avocado;
   2. Establishment of credit facility for fruit farmers within the national Agricultural
      Transformation Agency;
   3. Technical assistance for the design of irrigation systems and postharvest handling
      facilities and equipment.

Expected duration: This project will have a duration of five (5) years.

Estimate of costs (US$):
   Training costs                   $ 25,000
   Fruit production credit          $ 3,000,000
   Technical assistance             $ 200,000
   Miscellaneous                    $ 322,000
   Total                            $ 3,547,500

Executing agency: Agricultural Transformation Agency

5. Indicators

What is an indicator?
An indicator is a variable that is normally used as a benchmark for measuring program
or project outputs. It is “that thing” that shows that an undertaking has had the
desired impact. It is on the basis of indicators that evidence can be built on the impact
of any undertaking. Most often, indicators are quantitative in nature; in some cases,
however, they are qualitative.

Most often indicators are confused with other project elements such as objectives or
targets. Indeed, understandably so. Unlike targets or results which specify the level of
achievement, indicators do not. For example, in a project on access to safe water,
statements such as “an increase in the proportion of households reporting the
consistent use of chlorinated drinking water” or “70% of households reporting the
consistent use of chlorinated drinking water” are not indicators. Rather, an indicator
could be “The proportion of households reporting the consistent use of chlorinated
drinking water.”
Importance of Indicators
Indicators are important for any project, particularly for monitoring and evaluation
purposes. Some of the benefits of indicators are highlighted below.

1. At the initial phase of a project, indicators are important for the purposes of
defining how the intervention will be measured. Through the indicators,
managers are able to pre-determine how effectiveness will be evaluated in a
precise and clear manner.
2. During project implementation, indicators serve the purpose of helping program
managers assess project progress and highlight areas for possible improvement.
In this case, when the indicators are measured against project goals, managers
can measure progress towards goals and identify the need for corrective
measures before problems escalate.
3. At the evaluation phase, indicators provide the basis on which the evaluators will
assess the project impact. Without indicators, evaluation becomes a daunting
task.

Types of indicators
The three widely acknowledged types of indicators are process indicators, outcome
indicators and impact indicators.
1. Process indicators: are those indicators that are used to measure project
processes or activities. For example, in a Safe Water project, this could be “the
number of chlorine dispensers installed at water points” or “the number of
households that have received training on chlorination of water.”
2. Outcome Indicators: Are indicators that measure project outcomes. Outcomes
are the medium-term effects of a project. For example, in the case of a Safe Water project,
outcome indicators could be “the proportion of households using chlorinated
drinking water” or “the percentage of children suffering from diarrhea.”
3. Impact Indicators: Are indicators that measure the long term impacts of a project,
also known as the project impact. In the case of the Safe Water project, it could
be, “the prevalence of under 5 mortality.”



Factors to consider when selecting project indicators
Any appropriate M&E indicator must meet particular thresholds. They must be:
1. Precise/Well defined: Probably the most important characteristic of indicators is
that they should be precise or well defined. In other words, indicators must not be
ambiguous; otherwise, different interpretations of an indicator by different people
will yield different results.
2. Reliable: Reliability here implies that the indicator yields the same results on
repeated trials/attempts when used to measure outcomes. If an indicator doesn’t
yield consistent results, then it is not a good indicator.
3. Valid: Validity here implies that the indicator actually measures what it intends to
measure. For example, if you intend to measure impact of a project on access to
safe drinking water, it must measure exactly that and nothing else.
4. Measurable: Needless to say, an indicator must be measurable. If an indicator
cannot be measured, it must not be used as an indicator.
5. Practicable: In other cases, although an indicator can be measured, it is
impracticable to do so due to cost or process constraints. An indicator must
be able to utilize locally available resources while at the same time being cost
effective.
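One way to operationalise the five thresholds above is to screen each candidate indicator against them and keep only those that pass every criterion. The scores below are illustrative judgments, not from the module:

```python
CRITERIA = ("precise", "reliable", "valid", "measurable", "practicable")

def passes_screening(scores):
    """A candidate indicator passes only if it meets every criterion."""
    return all(scores.get(c, False) for c in CRITERIA)

candidates = {
    "proportion of households using chlorinated drinking water": {
        "precise": True, "reliable": True, "valid": True,
        "measurable": True, "practicable": True,
    },
    # Vague candidate: fails the precision and measurability thresholds.
    "community wellbeing": {
        "precise": False, "reliable": True, "valid": True,
        "measurable": False, "practicable": True,
    },
}

selected = [name for name, scores in candidates.items()
            if passes_screening(scores)]
print(selected)
```

In practice the scoring would come from team discussion rather than code, but making the criteria explicit in this way keeps the screening transparent.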

Examples of Indicators
The following are some indicators for a climate change adaptation project at community
level which focuses on farmers.

Process indicators
1. No. of farmers supplied with drought-resistant crops
2. No. of community awareness meetings conducted
3. No. of wells/dams constructed
4. No. of farmers enrolled in crop insurance
5. No. of irrigation systems constructed
Outcome indicators
1. Proportion of food secure households
2. Percentage of malnourished children under-5
Impact indicators
1. Employment rates of the region
2. Prevalence of under-5 mortality
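A monitoring team might keep such indicators in a small registry grouped by level, so reports can roll up from process to outcome to impact. The structure below is an assumption; the indicator names are taken from the example lists:

```python
from collections import defaultdict

# (level, indicator) pairs drawn from the climate change adaptation example.
indicators = [
    ("process", "No. of farmers supplied with drought resistant crops"),
    ("process", "No. of wells/dams constructed"),
    ("outcome", "Proportion of food secure households"),
    ("impact",  "Prevalence of under-5 mortality"),
]

# Group the registry by indicator level for reporting.
by_level = defaultdict(list)
for level, name in indicators:
    by_level[level].append(name)

for level in ("process", "outcome", "impact"):
    print(f"{level}: {len(by_level[level])} indicator(s)")
```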



Session 2
This session consists of three units.

Unit 3: Baselines & Data for Monitoring & Evaluation


Unit 4: Monitoring, Evaluation and Impact Assessment
Unit 5: The Project Cycle of Monitoring and Evaluation



Unit 3
Baselines and Data for Monitoring and Evaluation
Introduction

Hello dear learner! This is the third unit of the module titled ‘Baselines and Data for
Monitoring and Evaluation’. Effective monitoring and evaluation requires the collection of
baseline data for selected Indicators. These should be updated as the project progresses. The
major challenge is that the different types of activity that typically make up a project are
usually coupled with the variability, limited availability and poor quality of available data.

The process of collecting primary data on a routine basis and upgrading the quality of
existing data is often constrained by the costs of both time and finances. Data collection and
analysis require substantial financial resources, technical skills and time, all of which are
typically in short supply in many situations. There is a need to carefully manage which
indicators are measured, the type of data required to assess progress, the availability of this
data, how it will be collected, the frequency and format of monitoring activities (collection,
reporting, workshops, reviews, meetings) and who participates. This unit will look at the
ways of establishing baselines, doing surveys, sourcing and collecting data.

Learning Objectives:

At the end of this unit, you will be able to:


1. Establish baselines against which change can be measured;
2. Enumerate the key features of a good baseline;
3. Access a valuable resource of data for Monitoring and Evaluation work;
4. Identify the main sources of international business environment data;
5. Design a wide range of tools for data collection that can be used in Monitoring and
Evaluation.



3.1. Establishing baselines
Why should I do a baseline survey?
Good monitoring is the foundation upon which evaluation and impact assessment are based.
The most critical element, especially for impact assessment, is the establishment of baselines
against which change can be measured. In previous Section, we defined baseline as: a set of
factors or indicators used to describe the situation prior to a project which acts as a reference
point against which progress can be assessed or comparisons made.

For example, in a project that aims to reform the regulatory procedures for import and
export, an initial assessment of the current procedures and processes must be completed.
This is also the case for business registration, local level licensing, sectoral licensing,
inspections or tax regime reform. There may be a variety of perspectives on what the
situation is and what changes need to happen. A second measurement should occur when
results can or should be expected (e.g. after 6 months) following the implementation of the
streamlined process. This measurement is intended to determine whether the changes made
have actually resulted in improvements.

It is worth noting that many performance indicators may display a “J-curve” effect (showing
a decrease prior to an increase) where for example the number of companies registered
initially decreases (because of the weeding out of “dead” companies) or financial
performance deteriorates before improving. Careful tracking of indicators from the early
stages of the reform intervention will allow the capture of the real baseline data. Project
teams will therefore need to ensure that performance is measured from the very inception of
the reform initiative to guarantee that performance targets are met. In order to determine
whether a reform process has been successful, it is necessary to conduct an evaluation by
essentially taking a ‘before’ and ‘after’ snapshot of performance. This aspect of evaluation
will be discussed in more detail in the forthcoming section.
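The ‘before’ and ‘after’ snapshot, and the J-curve effect described above, can be illustrated with a toy time series (all values invented for illustration):

```python
# Quarterly number of registered companies around a reform (invented data).
# The J-curve: a dip just after the reform as "dead" companies are weeded
# out, followed by growth above the pre-reform level.
series = [1000, 950, 920, 1050, 1200]

baseline = series[0]    # snapshot at the inception of the reform
follow_up = series[-1]  # snapshot once results can be expected

change_pct = 100.0 * (follow_up - baseline) / baseline
dipped = min(series[1:]) < baseline  # did the indicator fall first?
print(f"Change vs baseline: {change_pct:+.1f}% (initial dip: {dipped})")
```

Measuring the follow-up against the bottom of the dip rather than the true baseline would overstate the reform's effect, which is why the text stresses tracking indicators from the very inception of the intervention.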

Establishing the current or prevailing situation should be part of developing a project
proposal or a project design after approval. Establishing baselines is in fact a typical activity
undertaken as part of project identification where analysis of the problem is undertaken.
Typically in a particular project, it may start with a period assigned to ‘diagnostics’ which
entails detailed analysis (both qualitative and quantitative) of the nature and magnitude of
the problem. This is commonly thought of as part of the implementation activities and is
often funded as a separate activity rather than part of M&E. However, project diagnostics are
also an essential part of the M&E process and should be integrated into the M&E framework
as baselines. The box below looks at the need for a robust baseline.
Why is a robust baseline essential for M&E?
 Quantitative benchmarking of indicators
 Data on hard facts and perceptions
 A framework for monitoring program activities
 An analysis of structural and performance data of sampled enterprises
 A basis for monitoring implemented policy and regulatory reforms of partner
institutions
 Analysis and ranking of actual and perceived business constraints
 A foundation for an impact monitoring system for partners.

What are the key features of a good baseline?


It is important to get baseline data in place as soon as possible, although sometimes
indicators can only be agreed after some initial stakeholder consultation work has been
concluded, as discussed in the previous section. This can delay getting a baseline established. The
scope of coverage of the baselines can be scaled up or down depending on what data is
available and the budget allocation. As previously noted, the baseline may be closely related
to diagnostic activities within the project. For example, a mapping of the regulatory process
may be undertaken up-front to determine what reforms should be implemented, or a time and
cost assessment may be conducted for a particular regulatory procedure, such as business
registration. Current practice is discussed later in this section under ‘Regulatory baselines’.

As discussed in the previous section, it is vital to include data on both quantitative and
qualitative indicators, aiming to capture the starting points on facts, processes and attitudes.
section, we explore the use of a range of primary data collection methods including focus
groups, surveys and one-to-one interviews. It is recognized that comprehensive enterprise
surveys (discussed later in this section) are expensive. If the budget is constrained, a series
of well-structured focus groups with business representatives acting as key informants for
the private sector can be used to provide an adequate baseline if the information is recorded
in a suitable manner. To maximize the value of a baseline, it could also be used to engage
stakeholders in the reform project. Involvement of the private sector and local businesses
and dissemination of baseline results can encourage buy-in to the project.

What type of baselines do I need?


Methodologies and practice for establishing baselines are well established for projects which
focus on reforming business regulations, and there is clear good practice for gathering
baseline data which can be adapted according to the nature, scale and context of the project.

Regulatory baselines
A regulatory baseline, or regulatory mapping exercise, collects data on the current regulatory
system (which could be for registration, licensing, inspections, taxation, or any other
business regulation). This type of baseline is similar to what is captured in the Doing Business
surveys. However, it will typically not capture the level of detail required by a program team,
especially if the program is focused at the sub-national level, at sector or industry level, or
from the perspective of MSMEs. A thorough regulatory baseline should therefore map out
the regulatory procedure in detail. This will provide the starting point for a rigorous ‘Before
and After’ assessment and is therefore a crucial part of M&E.
Key components of the regulatory baseline
 A legal assessment of official regulations and procedure to compile an inventory of
current relevant laws and regulations.
 A detailed integrated analysis or mapping of the current official framework and
processes for regulatory procedures, including the official cost of the procedures and
the number of steps, based on information and observation from the implementing
regulatory agencies.

Regulatory process mappings can capture the process for different procedures or for the
same procedure but different types or sizes of firm. This task may be done within the
program team or with specialized assistance; for example, a combination of international and local
legal experts could be hired. The regulatory baseline is crucial for understanding the nature
of the regulatory process and as noted, is an important aspect of project diagnostics. It is also
a useful tool for defining the nature of the project required and the setting of targets.

Performance baselines
In addition to designating a baseline for the regulatory procedures, it is also important to
gather baseline data on current business regulation performance. For typical regulatory
reform interventions, this could include performance indicators such as (but not limited to):
the number and rate of businesses registered; the number and rate of licenses or permits
issued; the number of inspections conducted during a designated time period; the rate of
compliance (with any annual return requirements); and various ratios of the number of
tax-registered firms to the amount of tax collected.

The data records will need to be comparable given the range and diversity of business
regulations and their application. In the case of business licenses for example, firms of
different sizes and engaging in different types of business are likely to apply for different
numbers and types of licenses which may have different procedures and requirements. It will
be important to clarify the number of business activities subject to licensing in a particular
country. Following this, it may be appropriate to compile an aggregate performance indicator
which works across these different categories: i.e. the percentage of businesses whose
license applications are not processed within the legally mandated maximum time periods for
each license.
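Such an aggregate indicator could be computed from per-application records along the following lines (a minimal sketch; the license types, legal limits and figures are hypothetical):

```python
# Hypothetical per-application records: (license_type, processing_days)
applications = [
    ("trade", 12), ("trade", 30), ("health", 45),
    ("health", 20), ("construction", 95), ("trade", 8),
]

# Assumed legally mandated maximum processing time per license type (days)
legal_max_days = {"trade": 15, "health": 30, "construction": 60}

# Share of applications NOT processed within the mandated period
late = sum(1 for lic, days in applications if days > legal_max_days[lic])
pct_late = 100 * late / len(applications)
print(f"{pct_late:.1f}% of applications exceeded the mandated time")
```

The same calculation works across license categories because each application is judged against its own legal limit before being aggregated.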

It is worth noting that the ease of compiling business registration data, for example, will be
highly dependent on the record keeping of the regulatory agencies. If there is limited
computerization, this may require trawling through paper-based registries. If local records
are inadequate, some simple low-cost surveys of local firms could be used to calculate proxy
indicators. This task could be carried out by the program team, a local consultancy or
business graduates could be hired and supervised by international survey experts.

In addition to the direct performance indicator baseline discussed above, it is also useful to
establish a baseline for the operating efficiency of regulatory institutions. Examples include
operating costs (which may be broken down into staff and equipment), fee income,
investment in upgrading and staffing levels, and ratios linking them.

Enterprise baselines
While the regulatory baseline and DB indicators capture the legal structure of business
regulations, they do not capture the perception and experience of businesses subject to
regulation. These are customer-satisfaction indicators. An enterprise baseline is
complementary to a regulatory baseline and will provide first-hand accounts of the
challenges facing entrepreneurs in firms of different sizes and from different sectors which
may not be captured in existing national studies. Data on the experience of processes and
also perceptions can be collected directly from a sample of firms. This is typically referred to
as a Business Climate Survey (BCS) or enterprise survey, and is often used to specifically
capture the perceptions and experience of MSMEs.

An enterprise survey will attempt to measure the costs of bureaucracy in terms of
management use of time and cost, corruption issues (money spent on bribes, informal
payments and facilitation fees), and the level of bureaucracy (cooperativeness of public
servants, degree of satisfaction with public sector services).

Appropriate surveys are costly and logistically demanding, but economizing on them could
be a false economy. A sound business climate survey can be a useful, if not critical,
instrument for strengthening the business reform agenda. The higher cost can be justified by
the multiple uses of the survey, i.e., beyond serving as a baseline for M&E purposes.
Benefits of an enterprise survey:
 Provides the official cost of the procedures and the number of steps involved in the
process.
 Monitors not only progress of the project with regard to its impact on the business
climates, but can be made available for the public and the use of other development
partners;
 Produces facts for a private-public dialogue and media briefings and feeds them into
the political and civic process;
 Helps prioritize facts through empirical cross-checks which can be used for project
steering and political discussions;
 Builds visibility for the donor;
 Builds capacity for a new local team;
 Motivates government and stakeholders to engage with the project.

Significant planning is required to design, manage and undertake an enterprise survey. To
update the enterprise baseline, it will be necessary to collect interim feedback from
enterprises on their knowledge and understanding of new or revised regulatory
requirements or procedures, their satisfaction with the reforms, and whether there is still
corruption in the system for regulatory compliance (i.e., through payment of unofficial
transaction costs). A repeat survey should match the conditions of the original baseline
survey to ensure comparability. However, if resources are limited, this data can be collected
using a small-scale ‘satisfaction’ survey of enterprises that completed new procedures in the
last 12 months, a focus group or one-on-one interviews with a sample of firms who have gone
through the new regulatory procedure.

Is it possible to reconstruct a baseline?


The absence of a baseline is a common problem, and evaluators of programs that have been
running for some time may need to reconstruct a baseline. One way of doing this is by
reviewing and analyzing historical data and secondary data.

An alternative method is using a technique called ‘recall’ through qualitative research with
stakeholders. For a business regulatory reform program for example, a sample of businesses
and local authorities could be asked to recall their experiences of the regulatory procedure
and associated costs. Recall is potentially valuable but often an unreliable way to estimate
conditions prior to the start of a program. However, research evidence suggests that while
estimates from recall are frequently biased, the direction and sometimes the magnitude of
the bias is often predictable so that useable estimates can be obtained. The utility of recall
can often be enhanced if two or more independent estimates can be triangulated.
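Triangulating two independent recall estimates might look like the following sketch (the figures and the assumed bias directions are purely illustrative):

```python
# Recalled pre-reform processing times (days) from two independent sources.
firm_recall = [120, 150, 135, 160, 140]      # firms (assumed to overstate)
official_recall = [110, 125, 130, 120, 120]  # registry staff (assumed to understate)

firm_avg = sum(firm_recall) / len(firm_recall)
official_avg = sum(official_recall) / len(official_recall)

# If the bias directions are predictable, the two averages bracket the
# plausible reconstructed baseline value.
low, high = sorted([official_avg, firm_avg])
print(f"Reconstructed baseline: {low:.0f}-{high:.0f} days "
      f"(midpoint {(low + high) / 2:.0f})")
```

Reporting the bracketed range, rather than a single point, makes the uncertainty of a reconstructed baseline explicit.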

Activity 1
Answer the following questions.
1. Compare and contrast project evaluations and outcome evaluations.

3.2. Accessing and using secondary data


What is secondary data and should I use it?
Secondary data is a valuable resource for M&E work, especially for baselines and background
information. It is usually available at no cost. It is also useful if a program has already started
and historical data is required, for example for baselines. Given limited resources, it is often
counterproductive to overwhelm government agencies with duplicate data collection efforts
for indicators, especially where established international sources are available and can be
readily accessed for both inter-temporal and international comparisons. On the other hand,
care needs to be exercised where national sources are the primary providers of data, for
example for investment data, business registration, poverty estimates and the national
accounts. Attention needs to be given to ensuring that adequate focus and resources (both
local and international) are devoted to developing local capacity for generating good-quality
data.

There is also an issue of neutrality. If the implementing government is also responsible for
the provision of data, there may be a strong case for relying as far as possible on data from
credible international sources which are independent from government. This reference or
comparison will enhance the neutrality and credibility of the assessment. An added
dimension is that a country’s efforts to improve these indicators will send the right signals to
the outside world.

What are the main sources of international business environment data?


There are several sources of secondary information that have the potential to provide good
background and/or baseline information for M&E work. Some of these are available at an
international level and others are specific to a particular context. A high ranking on the ease
of doing business index means the regulatory environment is conducive to the operation of
business.

3.3. Collecting and using primary data
What is primary data?
Primary data on project activities and the stakeholders of these programs at the country
level often either does not exist, is limited in scope, out of date or not easily accessible. In
many countries there are limited records on businesses (their existence, profile and revenue),
especially for small and micro businesses. In addition, basic data on income levels and the
experiences of business environment issues such as business registration, formalization and
regulatory compliance is typically unavailable.

The local capacity for collecting, storing and analyzing data may also be limited. Many BEE
reform programs are therefore tasked with collecting this data directly, and increasingly,
working with national organizations to develop this primary data.

What tools are available for data collection?


There is a wide range of tools or instruments that can be used in M&E. Typically, more than
one way of collecting data will be used. In some circumstances, especially when looking at
qualitative data, it is useful to use several techniques to help verify the robustness
of the findings from each. This cross-checking is called triangulation.

The key data collection tools for M&E are listed in Table 3.1 with the main features of each
tool listed alongside. This list is not comprehensive, nor is it intended to be. Some of these
tools and approaches are complementary; some are substitutes. Some have broad
applicability, while others are quite narrow in their uses. The choice of which is appropriate
for any given context will depend on a range of considerations. These include the uses for
which M&E is intended, the main stakeholders who have an interest in the M&E findings, the
speed with which the information is needed, and the cost. Different tools and instruments
have strengths and weaknesses in collecting different types of data and in their application
with different types of stakeholders, indicators and target groups.
Table 3.1: Key Tools for Data Collection

Sample Surveys
Description: Collect a range of data through questionnaires with a fixed format, delivered by post, electronically, over the telephone or through face-to-face interviews. Can be used with a range of subjects such as households (socio-economic survey), a sector (farm management survey) or an activity (enterprise survey).
Example: Samples of businesses are surveyed for data on the time and cost of the business licensing process. Quantitative data is produced on average time and cost, and on perceptions. The enterprise survey is a core example.

Group interviews/Focus Groups
Description: Collect largely qualitative data through structured discussions amongst small groups of pre-selected participants. Usually these groups comprise no more than 12 people and the sessions last up to 3 hours. The discussions are managed by an appointed facilitator who is not a research participant.
Example: A sample of businesses participate in a focus group and provide qualitative feedback on the business licensing process.

Individual interviews
Description: Collect a range of data through face-to-face discussions with individual stakeholders, often called ‘informants’. These can be "open" interviews or "structured" interviews with questionnaires as part of a sample survey. They can vary in length and be held over a number of sessions. Stakeholders viewed as critical to the success of a project or program are often selected for interview; these are called ‘key informant’ interviews.
Example: A business association representative or a business registry official provides qualitative feedback on the business licensing process.

Case Studies
Description: Collection of data, usually through face-to-face interviews, with a particular individual, business, group, location or community on more than one occasion and over a period of time. The questioning involves open-ended and closed questions and the preparation of ‘histories’.
Example: A sample of businesses provide feedback via yearly interviews on the business licensing process and reflect on changes in their experiences.

Rapid Appraisal
Description: A range of tools and techniques developed originally as rapid rural appraisal (RRA) in order to produce a quick appraisal in the field, as the name suggests. It involves the use of focus groups, semi-structured interviews with key informants, case studies, participant observation and secondary sources. RRA techniques can be used to get views from a particular constituency of businesses about a reform measure.
Example: Program staff attend a business licensing office where applications are being processed and talk directly to businesses and staff about the process.

Participant Observation
Description: Data is collected through observation, where the researcher takes part in an event or attends a place or situation and assesses what is happening through what they see. This may involve some questioning for clarification. Observations may take place over a period of time through a number of visits.
Example: Program staff review records from a business licensing office to record the elapsed time and cost in a sample of licensing applications.

Tracer studies
Description: A range of data collection methods are used to collect different types of data on an individual, group or community to determine the effects of an aid intervention over a longer period.
Example: A sample of businesses is tracked over time using a combination of the methods cited above.

Activity 2
Answer the following questions.
1. Data collection is typically one of the most expensive aspects of the M&E system. How
can you lessen data collection costs?

Summary

 Preparing baselines for a project is a significant task that should be started as early as
possible;
 A baseline is an investment in good quality M&E and potentially the sustainability of a
project;
 A good baseline maximizes the use of secondary data in the interest of cost, neutrality
and the potential for comparison;
 A good baseline recognizes that the challenges of collecting primary data can be better
managed if there is clarity about what indicators need to be measured and how this will
improve the quality of M&E and IA;
 Good baselines can be put to multiple use – for engaging stakeholders, communicating
with a variety of audiences and building donor co-operation and/or harmonization;
 There are multiple sources of data, each with their own strengths and limitations;
online sources are likely to be more current.

Self Assessment Question-3

Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.

Case Analysis

The M&E plan summarizes data collection methods and tools, but these still need to be
prepared and ready for use. Sometimes methods/tools will need to be newly developed but,
more often, they can be adapted from elsewhere. Illustrate the different data collection
methods for M&E from your experience.

Answer Key to Activities and Self Assessment Questions

Activities

Activity 1

1. Differences between project and outcome evaluations


Differences between project and outcome evaluations
Focus
Project evaluation: Generally speaking, inputs, activities and outputs (if and how project outputs were delivered within a sector or geographic area, and if direct results occurred and can be attributed to the project).
Outcome evaluation: Outcomes (whether, why and how the outcome has been achieved, and the contribution to a change in a given development situation).

Scope
Project evaluation: Specific to project objectives, inputs, outputs and activities. It also considers relevance and continued linkage with the outcome.
Outcome evaluation: Broad, encompassing outcomes and the extent to which programmes, projects, soft assistance, partners’ initiatives and synergies among partners contributed to their achievement.

Purpose
Project evaluation: Project-based; to improve implementation, to re-direct future projects in the same area, or to allow for up-scaling of the project.
Outcome evaluation: To enhance development effectiveness, to assist decision making and policy making, to re-direct future assistance, and to systematize innovative approaches to sustainable human development.

Activity 2

1. Means of Minimizing Data Collection Costs


Data collection is typically one of the most expensive aspects of the M&E system. One
of the best ways to lessen data collection costs is to reduce the amount of data
collected. The following questions can help simplify data collection and reduce costs:

 Is the information necessary and sufficient? Collect only what is necessary for
project/programme management and evaluation. Limit information needs to the
stated objectives, indicators and assumptions in the log frame.
 Are there reliable secondary sources of data? Secondary data can save
considerable time and costs – as long as it is reliable.
 Is the sample size adequate but not excessive? Determine the sample size that is
necessary to estimate or detect change. Consider using stratified and cluster
samples.
 Can the data collection instruments be simplified? Eliminate unnecessary
questions from questionnaires and checklists. In addition to saving time and
cost, this has the added benefit of reducing survey fatigue among respondents.
 Is it possible to use competent local people for the collection of survey data?
This can include university students, health workers, teachers, government
officials and community workers. There may be associated training costs, but
considerable savings can be made compared with hiring a team of external data
collectors, and there is the advantage that local helpers will be familiar with the population,
language, etc.
 Are there alternative, cost-saving methods? Sometimes targeted qualitative
approaches (e.g. participatory rapid appraisal – PRA) can reduce the costs of the
data collection, data management and statistical analysis required by a survey –
when such statistical accuracy is not necessary. Self-administered questionnaires
can also reduce costs.
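The ‘adequate but not excessive’ sample size mentioned above can be estimated with the standard formula for a proportion, n = z²·p(1-p)/e². A minimal sketch follows; the 95% confidence level and 5% margin of error are illustrative defaults, not requirements:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96, population=None):
    """Sample size needed to estimate a proportion p to within +/- margin
    at the confidence level implied by z (1.96 ~ 95%)."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:  # finite population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                 # 385 for a large population
print(sample_size(population=1000))  # 278 once the population is finite
```

Using p = 0.5 gives the most conservative (largest) sample; the finite population correction shows why surveying a small, well-defined population needs far fewer respondents.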

Self Assessment Question-3

1. Key data collection methods and tools

The following summarizes key data collection methods and tools used in monitoring and
evaluation (M&E). This list is not complete, as tools and techniques are continually
emerging and evolving in the M&E field.
Case study. A detailed description of individuals, communities, organizations, events,
programmes, time periods or a story. These studies are particularly useful in evaluating
complex situations and exploring qualitative impact. On its own, a case study only helps to
illustrate findings and identify comparisons (commonalities); only when combined (triangulated)
with other case studies or methods can one draw conclusions about key principles.
Checklist. A list of items used for validating or inspecting whether procedures/steps have
been followed, or the presence of examined behaviors. Checklists allow for systematic
review that can be useful in setting benchmark standards and establishing periodic
measures of improvement.
Community book. A community-maintained document of a project belonging to a
community. It can include written records, pictures, drawings, songs or whatever
community members feel is appropriate. Where communities have low literacy rates, a
memory team is identified whose responsibility it is to relate the written record to the
rest of the community in keeping with their oral traditions.
Community interviews/meeting. A form of public meeting open to all community
members. Interaction is between the participants and the interviewer, who presides
over the meeting and asks questions following a prepared interview guide.
Direct observation. A record of what observers see and hear at a specified site, using a
detailed observation form. Observation may be of physical surroundings, activities or
processes. Observation is a good technique for collecting data on behavioural patterns
and physical conditions. An observation guide is often used to reliably look for consistent
criteria, behaviors, or patterns.
Document review. A review of documents (secondary data) can provide cost-effective
and timely baseline information and a historical perspective of the project/programme. It
includes written documentation (e.g. project records and reports, administrative
databases, training materials, correspondence, legislation and policy documents) as well
as videos, electronic data or photos.
Focus group discussion. Focused discussion with a small group (usually eight to 12
people) of participants to record attitudes, perceptions and beliefs relevant to the issues
being examined. A moderator introduces the topic and uses a prepared interview guide
to lead the discussion and extract conversation, opinions and reactions.

Interviews. An open-ended (semi-structured) interview is a technique for questioning
that allows the interviewer to probe and pursue topics of interest in depth (rather than
just “yes/no” questions). A closed-ended (structured) interview systematically follows
carefully organized questions (prepared in advance in an interviewer’s guide) that only
allow a limited range of answers, such as “yes/no” or expressed by a rating/number on a
scale. Replies can easily be numerically coded for statistical analysis.
Key informant interview. An interview with a person having special information about a
particular topic. These interviews are generally conducted in an open-ended or semi-
structured fashion.
Laboratory testing. Precise measurement of specific objective phenomenon, e.g. infant
weight or water quality test.
Mini-survey. Data collected from interviews with 25 to 50 individuals, usually selected
using non-probability sampling techniques. Structured questionnaires with a limited
number of closed-ended questions are used to generate quantitative data that can be
collected and analysed quickly.
Most significant change (MSC). A participatory monitoring technique based on stories
about important or significant changes, rather than indicators. They give a rich picture of
the impact of development work and provide the basis for dialogue over key objectives
and the value of development programmes (Davies & Dart 2005).
Participant observation. A technique first used by anthropologists (those who study
humankind); it requires the researcher to spend considerable time (days) with the group
being studied and to interact with them as a participant in their community. This method
gathers insights that might otherwise be overlooked, but is time-consuming.
Participatory rapid (or rural) appraisal (PRA). This uses community engagement
techniques to understand community views on a particular issue. It is usually done
quickly and intensively – over a two- to three-week period. Methods include interviews,
focus groups and community mapping.
Questionnaire. A data collection instrument containing a set of questions organized in a
systematic way, as well as a set of instructions for the data collector/interviewer about
how to ask the questions (typically used in a survey).
Rapid appraisal (or assessment). A quick, cost-effective technique to gather data
systematically for decision-making, using quantitative and qualitative methods, such as
site visits, observations and sample surveys. This technique shares many of the
characteristics of participatory appraisal (such as triangulation and multidisciplinary
teams) and recognizes that indigenous knowledge is a critical consideration for decision-
making.
Statistical data review. A review of population censuses, research studies and other
sources of statistical data.

Story. An account or recital of an event or a series of events. A success story illustrates
impact by detailing an individual’s positive experiences in his or her own words. A
learning story focuses on the lessons learned through an individual’s positive and
negative experiences (if any) with a project/programme.
Survey: Systematic collection of information from a defined population, usually by
means of interviews or questionnaires administered to a sample of units in the
population (e.g. person, beneficiaries and adults). An enumerated survey is one in which
the survey is administered by someone trained (a data collector/enumerator) to record
responses from respondents. A self-administered survey is a written survey completed by
the respondent, either in a group setting or in a separate location. Respondents must be
literate.
Visual techniques. Participants develop maps, diagrams, calendars, timelines and other
visual displays to examine the study topics. Participants can be prompted to construct
visual responses to questions posed by the interviewers; e.g. by constructing a map of
their local area. This technique is especially effective where verbal methods can be
problematic due to low-literate or mixed-language target populations or in situations
where the desired information is not easily expressed in either words or numbers.

Unit 4
Monitoring, Evaluation and Impact Assessment

Introduction
Hello dear learner! This is the fourth unit of the module titled ‘Monitoring, Evaluation and
Impact Assessment’. Monitoring and evaluation are complementary and yet distinct aspects
of assessing the result of a project. The function of monitoring is largely descriptive and its
role is to provide data and evidence that underpin any evaluative judgments. As noted
earlier, monitoring is ongoing, providing information on where a policy, program or project is
at any given time (and over time) relative to its respective targets and outcomes. The
function and role of evaluation is to build upon monitoring data, bring together additional
information and examine whether or not the project results have been achieved. This unit is
about evaluation – the what, the who, the when and the how questions. It looks at whether
projects have achieved their outcomes (the project ‘purpose’ in logic model terms) and what
has been their impact (meeting the project ‘goal’ in logic model terms). It addresses how to
implement good evaluation practices with the use of particular analytical techniques.

Learning Objectives:

At the end of this unit, you will be able to:

1. Determine a planning model for an evaluation;
2. Figure out how to ensure the practice of good-quality evaluation;
3. Identify alternative evaluation techniques;
4. Describe the objectives and approaches of assessing impact;
5. Specify key characteristics of different evaluation approaches for impact;
6. Set out forthcoming developments in Monitoring and Evaluation.
4.1. Planning an evaluation
What are the key questions for evaluation?
According to the Development Assistance Committee (DAC) of the OECD, “Evaluation is the
systematic and objective assessment of an ongoing or completed project, program or policy,
its design, implementation and results. The aim is to determine the relevance and fulfillment
of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation
should provide information that is credible and useful, enabling the incorporation of lessons
learned into the decision making process of both recipients and development partners.” A
comprehensive evaluation therefore typically includes analyzing all five of these criteria. The
definitions of these five criteria, together with the type of questions asked for each, are
illustrated in Table 4.1.

Table 4.1: Evaluation Criteria, Definitions and Core Questions


Relevance
Definition: The extent to which the aid activity and strategy is responsive to the priorities
and policies of the target group, recipient and donor.
Core questions:
 Does the intervention address needs?
 Is it consistent with the policies and priorities of major stakeholders?
 Is it compatible with other efforts? Does it complement, duplicate or compete?

Effectiveness
Definition: The extent to which an aid activity attains its objectives and the degree to which
desired outcomes are achieved through the products and services provided.
Core questions:
 Are the desired objectives being achieved at outcome and impact/goal level?
 Does it add value to what others are doing?
 To what extent are partners maximizing their comparative advantage?

Efficiency
Definition: The operational and administrative efficiency of projects and services provided.
Core questions:
 Are we using the available resources wisely and well?
 What is the efficiency of communication mechanisms, knowledge management and
coordination with other agencies?
 How can we measure outputs – both qualitative and quantitative – in relation to inputs?

Sustainability
Definition: Measuring whether the benefits of an activity are likely to continue after donor
funding has been withdrawn.
Core questions:
 Will the outcomes and impacts be sustained after external support has ended?
 Will activities, outputs, structures and processes established be sustained?

Impact
Definition: The positive and negative changes produced by a development intervention,
directly or indirectly, intended or unintended.
Core questions:
 What changes, positive or negative, have occurred?
 Are these changes attributable to the initiative?

Evaluations can be categorized in several different ways according to when they take place,
where they focus and hence what processes they use. The logic model allows for a
systematic and diagnostic review of business enabling environment (BEE) interventions and
links M&E indicators and processes to stages of the program cycle. The core evaluation
criteria can also be linked to the logical framework (LF), as shown in Figure 4.1. The
intention is to assess:
 The extent of compliance and appropriateness of the development partners’ Project
objectives and strategy with its overall goals and mandate;
 The relevance of the development partners’ strategic approach and planned operations
for the planned Project interventions, the management of projects and programs being
delivered;
 The effectiveness of the project activities or the services or technical assistance (TA)
provided, and
 The sustainability of project or investment climate improvements achieved via the
services or TA provided.
Fig 4.1: Core Evaluation within the LF and Project cycle

When is evaluation undertaken?

Usually project evaluation is undertaken in line with donor reporting requirements and
typically takes place at designated stages in the program cycle (often termed mid-term or
project progress review), or immediately after the program intervention is completed (post-
program evaluation or completion reporting). Covering all of the core criteria in all
evaluations may be an ideal but is not always practical. The evaluation may be conducted at
too early a stage to assess impact or sustainability in the longer term.

However, in any evaluation it should always be possible to assess some degree of relevance,
effectiveness and efficiency as minimum criteria. The precise protocols and practices of
when, what and who is involved in undertaking evaluation and in particular assessing the
impact of interventions, varies between development partners and organizations. For the
purpose of this material, the approach for the planning and practice of evaluation is
separated into two distinct but interrelated types of activity differentiated by the timing,
focus and the methodologies used. They are described as review evaluations and assessing
impact as illustrated in Table 4.2.

Table 4.2 Types of evaluation


Review evaluation:
 Focuses on outcomes in terms of effectiveness, efficiency and relevance.
 Examines whether the activities have delivered the planned outputs and whether these
outputs have in turn led to outcomes that are contributing to the purpose of the project.

Assessing impact:
 Is typically carried out towards or at the end of projects, or after their completion.
 Is usually carried out by those ‘outside’ the project in an effort to enhance objective
accountability, but may also involve insiders in order to enhance lesson learning.
 Impact evaluations focus on relevance, effectiveness, efficiency and sustainability in
relation to project goals.
 Impact evaluations can also be carried out to assess and synthesize the outcomes of
several initiatives together on a thematic, sector or program basis to examine their
overall impact.

For example, a BEE reform intervention will typically provide various elements of technical
assistance to the government in order to achieve specific outcomes (e.g., new enacted
legislation leading to an improved investment climate), which in turn would lead to impact
(i.e., investment flows, economic growth and employment, and poverty alleviation). The
review and impact evaluations looked at different aspects of the ‘results achieved’ as shown
in Table 4.3.

Table 4.3: Review and impact evaluations


Evaluation | Criteria | Measuring
Review | Program outcomes | Have the policy/regulatory changes been implemented and
sustained, and has the investment climate improved?
Impact | Program goals | Has the better investment climate increased domestic and foreign
investment, leading to growth and poverty alleviation?
How do we ensure the practice of good quality evaluation?
In general, a good evaluation should aim to meet the generic quality standards as outlined in
the following Table which relate to what is involved in evaluation, how it is undertaken, when
and by whom. These quality requirements help to ensure that effective and objective
assessment practices are undertaken.

Table 4.4: Quality Standards for Evaluation


Standard Requirement
Utility The evaluation meets the information needs of the intended users and
therefore is relevant and timely
Accuracy The evaluation uses valid, reliable and relevant information
Independence The evaluation is impartial, objective, and independent for the process
concerned with policy-making, and the delivery and management of
development assistance
Credibility The evaluation is undertaken by evaluators with appropriate skills and
experience, is transparent and inclusive
Propriety The evaluation is conducted legally, ethically and with due regard for the
welfare of those involved in the evaluation, as well as those affected by its
results
Cost beneficial The costs of evaluation are proportional to the budget committed to the
development intervention being evaluated and remain within the budgetary
limits. Resources are used with care

Who should undertake evaluations?


To support these quality criteria, it is important that evaluation activity, especially impact
assessment, should be undertaken by those independent of the project or at least those not
immediately involved in its implementation. Program officers should be involved in designing
the evaluation as well as contributing inputs to the evaluation exercise alongside other
stakeholders, but not undertaking the assessment.

However, evaluations (especially end of project and post-program impact assessment) are
activities that are typically undertaken by independent consultants. They bring specialist
technical expertise and a sense of objectivity to the evaluation, which are two important
criteria for meeting the quality standards noted above. The consultants may come from the
private sector or from organizations such as universities, research institutes, etc. They may be
locally based within the country or come from internationally operating organizations.

The choice of who undertakes the evaluation of a project and how they are selected and
commissioned will depend upon the nature and scale of the project being assessed. The
balance and roles of those internal and external to the project, and the practicalities of
planning for commissioning and managing evaluation consultants, are discussed further in a
forthcoming section.

Will who does the evaluation affect diversity and/or inclusion issues?
The previous section noted the importance of ensuring that any evaluation work makes
provision for capturing issues of diversity and tries to be as inclusive as possible. Explicit
steps need to be taken to ensure that this happens throughout the process of designing and
implementing the evaluation approach. Consideration should be given to which questions
are asked, which indicators are selected, which target groups are sampled, what research
tools are used, who undertakes the research, and when and where the research takes place.
These decisions will all influence the degree to which the diversity of stakeholders will be
captured and the level of inclusiveness achieved.

The Integrated Impact Assessment Approach (Table 4.5) is based on the logic model. It does
not present a new methodology or set of indicators but rather emphasizes three elements of
impact assessment.
 First, it recommends that impact assessment is brought to the fore in any
project/program planning process and that discussions involve consultation with a wide
group of stakeholders.
 Secondly, it recommends that any ‘cause and effect relationships’ that are assumed to
underpin the proposed project intervention are examined and checked with key
stakeholders as part of an ex ante proposal. It is at this stage that project designers need
to consider impact for a diverse range of groups and in particular how project and
interventions are likely to impact on the disadvantaged groups. The use of analytical
tools such as causal chain analysis and risk assessments should be used alongside
participatory evaluation approaches with different stakeholders.
 Thirdly, aligned to the above points, the Integrated Impact Assessment Approach (IIAA)
recommends the adoption of a broader ‘lens’ of factors against which impact should be
measured. In particular, it recommends that consideration be given to social equality and
environmental issues alongside the more traditional economic and investment indicators
that are held as the primary, if not the only, success indicators for most BEE reforms.

Table 4.5: The Integrated Impact Assessment Approach


Initial screening:
 Review of current BEE and economic context
 Identification of areas to be reformed
 Definition of strategy and focus for reform

Program design – ex ante appraisal:
Baseline assessment:
 Review of legislative, policy and regulatory environment
 Review of country context and condition
 Consultation procedures and stakeholder analysis
 Risk assessment
Program design:
 Determination of policy options that address constraints on the private sector and
project
 Selection of impact indicators – social, economic, institutional, environmental
 Conduct causal chain analysis, assess impact significance
 Develop scenarios

Program implementation:
Establish monitoring system and ongoing monitoring:
 Focus groups and panels
 Point-of-delivery surveys, score cards
 Phone surveys
 Mid-term assessment

Program review – ex post evaluation:
Output-to-purpose review or purpose-to-goal review:
 Comparison of actual impacts and baseline
 Evaluation of implementation and performance
 Determine quality of the ex ante assessment
These recommendations and the framework set the agenda for a shift in approach within
M&E, but they do not prescribe or include a set of core indicators and practices for
implementation.

4.2. Evaluation techniques


What is the starting point?
Undertaking evaluation involves a distinct set of actions requiring specific methods and
techniques. DFID, in its guidance to officers on project and program evaluation, presents
these as an analytical process of evaluation, as shown in Figure 4.2.

Fig 4.2: The evaluation process


The Program officer should:
1. Take into consideration the broad criteria for the project.
2. Combine these with the key indicators identified for the project.
3. Identify clear questions to be addressed by the evaluation.
4. Make these evaluation questions operational by turning them into evaluation
instruments for data collection.
5. Identify the sources of different data to be used in the evaluation; and
6. Agree the ‘success rating criteria’ that will be employed in analyzing the findings from
the data collection and the basis on which conclusions and recommendations are
made.
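These six steps can be sketched as a simple data structure. The criteria, indicators and instruments below are purely illustrative examples, not prescribed by DFID or any specific agency toolkit:

```python
# Illustrative sketch of steps 1-5: link evaluation criteria to indicators,
# key questions and data-collection instruments. All entries are hypothetical.
evaluation_plan = {
    "effectiveness": {
        "indicator": "days needed to register a business",
        "question": "Has registration time fallen against the baseline?",
        "instruments": ["enterprise survey", "registry record analysis"],
    },
    "efficiency": {
        "indicator": "cost per output delivered",
        "question": "Were outputs delivered at reasonable cost?",
        "instruments": ["project financial records", "enterprise survey"],
    },
}

def instruments_needed(plan):
    """Step 5: list the distinct data sources the plan implies."""
    found = []
    for entry in plan.values():
        for tool in entry["instruments"]:
            if tool not in found:
                found.append(tool)
    return found
```

A program officer could then check `instruments_needed(evaluation_plan)` against the data actually available before agreeing success rating criteria (step 6).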

Which questions should an evaluation prioritize?


An evaluation cannot answer every question that various stakeholders want answered,
without becoming burdensome and too time-consuming for those being evaluated and too
expensive for those undertaking it. It is important to focus on a set of key questions
regarding the output, outcome and impact indicators identified in the Log Frame or plan.
These should be set against the core evaluation criteria outlined above.

What data and information are needed to answer these questions?


Typically evaluation involves using and collecting qualitative and quantitative data sourced
from the ongoing monitoring activities of the project, as well as data obtained directly by the
evaluation or review team. Many of the data collection techniques used in evaluation are the
same as those that will be used for monitoring, namely: observation, record analysis,
interviews and focus groups, questionnaires and surveys. Those more relevant for evaluation
are discussed below.

Using secondary data


Key secondary data sources for review evaluation will typically include documentation both
internal and external to the project.

Table 4.6: Documentation sources


Internal project data | Project documentation such as: project design/memoranda and log
frame/impact chain, monitoring/supervision reports, review reports and documents marking
critical incidents or activities in the project implementation. Documents may include key
emails as well as more formal letters and reports, or press cuttings etc.
External data | Reports from partners, other stakeholders, government agencies/
departments, research institutes, other development partners, newsletters, website notices
etc. Statistics from government departments and agencies can be critical as background
data and for providing benchmarks.
Using primary data
In addition to secondary information, most evaluations, especially impact evaluations, will
involve some form of primary data collection, i.e. data specifically collected for the purpose of
the evaluation exercise. Evaluation is usually trying to record three things:

 Capturing quantitative changes in conditions and circumstances relating to the project


e.g. the reduction in steps, time and money to register a new business.
 Capturing more qualitative changes in opinions, satisfaction rates, attitudes, e.g. the
perceptions of businesses, and of implementing agencies to changes.
 Capturing process issues such as critical incidents and events that have occurred
throughout, e.g. the engagement of the business associations in reviewing a reform, the
ability of a business association to represent the views of its members, the development
of a Public Private Dialogue (PPD) process to improve the quality of the project.

Data collection techniques and tools


Not all techniques are suitable for collecting these different types of data, as the following
Table shows. Data collection techniques must be chosen that are appropriate for the
particular research question.

Table 4.7: The strengths and weaknesses of different data collection tools
Criteria, rated per method in the order: Surveys | Rapid appraisal | Participant observation |
Case studies | Focus groups
 Coverage – scale of applicability: High | Medium | Low | Low | Low
 Representativeness: High | Medium | Low | Low | Low
 Ease of quantification: High | Low | Medium/Low | Low | Low
 Ability to isolate and measure non-project causes of change: High | Low | Low | Low | Low
 Speed of delivery: Low | High | Medium | High | High
 Expense of design and delivery: High | Medium | Medium | Low | Medium
 Ability to cope with the attribution problem: High | Medium | Low | Low | Medium
 Ability to capture qualitative information: Medium | High | High | High | High
 Ability to capture causal processes: Low | High | High | Medium | Medium
 Ability to understand complex processes (e.g. institution building): Minimal | Medium |
High | Medium | Medium
 Ability to capture diversity of perceptions: Medium | High | Medium | Low | Medium
 Ability to elicit views of diverse/disadvantaged groups: Medium | Medium | High if
targeted | High if targeted | Medium
 Ability to capture unexpected impacts: Low | High | High | High | High
 Degree of participation encouraged by method: Medium | High | Medium | Medium | High
 Potential to contribute to stakeholder capacity building: Medium to low | High | Low |
Medium | High

For example, where the changes in the time, duration and cost of regulatory compliance are
of interest, it is valuable to survey a large, representative sample of businesses experiencing
these regulations. The focus is to capture experiences of compliance in consistent,
measurable terms such as frequency, time and cost.
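The selection logic implied by Table 4.7 can be encoded to shortlist methods that meet minimum ratings on the criteria a given question demands. The ratings below transcribe a small subset of the table (3 = High, 2 = Medium, 1 = Low); the helper function is an illustrative sketch, not a standard tool:

```python
# Subset of Table 4.7 ratings, encoded as 3 = High, 2 = Medium, 1 = Low.
RATINGS = {
    "surveys":         {"coverage": 3, "quantification": 3, "qualitative": 2},
    "rapid appraisal": {"coverage": 2, "quantification": 1, "qualitative": 3},
    "case studies":    {"coverage": 1, "quantification": 1, "qualitative": 3},
    "focus groups":    {"coverage": 1, "quantification": 1, "qualitative": 3},
}

def shortlist(requirements):
    """Return the methods meeting every minimum rating in `requirements`."""
    return [method for method, rating in RATINGS.items()
            if all(rating[c] >= level for c, level in requirements.items())]

# A compliance-cost question needs broad coverage and easy quantification:
suitable = shortlist({"coverage": 3, "quantification": 3})
```

Here `suitable` contains only surveys, matching the compliance-cost example in the text, while a requirement for rich qualitative information would shortlist the other methods instead.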

Can data collection tools be combined?


Evaluation usually involves using a number of different data collection tools to obtain a range
of quantitative and qualitative information about the outcomes and impact of a project. For
example, surveys may be complemented by focus group (FG) discussions and a small number
of detailed case studies, as well as in-depth interviews with key informants. This performs a
checking role, or triangulates the information collected, by combining multiple data sources
and methods. In this way it can help to overcome the bias that comes from using only one
source and method of data collection.
Using triangulation
Triangulation means compensating for the limitations of a single data collection method and
a simple study design by using several information sources and different methods
simultaneously to generate information about the same topics. For instance, information
from a survey may be supplemented with general experience data from similar
interventions, and with interviews with a variety of key informants to provide contextual
information. In this way the strengths of one methodology can be used to correct or
overcome the weaknesses of another, and vice versa.

In a situation that affects several parties with different interests, representatives of all
parties, as well as some neutral respondents, should be interviewed. This provides a
triangulation effect that helps to verify information, cut through conflicting evidence, and
reveal insights, in a cost-effective way.
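As a minimal sketch, a triangulation check might compare estimates of the same indicator from independent sources and flag divergence beyond a chosen tolerance. The sources, figures and 25% tolerance below are all hypothetical:

```python
# Sketch: do independent sources broadly agree on the same indicator
# (here, days to register a business)? Figures are hypothetical.
def triangulate(estimates, tolerance=0.25):
    """estimates: source name -> value. Returns (mean, all within tolerance?)."""
    values = list(estimates.values())
    mean = sum(values) / len(values)
    consistent = all(abs(v - mean) <= tolerance * mean for v in values)
    return mean, consistent

sources = {
    "enterprise survey": 12.0,
    "registry records": 10.0,
    "key informant interviews": 14.0,
}
mean_days, agree = triangulate(sources)
```

When `agree` is false, the divergent source points to where further probing, or a different method, is needed before the evidence is accepted.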

What is a tracer study?


Triangulation is a primary feature of enterprise tracer studies. This is where businesses are
tracked over a period of time using a series of different data collection methods. This might
include using a regular survey as the core tool and combining it with in-depth discussions
with a sample of those surveyed and interviewing key informants on particular key issues.
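A tracer-study panel can be sketched as repeated measurements on the same firms across survey waves; all firm identifiers and figures below are hypothetical:

```python
# Sketch of a tracer-study panel: the same firms measured in two survey
# waves, so change over time can be computed per firm. Data are hypothetical.
panel = {
    "firm_001": {"wave1_days": 14, "wave2_days": 9},
    "firm_002": {"wave1_days": 12, "wave2_days": 8},
}

def mean_change(panel, before_key, after_key):
    """Average within-firm change between two waves (negative = improvement)."""
    diffs = [firm[after_key] - firm[before_key] for firm in panel.values()]
    return sum(diffs) / len(diffs)

change = mean_change(panel, "wave1_days", "wave2_days")
```

Because the same firms are tracked, the change is measured within each firm rather than between two different samples, which is what distinguishes a tracer study from repeated cross-sections.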

How should assessment criteria be applied to data?


Assessing project outputs and outcomes from the data brought together during the
evaluation process involves analysis and judgment about benefits and success. Such analysis
typically involves a wide range of activities, including appraisal, assessment, examination,
judgment, rating, reviewing and testing. There are a number of techniques which can be
used to facilitate this process. Two forms of assessment are outlined here as examples –
performance scoring, and assessing cost effectiveness through quantitative analysis.

Performance scoring
Some organizations use scoring systems as an integral part of the review process to rate
aspects of performance; for example, the likelihood that the outputs and outcomes of the
project will succeed (or have succeeded, depending on when the scoring is done). Annual
scoring can provide important data for accountability, learning and decision making. With
care it may be possible for scores to be aggregated across a program or sector to provide an
overall picture of success and value for money. The quality of scoring is clearly a key issue;
since bad data will generate bad conclusions. The system has to be consistently and robustly
applied involving relevant stakeholders and partners. A typical scoring system uses a scale of
1-5 that can be applied for each output, for all outputs collectively, and at the outcome level.
This is illustrated in the following Table.

Table 4.8: Sample performance scorecard


No. Descriptions Achievement
1 Likely to be completely The outputs / outcome are well on the way to completion
achieved (or completed).
2 Likely to be largely There is good progress towards outcome completion and
achieved most outputs have been achieved, particularly the most
important.
3 Likely to be partly Only partial achievement of the outcome is likely and/or
achieved achievement of some outputs.
4 Only likely to be Very limited achievement of outcome and some outputs
achieved to a very is likely.
limited extent
5 Unlikely to be achieved No progress on outputs or outcomes
6 Too early to judge It is impossible to say whether there has been any
progress towards the final achievement of outputs or
outcome. This score should not be used unless at least one
of the following applies:
a) postponement of the project; b) external constraints; or
c) recruitment delay.
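With the caveat that real systems may weight outputs by their importance, a minimal sketch of aggregating such scores into an overall rating might look like this (scores are hypothetical; score 6, ‘too early to judge’, is excluded from the average):

```python
# Sketch: aggregate per-output scores (1 = likely fully achieved,
# 5 = unlikely to be achieved) into an unweighted project-level rating.
def project_rating(output_scores):
    rated = [s for s in output_scores if s != 6]  # drop "too early to judge"
    if not rated:
        return None                              # nothing can be rated yet
    return round(sum(rated) / len(rated), 1)

scores = {"output 1": 1, "output 2": 2, "output 3": 6, "output 4": 3}
rating = project_rating(scores.values())
```

Lower ratings indicate better prospects on this scale; with care, such ratings can then be aggregated further across a program or sector, as noted above.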

Such a scoring system could be used as part of an FG discussion with enterprises or
government officials to help gauge their opinions about whether proposed changes in the
regulations would be achieved.

Scoring systems are particularly useful for ‘process-oriented’ project interventions, such as
regulatory governance or PPD initiatives. For example, PPD forums have been asked to assign
a score from one to five to monitor government progress on project proposals. This can be
presented visually, as illustrated in figure 4.3.
Fig 4.3: Scorecard for government accountability

Another useful tool – the evaluation wheel – has been developed to rate, analyze and present
performance on 12 aspects of activities (see figure 4.4). By plotting scores for each of these
aspects along the spokes of the wheel, the ‘shape’ of performance for each dimension of the
work can be observed and discussed. Each aspect on the wheel has associated indicators for
measurement and a scoring system (from 0 = not satisfied to 5 = very satisfied), enabling the
cross-checking of data on similar aspects of the wheel.

Fig 4.4: Evaluation wheel for presenting performance of process indicators
The process indicators include scoring: the existence of a mission statement and the ability
to explain its content; the degree of participatory decision making; the quality of
management arrangements; the quality and frequency of communication; the contribution
made to conflict resolution; and the degree of autonomy from development partners.
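A minimal sketch of the scoring behind the wheel: average the 0–5 ratings given by several raters for each aspect, producing one value per spoke. The aspects and ratings below are hypothetical, and only three of the twelve spokes are shown:

```python
# Sketch: average 0-5 satisfaction ratings from several raters for each
# aspect ("spoke") of the evaluation wheel. Data are hypothetical.
raters = [
    {"mission statement": 4, "participatory decision making": 3, "communication": 5},
    {"mission statement": 2, "participatory decision making": 4, "communication": 4},
]

def wheel_profile(raters):
    """One averaged score per aspect, ready to plot along the wheel's spokes."""
    aspects = raters[0].keys()
    return {a: sum(r[a] for r in raters) / len(raters) for a in aspects}

profile = wheel_profile(raters)
```

Plotting `profile` on a radar chart would reproduce the ‘shape’ of performance described above; divergence between raters on a spoke is itself a useful discussion point.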

Assessing cost effectiveness through quantitative analysis


Increasingly, development partners are being asked to consider the cost effectiveness or
efficiency of their interventions. Efficiency is an economic performance term comparing
project outputs against the inputs. It illustrates the relation between means and ends and
considers to what extent the costs of a development intervention can be justified by its
results, taking into account alternatives: whether the intervention represents the quickest
and/or cheapest way to transform investment into development gains, whilst minimizing
unnecessary transaction costs.

Cost Benefit Analysis (CBA)


Cost benefit analysis (CBA) is a major evaluation instrument for projects with measurable
benefits. For example, in business registration simplification, a CBA could consider whether
the costs involved in providing technical assistance and support represent good value
compared to the benefits gained through quicker and cheaper registration procedures.
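A minimal sketch of such a calculation, with all figures hypothetical, compares the one-off technical assistance cost against the savings gained by registering firms over a chosen timeframe:

```python
# Sketch of a simple benefit-cost ratio for the registration example.
# All figures are hypothetical.
ta_cost = 400_000                  # one-off technical assistance cost
annual_saving_per_firm = 50        # saving from quicker, cheaper registration
firms_registering_per_year = 4_000
years = 3                          # timeframe over which benefits are counted

total_benefit = annual_saving_per_firm * firms_registering_per_year * years
bcr = total_benefit / ta_cost      # benefit-cost ratio; > 1 favours the project
```

Note how the result depends directly on the timeframe chosen: counting only one year of savings here would push the ratio below 1, which is exactly the sensitivity the following paragraphs discuss.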

This raises the question of what standards to adopt as a reference point. The standard will
sometimes be predetermined and will in other cases depend either on the terms of reference
given to the evaluation team or the evaluator’s own professional judgment. In its simple
form, CBA is carried out using only financial costs and financial benefits. For example, a
simple cost benefit ratio for a road scheme would measure the cost of building the road, and
compare this to the economic benefit of improving transport links. It would not measure the
cost of environmental damage, nor benefits such as lower congestion or new business
activity attracted by improved transport links. The CBA analysis depends on the timeframe
of the costs and benefits being examined.
 Costs are either one-off, or may be ongoing.
 Benefits are most often received over time.

It is important to build this effect of time into the analysis by calculating the net present
value, discounting costs and benefits over time to reflect the opportunity cost of using
resources. CBA of a project or program can become an extremely complex exercise if all of
the variables are considered, especially where the non-financial variables are many and
difficult to quantify. A more sophisticated approach to building a cost benefit model is to try
to put a financial value on intangible costs and benefits. This can be highly subjective.
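The net present value calculation described above can be sketched as follows; the cost, benefit stream and 10% discount rate are assumed figures chosen for illustration:

```python
# Sketch of net present value: a one-off cost now, benefits received over
# time, discounted at an assumed rate. All figures are hypothetical.
def npv(cost_now, annual_benefits, rate):
    """annual_benefits[t] is received at the end of year t + 1."""
    present_value = sum(b / (1 + rate) ** (t + 1)
                        for t, b in enumerate(annual_benefits))
    return present_value - cost_now

# Slightly negative: at a 10% rate the discounted benefits almost,
# but not quite, repay the 100,000 cost.
value = npv(cost_now=100_000, annual_benefits=[40_000, 40_000, 40_000], rate=0.10)
```

The example also shows why undiscounted arithmetic misleads: the raw benefits (120,000) exceed the cost, yet the discounted value is negative, because money received later is worth less than money spent now.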

A different form of cost benefit quantification can be undertaken using the results from an
enterprise survey to estimate the costs saved by the average business, and from this
extrapolating the total savings to the economy as a whole – in effect, the economic impact.
Undertaking CBA as part of project evaluation can be useful, but it is important to note that
this technique has both advantages and limitations.
Advantages:
 A powerful, widely-used tool for estimating the efficiency of programs and projects.
 It can be used to help look at the ex-post impact of an intervention – did the investment
generate the benefits (savings or returns) predicted or expected?
 Can be a useful tool for ex ante assessment when deciding whether to go forward with a
project – does it look as if it will generate sufficient benefits to justify going ahead?
 Where costs or benefits are paid or received over time, it is possible to calculate the time
it will take for the benefits to repay the costs.

Limitations:
 CBA can only be carried out reliably using financial costs and financial benefits. If
intangible items are included within the analysis, an estimated value is required for these.
This inevitably brings an element of subjectivity into the process.
 Fairly technical, requiring adequate financial and human resources.
 Requisite data for cost-benefit calculations may not be available, and projected results
may be highly dependent on assumptions made.
 Results must be interpreted with care, particularly in projects where benefits are difficult
to quantify.
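The survey-based quantification mentioned above – estimating the saving for the average business and extrapolating to the economy as a whole – can be sketched as follows, with all figures hypothetical:

```python
# Sketch: extrapolate survey-measured savings to the whole economy.
# All figures are hypothetical.
cost_before = [900, 1_100, 1_000, 1_000]  # compliance cost per sampled firm, before
cost_after = [600, 700, 650, 650]         # same firms, after the reform

avg_saving = (sum(b - a for b, a in zip(cost_before, cost_after))
              / len(cost_before))
firms_in_economy = 250_000                # total affected firms (e.g. from a registry)
total_saving = avg_saving * firms_in_economy
```

The extrapolation is only as good as the sample's representativeness and the count of affected firms, which is why the limitations listed above apply with particular force here.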

What other resources are there on evaluation?
The above discussion presents some tools that are relevant for many projects. However, a
wide range of other data collection and assessment techniques and tools is available for
evaluation work.
Activity 1
Answer the following questions.
1. Ideally there would also be an opportunity to discuss and analyze data in a wider forum,
including other project/program staff and management, partner organizations,
beneficiaries and other stakeholders. Analysis of monitoring data can be undertaken by
those who collect the data, whereas for evaluation data, analysis will depend on the
purpose and type of evaluation. However, whenever possible, it is advisable to involve
multiple stakeholders in data analysis. Briefly explain the benefits of involving multiple
stakeholders in data analysis.
2. Is an M&E plan worth all the time and effort?

4.3. Impact Monitoring & Assessment


"Impact Monitoring and Assessment" (IMA) is considered part of a project's process of self-
evaluation, an instrument of reflection and learning to better adapt project activities to a
changing context. IMA comprises two aspects: observation (monitoring) and interpretation
(assessment) of the changing context and the project's implications. Only a combination of
both aspects provides a useful instrument for quality control in project cycle management.
Monitoring should be done "objectively" to establish an information base. Assessment
involves the "subjective" judgment of different stakeholders in accordance with their
individual perceptions.

Approach to Impact Monitoring & Assessment


To what extent has a development project achieved its purpose and reached its goal? While
trying to conduct all planned activities and achieve expected results, it is easy to lose sight of
the goal. Indeed, in the view of many donor agencies, projects focus too strongly on
functioning and performance (efficiency) and not enough on their context (effectiveness). It
is important not only to ask, "Are we doing things right?" but also, "Are we doing the right
things?"
Development agencies justify their actions in terms of impact on the context, and projects
justify themselves through good performance. Theoretically, both aspects – performance and
impact – are included in project cycle management. On the one hand, the context is
represented in the formulation of the project purpose and an overall goal, such as
"empowerment", "poverty alleviation", "sustainable land management", etc. On the other
hand, performance is expressed in the expected results. In practical terms, however, the
impact is often not sufficiently addressed. From a donor's perspective, therefore, a shift of
paradigm is necessary – from performance towards impact, and from efficiency towards
effectiveness.

Project cycle management (PCM) already offers basic instruments but requires
supplementary tools that give more emphasis to context and impact. In formulating a goal
and project purpose, planning takes a wider view of the project's context. Concrete results
and activities are then defined to fulfill the purpose and contribute to the goal. But in
contrast to planning, M&E focuses mostly on the outputs – i.e. the performance – of a project
(result level). Therefore, it should be supplemented by impact monitoring and assessment
(IMA), in order to restore the wider view of the context present during planning.

Six Steps in Impact Monitoring & Assessment


How to Initiate IMA?
 If you are about to design and plan a project, or if your project is in the orientation phase,
begin with Step 1: Involvement of stakeholders and information management;
 If you are already running a project, begin with Step 3: Formulation of impact
hypotheses.



Figure 4.1 Impact Monitoring & Assessment (IMA) as part of the Project Cycle Management
(PCM)



Step 1: Involvement of Stakeholders and Information Management
Involvement of Stakeholders
Participation is a matter of reconciling the various perceptions, attitudes, opinions and
objectives of different stakeholders through negotiations in a real-life context. Stakeholder
diversity means managing conflicting interests but also involves a huge potential of choices
to solve prevailing problems. Therefore, one of the first tasks in project planning is a
stakeholder analysis that can simultaneously be used for Impact Monitoring and Assessment
(IMA).

A project may trigger changes in its context through its outputs. But it is the stakeholders
who actually make the changes through social processes such as learning, adaptation,
rejection, etc. Therefore it is necessary that stakeholders are actively involved in the IMA
procedure from the beginning. Stakeholders bring their deep knowledge and perception of
the context into the analysis of problems and alternatives (Step 2). They provide a large
number of positive and negative impact hypotheses which may otherwise be overlooked by
the project team (Step 3), and they provide local indicators (Step 4). They become actively
involved in observation and data collection (Step 5), and changes in the context cannot be
assessed without them (Step 6). At the end of a project phase, stakeholders provide new
opportunities for improving the project's work.

Information Management
Participatory IMA can only be successful if it is transparent and if the information collected is
relevant to different stakeholder groups. For each group, information must be presented in
an appropriate and understandable form or media. Similarly, the means of communication
and dissemination of information are determined by the needs of each group. Finally,
information must be stored accessibly for everyone who is interested in it.

Table 4.1 Stakeholders and Information Management



The following guiding questions, to be answered in a participatory exercise, will help to
structure information management:
 Which stakeholders will participate in IMA (local land users, women's associations,
project staff, university students, etc.)?
 What kind of information can they provide (technical, cultural background, etc.)?



 What kind of information do they need / is relevant to them (technical, economic, etc.)?
 Which form of presentation do they prefer (reports, discussions, etc.)?
 What is the best way to communicate and disseminate the information (leaflets, radio
programmes, etc.)?
 How should the information be stored so that it is permanently accessible (databases,
files, etc.)?
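One practical way to keep the answers to these guiding questions organized is a small set of records, one per stakeholder group. The following Python sketch shows the idea; the stakeholder groups and entries are illustrative assumptions, not content from this module:

```python
# A minimal sketch of the "stakeholders and information management" matrix.
# All stakeholder names and entries below are illustrative placeholders.
stakeholder_matrix = [
    {
        "stakeholder": "local land users",
        "provides": ["technical knowledge", "local indicators"],
        "needs": ["technical information"],
        "presentation": "discussions",
        "dissemination": "radio programmes",
        "storage": "project files",
    },
    {
        "stakeholder": "project staff",
        "provides": ["monitoring data"],
        "needs": ["economic information"],
        "presentation": "reports",
        "dissemination": "leaflets",
        "storage": "databases",
    },
]

def who_needs(kind):
    """Return the stakeholder groups whose information needs include `kind`."""
    return [row["stakeholder"] for row in stakeholder_matrix
            if kind in row["needs"]]

print(who_needs("technical information"))  # ['local land users']
```

Even kept on paper rather than in software, the same record structure makes it easy to check that every group has an agreed answer to each of the six questions.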

Preparation of IMA Documentation


The matrix concerned with "stakeholders and information management" (Table 4.1) is the
first document in the IMA procedure. To make the procedure transparent and replicable, the
entire IMA should be thoroughly documented as well; this documentation should be set up
already at this stage. IMA documentation will contain information gathered during
each step, for example:
 Who used what arguments during stakeholders' discussions and which decisions were
taken? (Steps 1 and 2)
 Which positive and negative impact hypotheses were formulated? (Step 3)
 Which impact indicators were discussed, which ones were chosen, which indicators
were replaced or modified later on during the IMA process and why? (Step 4)
 Which monitoring methods were chosen, how were they adapted / modified during the
monitoring process? (Step 5)
 Who was interviewed, what was asked and what was observed, when and where? (Step
5)
 How was the information collected, interpreted and judged, and who used which
arguments? (Step 6)

Step 2: Review of Problem Analysis

The Project Context – a Living System


What are the most important aspects or elements in a project context? How are they
interlinked? What role do they play in the context? Is the context moving towards or away
from sustainability? The project context, i.e. its biophysical, socio-cultural, economic,
institutional and political environment should be well understood before a development
operation is initiated. An orientation phase leaves ample time for that. But most projects
have to rely on a rather short problem analysis that is – hopefully – carried out with
stakeholders who know the context well enough. A common method is the problem tree,
which requires the selection of a core problem (the stem), defining causes (the roots) and
consequences (the branches). But focusing on only one problem with linear and causal
relationships is critical.

The elements of a context – i.e. people, institutions, resources, etc. – are highly
interconnected, and not all elements and interrelations are known, even to insiders. Stakeholders
with their different agendas represent an additional degree of uncertainty and
unpredictability.

A problem within such a system (e.g. soil degradation) usually has complex causes and
consequences, and also a "solution" to it (e.g. soil conservation) will create multiple, positive
and negative side-effects. Consequently, a problem cannot be solved with a "repair-shop
mentality", i.e. tackling only the most obvious cause. Because the reactions of a system
cannot be precisely predicted, a project in a rural context cannot be expected to provide
simple solutions. It can only provide various "impulses", such as enhancing co-operation and
training stakeholders, introducing a new technology, etc. in order to stimulate partners to
move the context in a certain direction. And because it is not certain whether these impulses
will finally lead to the desired changes, there is a need to observe and assess the changes
constantly to decide which impulses to give next.

Analysis of the Context


Analyzing a project context is a form of systems or network analysis. It is conducted with
stakeholders to involve a variety of different backgrounds, knowledge and experience. It may
be difficult to agree on a common picture of a context in the short run. But the debate about
different perceptions of the same context helps to avoid predetermined thinking at an early
stage. Analysis of the context can start with development of a flow chart. Important



elements (issues, problems, opportunities) can be the starting point. At the beginning, the
analysis should be broad in order not to miss any important aspect. Besides elements there
are interrelations of different types, e.g. flows of information, energy, nutrients,
dependencies, etc. Written on cards, the elements and their interrelations can be rearranged
and replaced until an agreeable result has been achieved. A flow diagram will be used to
determine important and less important elements, to categorize stronger or weaker
interrelations, and finally, to identify possible starting points for project activities. The
discussion, interpretation and conclusions drawn from the network automatically involve impact
hypotheses (Step 3) at a broader context level: Where could the project intervene? What will
happen if it intervenes? Disagreements during discussion only indicate the need for further
clarification. They can be considered as a wealth of alternative development options. While a
problem tree is focused on one core problem and mostly linear relations, the network or
systems analysis is broader and allows complex interrelations. This difference will be essential
for all following steps in IMA, from the formulation of impact hypotheses to impact
assessment. All these steps require a broader view of the context rather than a narrow focus
on a core problem.
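A minimal sketch of such a network analysis follows, assuming invented elements and interrelations: counting how often an element appears in a relation gives a rough first proxy for its importance in the network and for candidate entry points.

```python
from collections import Counter

# Sketch of a participatory network (systems) analysis: elements are nodes,
# interrelations are directed links (e.g. flows of information or nutrients).
# The element names and links are invented for illustration.
links = [
    ("deforestation", "soil degradation"),
    ("soil degradation", "low crop yields"),
    ("low crop yields", "poverty"),
    ("poverty", "deforestation"),
    ("training", "soil conservation"),
    ("soil conservation", "soil degradation"),
]

# Count how often each element takes part in a relation: a crude proxy
# for how strongly it is embedded in the context.
degree = Counter()
for cause, effect in links:
    degree[cause] += 1
    degree[effect] += 1

for element, n in degree.most_common(3):
    print(element, n)
```

In a participatory setting the same counting can be done by hand on the card layout itself; the point is only that elements touching many interrelations deserve attention as possible starting points for activities.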

Figure 4.2: Network Analysis



Step 3: Formulation of Impact Hypotheses

Starting with the Project Planning Matrix


Is the project context moving towards or away from sustainability? What impulses can a
project give towards more sustainable development? What positive and negative impacts
might this imply? Many projects that start with IMA have already completed their planning.
Goal, project purpose, results, activities, indicators, etc. are formulated and compiled, for
example in a project planning matrix. This matrix can be used to initiate IMA for the first time.
The precondition, however, is that the wider project context be taken into consideration.
Therefore, the formulation of impact hypotheses begins with the goal and project purpose.
Later, it may be continued with expected results. Projects that have not yet established a
planning matrix formulate impact hypotheses on the basis of a sound context analysis (Step
2). A participatory network or systems analysis will automatically lead to questions about
where the project could intervene, which elements and interrelations will be involved, what
would happen after an intervention, etc.

Clarifying the Project Goal, Purpose and Expected Results


The formulation of the project goal, purpose and expected results should reflect a situation
to be achieved. In this case, the focus is more likely on the context, and it is much easier to
establish impact hypotheses comprising utilization, effect, benefit / drawback and impact. If
the formulation reflects an activity, the focus is likely to remain on performance.



It is therefore helpful to check and clarify these formulations, to determine whether they
sound like an activity, are formulated vaguely, or contain catchwords which need further
explanation.

Formulating Positive and Negative Impact Hypotheses


Anyone planning a project intends to create positive impacts. But experience shows that
negative impacts are often a by-product of development actions. Because not all elements of
a project context can be considered in the problem analysis (Step 2) and not all possible
changes can be predicted, it is natural that not only intended, but also unintended changes –
both positive and negative – will occur. Not all, but a considerable number of possible
impacts can be foreseen by participatory exercises that formulate impact hypotheses. It is
helpful if stakeholders formulate their hypotheses as an impact chain, which reveals their
views on the mechanisms of change. This would also allow critical inquiry into doubtful
statements. Even if it is not possible to predict everything, the project and its stakeholders
are at least better prepared. And they are in a better position to manage negative issues
when they arise. The mere consideration of negative impacts – besides the positive ones –
during the planning stage is already one big step forward. It is also worthwhile to visualize
impact chains – utilization, effect, benefit / drawback and impact – implicit in stakeholders'
impact hypotheses.
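An impact hypothesis written as an impact chain can be sketched as an ordered list of stages, from output through utilization, effect and benefit to impact. The example chain below is invented for illustration and is not taken from this module:

```python
# Sketch: one stakeholder's impact hypothesis written as an impact chain.
# The stages follow the chain named in the text; the statements are invented.
impact_chain = [
    ("output",      "farmers trained in soil conservation"),
    ("utilization", "farmers apply terracing on their fields"),
    ("effect",      "soil erosion on terraced fields decreases"),
    ("benefit",     "crop yields stabilize"),
    ("impact",      "household food security improves"),
]

for stage, statement in impact_chain:
    print(f"{stage:>11}: {statement}")
```

Writing hypotheses out stage by stage in this way makes each assumed mechanism of change explicit, so doubtful links in the chain can be questioned one at a time.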



Table 4.2 Positive and Negative Impact Hypotheses



Step 4: Selection of Impact Indicators
What indicates changes in the project context? What reveals which impact hypotheses
materialize? What set of indicators will tell if changes ultimately contribute to achieving the
project purpose and goal? The planning matrix already contains some indicators. Usually,
most of them are output indicators designed to evaluate the project performance. What is
often lacking are impact indicators that represent the context. They will be developed from
the impact hypotheses. The impact chain (utilization, effect, benefit / drawback, impact) can
be of great help during the selection process. An existing indicator may already address one
of these aspects and can thus serve as an impact indicator. Beyond that, additional impact
indicators need to be found.

Table 4.3 Impact Chain and Indicators



The Baseline Dilemma
Indicators not only represent components of a project context; they are also a means of
communication between stakeholders. Thus they must be selected jointly. On the one hand,
it is recommendable to have a set of indicators fixed as early as possible, because it helps to
establish a baseline (reference), particularly for long-term observations. On the other hand,
there are good reasons to take time with the selection. For example, the project context and
the stakeholders may not be well known and understood at the beginning. During the lifetime
of a project the context and the views of the stakeholders change, and so may the indicators.
Some of the initially selected indicators may become impractical to observe and need to be
replaced. Furthermore, unexpected impacts may require additional indicators at a later stage.
But sound indicator selection only at the end of the project is too late. As a compromise,
several months should be dedicated to a participatory search for a set of impact indicators, to
adapting the initial choice, and to incorporating "emerging" indicators. This is important
because it documents the learning process of a project and its stakeholders. Single indicators
can always be added, but a basic number of indicators should be found, say after six to
twelve months, to ensure long-term monitoring.

Principles of Indicator Selection


The aim of IMA is to achieve a reasonable quality of information in order to find reliable
connections between the project and changes in the context. A representative selection of
indicators and systematic monitoring build the basis for this. But not all indicators that are
identified can be monitored. The project's means, time and resources on the one hand and
the stakeholders' interests in IMA on the other hand, will lead to a final selection of impact
indicators. It should be kept in mind that these indicators are the basis for but not the only
source of valuable information. Systematic monitoring can always be combined with
gathering and documenting information from statistics, newspapers, discussions with
partners, consultants, and informants, one's own observations and the like. There is no need
to wait three years for the first results of the impact monitoring. For example, market prices
of cereals and their fluctuations could also be determined by project staff while shopping for
their families. Negative developments in the agricultural sector will come out during talks in a
village or with colleagues. Such information can always be documented and serve as a
background for an interpretation of changes at a later stage.

The following principles and examples can help to make a definite selection of impact
indicators:

Table 4.4 Principles in Selection of Impact Indicators

Table 4.5 Generic and Local Indicators



Preparing for Impact Assessment
Later, when assessing the results of monitoring in Step 6, changes in the indicators will be
discussed and evaluated: are they positive or negative, satisfactory or not, how did changes
happen, etc. This is a process of individual judgment that will reveal many different opinions.



Table 4.6 Benchmarks for Impact Indicators



For this purpose, a rating for each indicator is helpful (e.g. from 5, "change is considered very
good", to 1, "change is considered very bad"). The benchmarks for each indicator should
already be prepared at this stage, during a debate among all stakeholders. The questions
"Where are we?" and "Where do we want to be?" need to be asked in relation to each
selected indicator. The best possible realistic achievement for each indicator is 5 (very good),
and the worst possible achievement is 1 (very bad).
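The mapping from a measured indicator value onto the 1–5 scale can be sketched as a simple linear interpolation between the two agreed benchmarks. The function and the benchmark figures below are illustrative assumptions, not a formula prescribed by this module:

```python
# Sketch: convert a measured indicator value into the 1-5 rating scale
# using stakeholder-agreed benchmarks. Benchmark values are invented.
def rate(value, worst, best):
    """Map `value` linearly onto the 1 (very bad) .. 5 (very good) scale.

    `worst` is the benchmark rated 1, `best` the benchmark rated 5;
    values beyond the benchmarks are clamped to the scale ends.
    """
    if best == worst:
        raise ValueError("benchmarks must differ")
    fraction = (value - worst) / (best - worst)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp outside the benchmarks
    return 1 + 4 * fraction

# Example: a household-income indicator with a worst realistic value of 200
# and a best realistic value of 600 (both invented).
print(rate(400, worst=200, best=600))  # 3.0 - halfway between the benchmarks
```

Because `worst` and `best` can be given in either order, the same sketch also covers indicators where a lower measured value is the better outcome.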

In preparing for impact assessment, some more important details need to be considered:
 Ideally, all stakeholders agree on a common rating for all impact indicators. But it can
also be interesting to carry out impact assessment separately for each stakeholder
group, and each group's findings will be communicated to the others.
 It should be determined at what level the assessment will be made (household,
community, etc.). For example, if there is a great heterogeneity of household categories
(such as poor and wealthy households), changes in their context should be assessed
individually, or at least separately for each household category. If all households are
judged together at the community level, the result will be an average. This average,
however, may not reflect important changes in individual households. It would thus be
meaningless!
 After a set of impact indicators has been selected, an initial observation (monitoring)
that takes all of them into account produces the baseline. In the first years to come,
monitoring and assessment will only include those indicators that are sensitive to short-
term changes. Indicators sensitive to mid- or long-term changes will gradually be added
after several years.
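The averaging pitfall noted above can be made concrete with a small worked example; all household ratings here are invented for illustration:

```python
# Illustrative ratings (1-5 scale) for a change in household income,
# invented to show how a community-level average can hide opposite
# changes in different household categories.
poor_households    = [1, 1, 2, 1]   # situation worsened
wealthy_households = [5, 5, 4, 5]   # situation improved

def mean(xs):
    return sum(xs) / len(xs)

community_average = mean(poor_households + wealthy_households)
print(community_average)             # 3.0  -> looks like "no change"
print(mean(poor_households))         # 1.25 -> strongly negative
print(mean(wealthy_households))      # 4.75 -> strongly positive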

Step 5: Development and Application of Impact Monitoring Methods

Cost-Effective Monitoring Methods


How can impact indicators and the context be monitored and documented? Which methods
are applicable within the means and capacities of the project? How can methods best be
combined? There are usually several ways and methods of monitoring a parameter or
indicator. If highly accurate (scientific) data are required, it is assumed that a project will call
upon specialists who apply their own methods. In this case, there is no need to describe
these methods here. In the event that development projects do not have the capacity and
resources to apply sophisticated methods, the present document emphasizes cost-effective
monitoring tools that can be handled in a flexible way by project staff themselves. Three
types of monitoring methods are described below. They probably have the greatest chance
of being applied because they build on what many projects already practice. These tools can
be considered the basis for IMA, but project staff would still need to adapt them to the
specific project context, in accordance with the impact hypotheses formulated and impact
indicators chosen. Therefore, only general descriptions and explanations can be given here.

Triangulation
How good is the quality of the information obtained? If the budget for monitoring is low, not
all methods can be highly accurate. Therefore, the principle of triangulation is used, which
combines reliability with participation. This means that all individual perceptions which are
obtained through interviews and discussions must be cross-checked with the perceptions of
others and, if possible, compared with direct observations.

Brief Descriptions of Monitoring Methods


1. Interview and Discussion
Interviews and discussions with local stakeholders are the basis for IMA. The information
obtained can be very detailed but will be guided by individual perceptions and the different
(often hidden) agendas of the stakeholders. Although all kinds of visible and invisible
changes might be discussed, socio-economic aspects may dominate. A cross-check of the
information, in particular of invisible (e.g. social) changes, can be made through interviews with
other stakeholders. Visible improvements or deteriorations can be cross-checked with photo-
monitoring and participatory transect walks.

Almost all biophysical and socio-economic fields of observation can be monitored by
obtaining people’s opinions of them. Discussions can encompass, for example, gender
aspects, labor division, workload, wealth, production and market prices, household income,
land use and land management, resource degradation and protection, technological and
management innovations, etc. Packages such as RRA (Rapid Rural Appraisal), PRA
(Participatory Rural Appraisal), and PLA (Participatory Learning and Action) contain many
well-tested and cost-effective tools consisting of group exercises, semi-structured interviews,
informal discussions and visualization (mapping, modeling, rating matrices, causal
diagramming, and mind-maps). They are characterized as rather qualitative approaches
marked by "optimal ignorance" and "appropriate imprecision". These methods were
primarily designed for mutual learning, and therefore assist local people to gain confidence in
conducting their own appraisal and analysis and help external experts to understand local
perceptions.

2. Photo-Monitoring
Photo-monitoring provides an overview of visible changes in the project context, which may
be predominantly related to biophysical and economic issues. But photos require
interpretation and further investigation of the background. This can be done through
interviews and discussions, as well as during participatory transect walks, depending on
which aspects need further clarification.

Development cooperation is intended to initiate changes, and at least some of them should
be visible after a couple of years. Rural development projects, for example, should enhance
household income and living standards, which would then be visible in terms of better
housing and clothing, more children going to school, better means of private and public
transport, etc. Similarly, if land and resource management has become more sustainable, it
should be evident in improved crop stands, controlled soil degradation, effective
conservation measures, etc. Photo-monitoring is a comprehensive method for documenting
all visual changes that can be used to cross-check individually perceived changes.

Several series of photos from specific locations and standpoints taken at different times over
a longer period document how things change. Photo documentation can range from
overview pictures (e.g. showing an entire slope, valley, farm, village, etc.) to detailed views of
specific objects (houses, rooms, people, conservation measures, etc.). Where changes are
intended and expected, photos can be taken from permanent standpoints at regular time
intervals. Complementary photos can be taken occasionally wherever and whenever
unexpected visible changes occur. However, photos alone do not tell much about how and
why changes occurred. They provide an overview that requires further discussion and
interpretation with stakeholders at regular intervals.

3. Participatory Transect Walk and Observation


Observations made and discussed during a participatory transect walk provide a detailed
view, especially of biophysical issues, although social and economic issues can also be
addressed. A transect walk highlights the spatial interrelations of soil degradation and
nutrient, water and energy flows, etc. Discussions often start with visible aspects but can
ultimately include links with invisible aspects. A transect walk is an excellent opportunity to
identify local impact indicators. The information can be cross-checked with interviews and
photo-monitoring.

The fact that interviews and discussions with people bring to light useful information for IMA
should not lead to the conclusion that direct observations and measurements by project staff
or outsiders are no longer necessary! Particularly biophysical and some economic aspects can
be directly observed in the field to cross-check the results of other methods. A participatory
transect walk will not only provide a detailed view of a farm or valley, critical sites of resource
degradation and areas of promising management. It will also help to establish connections
between those sites, i.e. flows of nutrients, water, sediment and energy. Thus regular
transect walks, as well as farm and field visits, are recommended not only to maintain close
contact with local stakeholders and their reality, but also because different indicators and
parameters require different observation times. For example, pests and diseases are observed
during the
cropping season, production during harvest, soil degradation at the onset of a rainy season,
water shortage during the dry season, etc.



The following principles and guiding questions provide assistance when adapting monitoring
methods to a specific project situation.

Table 4.7 Principles and Guiding Questions for Adapting Monitoring Methods

Step 6: Impact Assessment

Assessing Changes in the Project Context

How did the context change in the eyes of different stakeholders? What did they learn from
these changes? In Step 4 (selection of impact indicators) stakeholders prepared an
assessment (fixing benchmarks and rating). Impact indicators can be grouped and placed
according to dimensions of sustainability (social / institutional, economic, ecological), in order
to visualize in which dimensions changes are moving towards or away from sustainability. All
units (e.g. kg, minutes, tons, etc.) have already been converted into a neutral numeric scale
ranging from 5 (change considered very good) to 1 (change considered very bad). Is the
change achieved in all indicators satisfactory? If not, which indicators or which dimensions of
sustainability show weak monitoring results? What might be the reasons for a remarkably
good or bad rating? How did the changes come about? Is there a need to adapt the project's
plan and activities?
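Grouping indicator ratings by dimension of sustainability, as described above, can be sketched as follows; the indicator names, dimension assignments and ratings are invented for illustration:

```python
# Sketch: group impact-indicator ratings (already on the 1-5 scale) by
# dimension of sustainability and average each dimension. All entries
# below are invented placeholders.
ratings = {
    "household income":    ("economic",   4),
    "market access":       ("economic",   3),
    "soil cover":          ("ecological", 2),
    "water availability":  ("ecological", 2),
    "women in committees": ("social",     5),
}

dimension_scores = {}
for indicator, (dimension, score) in ratings.items():
    dimension_scores.setdefault(dimension, []).append(score)

# A per-dimension average shows which dimensions move towards (high)
# or away from (low) sustainability.
for dimension, scores in dimension_scores.items():
    print(dimension, sum(scores) / len(scores))
```

A visual equivalent is to plot the dimension averages on a simple diagram so stakeholders can see at a glance where changes are weak.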

Attribution – Assessing the Impact of the Project


How can these changes be attributed to the project? Were there additional changes that
were not expected and, therefore, could not be covered? Which changes contribute to the
goal of the project? Due to the attribution gap (see Clarification of Terms) it is not easy to
attribute changes to a project. The challenge is to find plausible relations between the
project's outputs and the changes, rather than scientific proof.

Changes in the context can be considered the result of social processes, i.e. interactions
between individuals or groups, such as learning, adaptation, communication, decision,
integration, etc. The project "only" tries to trigger or strengthen these processes with its
outputs. For example, any new technology must be utilized and adapted or rejected by
stakeholders; members of a society communicate their experience and learn from it; when
the biophysical environment or the economic situation changes, people adapt their
perception and react to it. The question for a project is whether the project outputs have
stimulated changes and social processes, and whether these processes are likely to help
reach development goals.

Follow-Up
At this stage, the next phase of project management begins. Assessment and the attribution
of changes will be used to make the necessary strategic adjustments in the project. At the
same time, the IMA system needs to be adapted as well. In order to achieve positive impacts:
 Are there new stakeholder groups that should be involved during the next project phase
(Step 1)?
 Is the analysis of the project context still relevant and representative (Step 2)?



 Do the impact hypotheses have to be revised or supplemented, after initial changes and
impacts appear (Step 3)?
 Is the selection of impact indicators still relevant, and can it represent all important
changes (Step 4)?
 Did the monitoring methods applied produce useful data and information? How can
methods be optimized or simplified? What should be added or omitted (Step 5)?
 Was the impact assessment satisfactory or does it need to be modified (Step 6)?

What are the challenges of Impact Assessment?


It is now generally accepted that evaluation needs to evolve from its earlier focus on
assessing outputs and outcomes to directly addressing impact. Development partners are
increasingly seeking to improve their assessment approaches and techniques to help them
make their impact findings robust, although there are methodological challenges to be
overcome. This is the core of the validation challenge for measuring impact. What are the
strategies for overcoming this challenge? In general terms, efforts can be made to tackle the
validation challenge by ensuring that wherever possible three basic questions and principles
of assessment are built into the evaluation design.

1. What was the situation before the project? Provide evidence on the project indicators
chosen prior to, or at the beginning of, the project. Data collected at this time are
normally referred to as ‘baseline’ data and act as the starting benchmark for the
evaluation work.
2. What has happened after the project? Provide evidence on the output and outcome
indicators chosen for the key target beneficiaries of your project. This evidence, when
combined with the baseline, provides a basis for direct comparisons of the circumstances,
experiences, attitudes and opinions of those to whom the project is directed, both before
and after.
3. What has happened because of the project? Assessing whether impact has occurred due
to the project requires comparing results ‘with’ vis-à-vis ‘without’ the project. This is
usually achieved by assigning some form of control or comparator group who have not had
the opportunity to benefit from the project but whose situation/performance can be
measured alongside the key beneficiaries of the project. This comparator group plays a
major part in helping to address the validation challenges of attribution and the
counterfactual.
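Taken together, the three questions amount to comparing the change observed "with" the project against the change observed "without" it, a calculation often called difference-in-differences in quasi-experimental designs. The sketch below is illustrative; all figures are invented:

```python
# Illustrative difference-in-differences sketch for questions 1-3 above.
# The indicator could be, say, average crop yield; all figures are invented.
project_before, project_after = 1.8, 2.6   # beneficiary group (baseline, endline)
control_before, control_after = 1.9, 2.1   # comparator group (baseline, endline)

change_with    = project_after - project_before   # question 2: after vs before
change_without = control_after - control_before   # what happened anyway

# Question 3: the change plausibly attributable to the project is the
# beneficiaries' change minus the change the comparator group saw anyway.
impact_estimate = change_with - change_without
print(round(impact_estimate, 2))  # 0.6
```

The comparator group's change (here 0.2) stands in for the counterfactual, which is why the simple before-and-after difference (0.8) would overstate the project's impact.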

Different evaluation approaches with their associated methodologies make provision for
attribution and the counterfactual to a greater or lesser extent. Three of the main
approaches to evaluation are given below, together with the degree to which they help
overcome these validation challenges.

Table 4.8: Evaluation Approaches


No Evaluation Approaches
1 Non-experimental
 Post-program judgment/expert opinion (PPJ). Here the program participants are
consulted after the intervention and asked to estimate the extent to which
performance was enhanced as a direct result of the program
 Before & After assessment (BAA). As the name suggests, this is a way to measure
change by consulting with the program participants and measuring program
indicators before (baseline data/information) and after receiving the intervention.
2 Quasi-experimental
These approaches compare intervention participants and some form of non-
intervention control or comparator group both before and after the intervention.
Different rationales are used to assign control groups, but this is undertaken in a non-randomized way.
3 Experimental
This approach looks at two groups before and after the intervention. There should be
random assignment of the population into the project or treatment group who receive
the intervention services and a control group, who do not.

For all three approaches, consideration should be given to:


 The underpinning principles of the approach and how it is used in practice.
 Its application, if any, to evaluating the impact of the project; and
 The strengths and weaknesses of the approach vis-à-vis the other impact evaluation
designs.

1. Non-experimental
These evaluation approaches are relatively easy to design methodologically, and are less
expensive and complex to implement than experimental and quasi-experimental designs.
They are widely used in project and program evaluations, especially for smaller-scale
interventions. However, there are few checks, if any, to address causality issues or to
counter potential bias in results arising from any sampling processes used.

Post-Program Judgment (PPJ)


Post-Program judgment (PPJ) is based on assessing the ‘after’ situation; it is technically the
simplest and cheapest form of evaluation and hence is widely used. PPJ is
undertaken by examining the conditions and experiences of the key project stakeholders
after the intervention activity has taken place. In this design, no baseline assessments are
taken for the selected target individuals or groups. Impact evaluation is undertaken purely on
the basis of measurements and assessments made after the intervention or activity has
taken place. In this way the impact is measured on the basis of the stakeholders’ own
understanding and reporting of the changes they have experienced both since and as a result
of the intervention activity. There is no a priori measure to act as a benchmark against which
to compare the changes and experiences reported by the target group.

A key element for ensuring that the approach is as robust as possible is the use of rigorous
sampling techniques in selecting relevant and representative subjects for the evaluation
exercise. Where possible the target groups should be selected randomly. For example, if a
business simplification intervention is trying to improve the operating conditions for
construction businesses in city ‘A’ then a sample of existing construction businesses who
have been operating in city ‘A’ would be selected for the impact evaluation rather than
printing businesses or construction business just starting in city ‘B’.
Post Project Judgment Evaluation for BEE Reform

Strengths:
 It is low cost compared to other designs.
 Often the only option available when there are data and budget constraints.
 The design captures data on change at only one point and so is easier to conduct than
having to identify and select control groups.
 Several programs have been evaluated utilizing this approach and so there is practical
experience to draw upon.

Limitations:
 This approach relies on program participants or independent experts to make judgments
concerning impacts with no control for the counterfactual.
 Care needs to be taken to make sure that people consider the counterfactual in their
assessment of impacts.
 The design does not attempt to understand any changes that have occurred and assumes
that they have occurred as a result of the project.
 Does not capture process issues from the project implementation.

‘Before and After’ Assessment (BAA)


BAA in practice
As the name suggests, a ‘Before and After Assessment’ examines the experiences and
circumstances of a given group of target stakeholders both before and after they have
experienced the intervention, using a selection of indicators. The aim is to establish if any
changes in the indicator criteria have taken place for the identified target group. These
changes in the indicator criteria are then analyzed in order to determine the impact of the
intervention.
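As a minimal sketch (with invented firm names and indicator values), the BAA calculation reduces to comparing the average indicator value at the endline with the average at the baseline for the same target group:

```python
# Hypothetical BAA sketch: the same indicator (e.g. days taken to register
# a business) is measured for the same target group before and after the
# intervention; the change is the difference between the two averages.

baseline = {"firm_a": 32, "firm_b": 45, "firm_c": 28}  # before the intervention
endline = {"firm_a": 12, "firm_b": 20, "firm_c": 15}   # after the intervention

def mean(values):
    return sum(values) / len(values)

change = mean(endline.values()) - mean(baseline.values())
print(f"Average change in indicator: {change:+.1f} days")  # -19.3 days
```

Note that, as the text stresses, this difference alone says nothing about causality: without a comparator group, any part of the change could be due to factors other than the intervention.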

A key element for ensuring that this approach is as robust as possible is the use of rigorous
sampling techniques. Ideally the target groups for the evaluation should be selected
randomly and within the parameters of the specific stakeholder population. The target
groups selected for BAA must be:

 Relevant to the project being examined: they must come from those individuals and
groups who are key stakeholders for the intervention activity being evaluated.

 Representative of the key stakeholder population: they should be the type of individuals
or groups that are directly involved in and/or likely to be affected by the intervention
activity being evaluated. In BEE start-up reforms, typical sample groups will be: new
businesses, businesses operating informally that are now formalizing, and government
officials involved with this area of activity, be this at a policy or an operational level. If
interventions apply to a specific location or a specific sector, then only participants from
these areas and/or sectors will be considered for selection.
 Representative of any diversity within the key stakeholder population: if the target
group is very diverse in terms of its characteristics – age / size / gender / location etc. – it
may be necessary to ensure that a proportion of groups or individuals from each of these
sub groups are represented within the sample selected. This is known as stratified
sampling. If the intervention is being undertaken throughout an area with distinct sub
districts where conditions relating to the area vary, then it would be important to ensure
that the sample group selected included representatives from these different groupings.

Taking these sampling factors into account and establishing a relevant and representative set
of individuals or groups will also help to determine the total numbers to be included in the
evaluation group. Alternatively, if the simplified procedure is being rolled out as a pilot,
control and treatment groups can be identified. It should be noted that the ethical and
political considerations of undertaking this type of study make it challenging.
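The stratified sampling idea described above can be sketched in a few lines of Python; the districts, population size and sampling fraction below are purely illustrative assumptions:

```python
import random

# Hypothetical sketch of stratified sampling: group the stakeholder
# population by a characteristic (here, sub-district) and draw the same
# fraction at random from every stratum, so each sub-group is represented.

population = [{"id": f"{d}-{i}", "district": d}
              for d in ("north", "south", "east") for i in range(30)]

def stratified_sample(units, key, fraction, seed=1):
    rng = random.Random(seed)
    strata = {}
    for unit in units:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for members in strata.values():
        size = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(rng.sample(members, size))
    return sample

sample = stratified_sample(population, key="district", fraction=0.2)
print(len(sample))  # 6 per district, 18 in total
```

Drawing the same fraction from every stratum keeps the sample proportional to the population; a design could instead over-sample small but important sub-groups.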

Before and After Assessment

Strengths:
 This design attempts to capture and understand any changes that have occurred rather
than assuming that they have occurred.
 Individuals are asked to estimate the extent to which performance was enhanced as a
direct result of the program – in effect, to compare current performance to what would
have happened in the absence of the program.
 Working with the same group is cheaper than identifying and selecting control groups,
which is often simply not possible.

Limitations:
 The design cannot isolate the impact of the program from extraneous factors such as
selection bias, maturational trends, secular drift and interfering events.
 This approach relies on program participants or independent experts to make judgments
concerning impacts.
 This approach requires people to be able to determine the net effect of the intervention
based solely on their own knowledge and experience.

2. Quasi-Experimental Designs (QEDs)


In QED approaches, explicit attempts are made to address the validation challenges of
attribution and the counterfactual when evaluating the impact of an intervention. This is
achieved by setting out to examine changes experienced by the project target group
(sometimes called the ‘treatment group’) i.e. those ‘experiencing’ the intervention, and
comparing them to a set of people ‘not experiencing’ the intervention. This is usually tackled
by assigning some form of control or comparator group i.e. a group who have not had the
opportunity to benefit from the intervention but whose characteristics are similar to those
that have, and whose situation/performance can be measured alongside the key beneficiaries
of the project.

A control or comparator group is created or selected in a non-random way but provides the
counterfactual to the ‘treatment group’. To the extent that the two groups are similar,
observed differences can be attributed to the BEE intervention being evaluated with a higher
degree of confidence than in the simpler PPJ and BAA approaches. Several methodologies
are used for creating control or comparator groups. One of the most widely used is matched
comparison. Matching involves identifying non-project/program participants comparable in
the essential characteristics to participants.

Both groups should be matched on the basis of either a few observed characteristics or a
larger number of characteristics that are known or believed to influence program outcomes.
In practice, it is rarely possible to construct a 100% perfectly matched control group, or even
to measure all possible relevant characteristics. Nevertheless, matching can be achieved for
key characteristics and this is widely regarded as a rigorous methodology when evidence is
available to show that treatment and control groups are similar enough to produce a close
approximation to the perfect match.
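A matched comparison can be sketched as a nearest-neighbour search over a few observed characteristics; the firms, the characteristics and the simple unweighted distance below are illustrative assumptions, not a prescribed matching method:

```python
# Hypothetical sketch of matched comparison: pair each treatment
# participant with the most similar non-participant on a few observed
# characteristics (here, firm size and firm age).

treatment = [{"id": "t1", "size": 10, "age": 3},
             {"id": "t2", "size": 55, "age": 8}]
pool = [{"id": "c1", "size": 12, "age": 4},
        {"id": "c2", "size": 50, "age": 7},
        {"id": "c3", "size": 200, "age": 20}]

def distance(a, b):
    # Simple unweighted distance over the matching characteristics.
    return abs(a["size"] - b["size"]) + abs(a["age"] - b["age"])

matches = {t["id"]: min(pool, key=lambda c: distance(t, c))["id"]
           for t in treatment}
print(matches)  # {'t1': 'c1', 't2': 'c2'}
```

In practice the characteristics would be weighted or replaced by a propensity score, and the quality of the matches, not just their existence, would need to be demonstrated.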
Quasi-Experimental Designs for BEE Reform

Strengths:
 These designs face fewer of the ethical or political problems of excluding groups from the
reforms and their benefits.
 They can often draw on existing data sources and are thus often quicker and cheaper to
implement.
 They are well used in practice, albeit outside of the PSD/project field.
 Matching is a relatively easy process compared to randomized allocation.
 There are a variety of methods to use in generating or selecting comparator groups,
depending on the nature of the activity being evaluated.

Limitations:
 The reliability of results is highly dependent on the ‘matching methods’, which can be
difficult to conduct.
 Valid comparisons require that the two groups be similar with respect to key
characteristics, exposure to external events and trends, and propensity for program
participation. This can be difficult to establish.
 Because the two groups are essentially ‘non-equivalent’, the possibility exists that at
least some of the differences in outcomes may be explained by unobserved variables
that differ across the two groups.
 Requires considerable expertise in the design of the evaluation and in the analysis and
interpretation of the results.

3. Experimental Designs
Bias can occur for a host of reasons and take many different forms. For example, sampling
bias occurs in the selection of target groups when only those who have offices within a short
distance of the one stop shop are included. As noted earlier in this section, practical attempts
are made to mitigate this bias by the hiring of external experts who are not connected with
the project and have the technical expertise to ensure that appropriate methodology design
and sampling is conducted. However, some would argue that the only robust way of tackling
bias is by using experimental designs in evaluations. Randomization is a key feature of
experimental approaches. This is considered the most rigorous of the evaluation
methodologies, the ‘gold standard’ in evaluation, especially when trying to estimate the
effect of an intervention on a complex concept such as the BEE.

In a randomized experiment, the researcher randomly assigns members of the population
either to the group ‘exposed’ to the intervention (the ‘treatment group’) or to the group
not exposed (the ‘control group’). Randomization ensures that, on average, prior to the
intervention, treatment and control groups are essentially identical and therefore would
show very similar results in the absence of the treatment. Therefore, a difference in results
for the two groups can be causally attributed to the program. This design copes with the
challenge of attribution and the counterfactual.
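The logic of random assignment and impact estimation can be sketched as follows; the population size and the simulated outcomes (including the treatment effect of 5) are invented purely to illustrate the calculation:

```python
import random

# Hypothetical sketch of an experimental design: randomly split the
# eligible population into treatment and control groups, then estimate
# impact as the difference in mean outcomes between the two groups.

rng = random.Random(0)
firms = list(range(100))            # 100 eligible firms (illustrative)
rng.shuffle(firms)
treatment, control = firms[:50], firms[50:]

# In a real evaluation these outcomes would be measured after the
# intervention; here they are simulated with a treatment effect of 5.
in_treatment = set(treatment)
outcome = {f: 20 + (5 if f in in_treatment else 0) for f in firms}

def mean(values):
    return sum(values) / len(values)

impact = mean([outcome[f] for f in treatment]) - mean([outcome[f] for f in control])
print(impact)  # 5.0
```

Because assignment was random, the difference in group means is an unbiased estimate of the treatment effect; with real (noisy) outcomes a significance test would accompany it.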
Experimental Randomized Designs

Strengths:
 Random assignment helps guarantee that the two sample groups are similar.
 Extraneous factors that influence outcomes are present in both groups.
 Because of this comparability, claims that differences between the two groups are the
direct result of the program are more difficult to refute. Interpreting the results is simple.
 Experimental designs are used extensively to test the efficacy of new treatments in
health, social welfare and education.

Limitations:
 Denial of assistance to some is seen as unethical.
 It can be politically difficult to provide an intervention to one group and not another.
 Many BEE reforms are nationwide programs or policy changes, which rules out the
possibility of selecting a control group, although an encouragement design can
potentially help address this.
 It may be difficult to avoid selection bias and to ensure that assignment to treatment
and control groups is truly random.
 It takes significant planning and management to ensure that the services provided to
both groups are exactly the same.
 Experimental designs can be expensive and time consuming.
 Requires high-level evaluation skills.

What is the best approach?


The reality of current practice in assessing the impact of project interventions is that simple
post-program judgment and before-and-after approaches are used much more widely than
quasi-experimental approaches. Efforts are being made, with strong leadership from the
Results Measurement team, to improve awareness of, and the technical capability for,
applying QED approaches to evaluation work; this material, along with other resources,
reflects the movement to ‘upgrade’ the rigor of evaluation for the project.

Table 4.9: Summary of key characteristics for different evaluation approaches for impact

Evaluation activity                          Post Program   Before      Quasi          Experimental
                                             Judgment       and After   Experimental
Post project assessment                      Yes            Yes         Yes            Yes
Before project assessment                    No             Yes         Yes            Yes
Use of target groups                         Yes            Yes         Yes            Yes
Use of control groups                        No             No          Yes            Yes
Use of randomly selected groups              No             No          No             Yes
Level of technical skills needed to design   Low            Medium      High           Very High
Cost of undertaking                          Low            Medium      High           Very High

4.4. Forthcoming developments in M&E

Practice in M&E for projects is developing rapidly, with new techniques and tools emerging
all the time. Measurement, quantification and evidence-based policy making are becoming
increasingly dominant features of the approach of many countries. In summary, monitoring,
evaluation and impact assessment for projects is a hive of development and debate. This
material presents a resource that brings together examples from current practice in order to
help raise awareness, engage interest and improve good practice across different projects.
Activity 2
Answer the following questions.
1. Discuss the factors affecting the quality of M&E information.
2. The measurement of impact is challenging, can be costly and is widely debated. Briefly
discuss some of the challenges of measuring impact.

Summary
 The imperative to improve development results has generated a demand for effective
evaluation.
 Evaluation can take place alongside project design and appraisal – it is not exclusively an
ex post activity.
 Who undertakes evaluation is an important consideration and can affect levels of
inclusion and diversity.
 A distinction can be drawn between review evaluations and assessing impact, based on
the timing, the focus and the type of ‘results’ achieved.
 There are essentially three tasks: deciding which questions to ask, what data/information
to gather and what success criteria to employ.
 The compilation of good-quality baselines is critical for meaningful impact assessment,
and baselines must be produced wherever possible.
 Experience and practice are growing and innovative approaches are being tried. The
honest sharing of experience will improve the ability to undertake evaluations.
 The adoption of robust impact designs and methodologies is essential in order to
address the validation challenges of attribution and the counterfactual.
 While investment and economic growth are the primary indicators of success, social
inclusion and poverty alleviation considerations will affect long term sustainability.
Improving the integration of equity and sustainability issues is critical to the broader
understanding of impact.

Self Assessment Question-4

Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.

Case Analysis
Project/programme M&E steps are guides to planning for and implementing an M&E system
for the systematic, timely and effective collection, analysis and use of project/programme
information. They are interconnected and should be viewed as part of a mutually supportive
M&E system. Develop your own six key steps for a project/programme M&E system,
adapting the approaches implemented in various projects.

Answer Key to Activities and Self Assessment Questions

Activities

Activity 1

1. M&E plans are becoming standard practice – and with good reason. M&E plans serve
as critical cross-checks of the log frames, ensuring that they are realistic to field
realities. Another benefit is that they help to transfer critical knowledge to new staff
and senior management, which is particularly important for projects/programmes
lasting many years. A final point to remember is that it can be far more costly and
time consuming to address poor-quality data than to plan for its reliable collection
and use.
2. Data analysis is not something that happens behind closed doors among statisticians,
nor should it be done by one person, e.g. the project/programme manager, the night
before a reporting deadline. Much data analysis does not require complicated
techniques and when multiple perspectives are included, greater participation can
help cross-check data accuracy and improve critical reflection, learning and utilization
of information. A problem, or solution, can look different from the perspective of a
headquarters’ office versus project/programme staff in the field versus community
members. Stakeholder involvement in analysis at all levels helps ensure M&E will be
accepted and regarded as credible. It can also help build ownership for the follow-up
and utilization of findings, conclusions and recommendations.

Activity 2

1. Though the measurement of impact is challenging, costly and widely debated, this
does not mean we should not try to measure impact; it is an important part of being
accountable for what we set out to achieve. However, we should be cautious and
understand some of the challenges in measuring impact. Typically, impact involves
longer-term changes, and it may take months or years for such changes to become
apparent. Furthermore, it can be difficult to attribute observed changes to an
intervention rather than to other factors (the problem of ‘attribution’). Despite these
challenges, there is increasing demand for accountability among organizations working in
humanitarian relief and development. Therefore, careful consideration should be
given to its measurement, including the required time period, resources and
specialized skills.

Self Assessment Question-4

1. Six key steps for project/programme M&E system
Step 1: Identify the purpose and scope of the M&E system.
Activities
 Review the project/programme’s operational design (log frame).
 Identify key stakeholder informational needs and expectations.
 Identify any M&E requirements.
 Scope major M&E events and functions.
Step 2: Plan for Data Collection and Management.
Activities
 Develop an M&E plan table.
 Assess the availability of secondary data.
 Determine the balance of quantitative and qualitative data.
 Triangulate data collection sources and methods.
 Determine sampling requirements.
 Prepare for any surveys.
 Prepare specific data collection methods/tools.
 Establish stakeholder complaints and feedback mechanisms.
 Establish project/programme staff/volunteer review mechanisms.
 Plan for data management.
Step 3: Plan for Data Analysis.
Activities
 Develop a data analysis plan, identifying the:
1. Purpose of data analysis.
2. Frequency of data analysis.
3. Responsibility for data analysis.
4. Process for data analysis
 Follow the key data analysis stages:
1. Data preparation.
2. Data analysis.
3. Data validation.
4. Data presentation.
5. Recommendations and action planning.
Step 4: Plan for information reporting and utilization.
Activities
 Anticipate and plan for reporting:
1. Needs/audience
2. Frequency
3. Formats
4. People responsible
 Plan for information utilization:
1. Information dissemination
2. Decision-making and planning
Step 5: Plan for M&E human resources and capacity building.
Activities
 Assess the project/programme’s HR capacity for M&E.
 Determine the extent of local participation.
 Determine the extent of outside expertise.
 Define the roles and responsibilities for M&E.
 Plan to manage project/programme team’s M&E activities.
 Identify M&E capacity-building requirements and opportunities.
Step 6: Prepare the M&E budget
Activities
 Itemize M&E budget needs.
 Incorporate M&E costs into the project/programme budget.
 Review any donor budget requirements and contributions.
 Plan for cost contingency.

Unit 5
The Project Cycle of Monitoring and Evaluation
Introduction
Hello dear learner! This is the last unit of the module titled ‘The Project Cycle of Monitoring
and Evaluation’. Good practice suggests that to be effective, Monitoring & Evaluation should
be addressed as part of project planning and integrated alongside project implementation
and management systems. Attention should be given to both the processes and content of
doing Monitoring and Evaluation and Impact Assessment. The central challenge for the
Project, Program or Task Manager (PM) is to balance the needs of the two key functions of
Monitoring and Evaluation, i.e., the legitimizing and learning function (or proving and
improving) with the overall demands of the project cycle. This unit will explore what steps
the PM needs to take in order to integrate the Monitoring and Evaluation with the needs of
program implementation since the two are not mutually exclusive processes.

Learning Objectives:

At the end of this unit lesson, you will be able to:


1. Explain the Project Cycle of Monitoring and Evaluation;
2. Outline the key steps in undertaking Monitoring and Evaluation;
3. Identify the basic factors to be considered in Monitoring and Evaluation design;
4. Develop Monitoring and Evaluation plan;
5. Implement the Monitoring and Evaluation plan;
6. Analyze Monitoring and Evaluation findings;
7. Communicate Monitoring and Evaluation findings.

5.1. Introduction
The following seeks to make explicit how the key steps in undertaking M&E (Fig. 5.1) relate
to the key steps in the project cycle.

Fig. 5.1: The key steps in undertaking M&E

Step 1: Agree the starting point.
Step 2: Identify the approach and secure the budget.
Step 3: Implement the M&E plan.
Step 4: Analyze the findings.
Step 5: Communicate the learning.

5.2. Agreeing the starting point


What is the context for developing the M&E and IA?

In an ideal world, decisions about M&E and Impact Assessment would be made at the
earliest stage of the program. There may be only some basic characteristics known about
the proposed project and the context in which it will take place, but there are still some
important decisions to be made, as suggested in Table 5.1.
Table 5.1: Making early decisions
Define
Is the project
 A pilot or a roll-out;
 Operating at a national or sub-national level;
 A short, medium or long term intervention (the timescale);

Identify:
 The key implementers (government officials, politicians, businesses, business
associations, in-country staff, consultants: local and/or international);
 The primary beneficiaries (business owners, government officials);
 Who funds the reform and whether it is a multi donor intervention;
 Who provides resources for M&E;
 Whether there are additional partners;
 Who has the skills and is available to undertake M&E work in the team/organization

These are all vital to getting a ‘feel’ for the nature and scope of the project, the resources
involved and whether there is any interest in or commitment to M&E among the various
stakeholders of the project. This information provides the context in which M&E will be
designed.

Who should carry out the M&E and IA?


In many multi-lateral and bilateral organizations, responsibility for M&E is split between
different sections within the organization. Responsibility for ongoing monitoring is usually
undertaken by the local program team together with their counterparts in local partner
organizations. Responsibility for evaluating immediate outputs and outcomes is also usually
undertaken by the local team but with support from external consultants and specialist M&E
staff. These could be local and/or from the organization’s central evaluation department.

Impact assessment is not usually a program team’s responsibility per se but one that is
undertaken by external consultants and/or evaluation specialists within the organization.
However the program team is responsible for ensuring that their monitoring systems and
evaluation findings provide evidence for impact assessment, and therefore they need to be
aware of what and how impact assessment is undertaken.

The PM must have oversight of what is needed for implementation, an ability to


demonstrate what has been done, how it has been done, what has been measured and what
results have been achieved. Furthermore, PMs need to be confident that evaluators and
impact assessors will find the data they need on the project and on a comparator group or
control group. The responsibility for the actual design may vary from project to program and
from organization to agency. However, the PM must understand the requirements for M&E
and be able to integrate and translate between M&E and program management needs.

5.3. Identifying the approach and securing a budget


Designing an M&E approach is typically an iterative process involving several versions of an
M&E plan. Here we are looking at the tasks of M&E design. The program manager will not be
responsible for all the tasks but will need to understand and influence them, and perhaps
have the final decision-making authority. There are typically six factors to consider in M&E
design prior to pulling together a budget and bringing this together into a formal plan. All
these factors are covered in this unit (Table 5.2).

Table 5.2: The six factors to consider and sources of information


1. Questions Identify the key questions to be asked and answered by the M&E.
2. Approach Agree the overall M&E approach and methodology.
3. Indicators Choose the appropriate indicators.
4. Data collection Select tools and instruments for data collection and analysis.
5. Timeframes Plan clear time frames with milestones.
6. Resources Identify people and other resources for undertaking the M&E.

The following section walks through each of the six preparation aspects.

1. Questions: Identify the key questions to be asked and answered by the M & E.
Usually the easiest way of establishing key questions is to look at the project Log Frame or
the equivalent project planning document. For example:
Monitoring questions:
1. How many procedures does it take to register a business currently and then after
reforms?
2. How many and which government authorities need to be engaged in the reform
efforts?
3. How many and which government officials need to be trained to undertake the
change needed by the reform?
Evaluation questions:
1. Have laws/regulations changed because of reform work?
2. Has the cost of registration for each process changed under reform?
3. Have there been changes in the time taken in registering?
Impact questions:
1. Do more businesses register following reform?
2. Are these new business start-ups or existing (informal) businesses registering for the
first time?

Identifying the key questions to be answered in M&E is discussed in the previous Section.
The PM quick checklist
1. Does this project have a log frame?
2. What is the learning from previous Projects of this type?
3. What are the key questions I need to answer in my M&E?
4. What will I have to do to integrate the program management with the M&E cycle?

2. Approach: Agree the overall M&E approach and methodology.


Monitoring and evaluation are different but contingent and complementary. For monitoring
the key thing to consider is whether the project plan includes management systems and
practices that will ensure the gathering, recording and reviewing of project inputs, activities
and outputs on an ongoing basis. The task of evaluating outcomes and assessing impact
should be to ‘prove’ (as far as possible) or ‘validate’ results, and to build the capacity to
communicate learning. The particular evaluation approach and methodology selected will
have to match the scale and nature of the project and fit within the resources and
timeframe of the intervention.

Good practice suggests that it is vital to make sure that informed decisions about the
methodology and approach are taken at the earliest stage of the project design.
The PM quick checklist
1. Can I confidently select the best M&E approach and methodology?
 Quasi-experimental designs
 Non-experimental designs
2. What has been learned from previous designs?
3. Can I create a robust baseline from existing sources or do I need primary data?
4. Do I know who and how to sample?
5. Do I know who to talk to for advice and guidance?

3. Indicators: Choose the appropriate indicators.
Once key questions have been identified these need to be translated into indicators and then
targets. These are the things that are going to be measured in order to demonstrate that the
project is or is not doing what it set out to do. Remember, indicators need to be identified for
all aspects of the project’s work from activities through to the overall objective or goal of the
project.
The PM quick checklist
1. Does my organization use core indicators?
2. Do I have a mix of quantitative, core and customized, activity and process indicators?
3. Can the results be compared to other similar projects?
4. Can I disaggregate for diversity?
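Disaggregating an indicator for diversity amounts to grouping the raw records by a characteristic before computing the indicator; the records and the characteristic below are hypothetical:

```python
# Hypothetical sketch of disaggregating an indicator (businesses
# registered) by a diversity characteristic (the owner's gender).

records = [
    {"registered": True, "gender": "f"},
    {"registered": True, "gender": "m"},
    {"registered": False, "gender": "f"},
    {"registered": True, "gender": "f"},
]

by_group = {}
for record in records:
    group = by_group.setdefault(record["gender"], {"registered": 0, "total": 0})
    group["total"] += 1
    group["registered"] += int(record["registered"])

for gender, counts in sorted(by_group.items()):
    print(gender, f"{counts['registered']}/{counts['total']} registered")
# f 2/3 registered
# m 1/1 registered
```

The same grouping pattern works for any disaggregation dimension (age, size, location), provided that dimension was captured when the data were collected.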

4. Data collection: Select tools and instruments for data collection and analysis
At this stage, a quick audit will show what information is available through existing
documentation. Plans about what needs to be generated through project data collection and
how best to do this can be agreed. Table 5.3 presents a simple audit sheet for doing this.

Table 5.3: Auditing data needs and sources for evaluation

Aspect of project /      Is there sufficient       If no, what      What tools would be
evaluation criteria      information from the      information      best to use for
                         existing written          is needed?       capturing this
                         documentation? (Y/N)                       additional data?
Inputs / activities
Outputs
Outcomes
Impact
Relevance
Efficiency
Effectiveness
Sustainability
Other factors relevant to a project

Selecting tools for data collection and analysis should now be very straightforward, as this is
closely linked to the methodology. Some questions will be suited to collecting quantitative
data and others to process and more qualitative data. In previous sections, there is a
checklist rating the main data collection tools against various criteria.
The PM/TM quick checklist
1. Is all the data I need available from secondary sources?
2. Can I get partners to collect data?
3. How often should the various data sets be collected?
4. Do I know who is responsible for analyzing the data?
5. Do I know the how and who of communicating the analysis?

5. Timeframes: Plan clear time frames with milestones.


PM skills are vital in planning M&E work. Data collection needs to be undertaken at
different times: prior to and during project implementation, and at fixed points including at
and after the end of the project. It is useful to put this together as a timetable, such as a
Gantt chart (using software such as Microsoft Project). A Gantt chart can be used as a
checklist by both the M&E and implementation teams and should sit alongside the time frame
for overall project implementation. Where there are more complex needs, a review of the
minimum and maximum timeframes is useful, taking into account the time required to
tender, prepare documents for appointed consultants, and allocate time for briefings and
reporting.
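A Gantt-style M&E timetable can also be kept as plain data and rendered as text. The tasks, start months and durations below are invented for illustration; a real plan would come from the project's own milestones.

```python
# Each M&E task: (name, start month, duration in months) within a 12-month project.
# The entries are hypothetical.
tasks = [
    ("Baseline survey",      1, 2),
    ("Quarterly monitoring", 3, 9),
    ("Mid-term review",      6, 1),
    ("Endline survey",      11, 2),
]

for name, start, length in tasks:
    # One cell per month; '#' marks the months in which the task is active.
    row = "".join("#" if start <= m < start + length else "." for m in range(1, 13))
    print(f"{name:<22}{row}")
```

Even this crude text chart makes overlaps visible at a glance, which is the main value of a Gantt view for the M&E and implementation teams.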

Fig 5.2: Example timescale for planning the implementation of a large-scale evaluation



Reporting arrangements should also be made explicit. The work involved in ensuring all
partners and stakeholders are adequately engaged is easily underestimated. Multi-
component programs may operate on an entirely different timescale, taking place over several
years and involving several development partners. It is not unusual to find that an M&E
project in its own right is warranted in order to prepare and plan for M&E. A major feature of
this work is not just setting up a framework, but all the institutional factors surrounding it.
This includes building a reporting structure, engaging agencies in data collection, building
capacity, working with local survey firms, and especially engaging with the private sector by
getting them involved and using M&E outputs as a way of building support for reform. If the
PM is not involved in planning the M&E, the proposed timeframes are likely to suffer.
The PM quick checklist
1. Can I describe the milestones of the project in relation to the M&E needs?
2. Who needs to know the timeframe for evaluations?
3. Will there be multiple stakeholders/development partners involved?
4. What will be the time implications for commissioning external experts?
5. Who will sign off reports and documentation for communication?

6. Resources: Identify people and other resources for undertaking the M&E.
Working through steps 1-5 will result in a clear perspective on what form and level of skills and
experience will be needed for undertaking the proposed M&E work. Note that resources for
disseminating findings and experiences are not always put in place, and there is little point in
having developed all of the above if there is no opportunity to showcase the success.
The PM quick checklist
1. Which of the internal M&E team will be involved with working on this project?
2. Does there need to be any capability building undertaken for this to take place?
3. How will findings and learning be disseminated?
4. What tasks need to be undertaken by an external consultant – local or international?
5. Where will the funds come from?



Once the above has been agreed then it needs to be captured in some form of project
management framework for the M&E work showing tasks, responsibilities for partners,
internal stakeholders and external consultants.

Putting together an M&E budget


The cost of M&E is increasingly an issue. As development agencies explore more robust ways
of measuring development results, questions arise about the costs and efficiency of doing
M&E. Resistance to undertaking substantive evaluation activities, beyond a simple end-of-
project round-up, is often put down to cost, the argument being that resources used on M&E
would be better invested in the aid intervention itself to maximize benefits to those targeted.

The issue of cost is a valid and important concern for M&E and the Principles for Evaluation of
Development Assistance require the efficient undertaking of M&E as well as efficient project
delivery. The overall budget for and scope of M&E activities for any given project must bear
some relationship to the scale and scope of the aid intervention being assessed. Larger, more
complex projects addressing large populations of businesses and/or people will usually have
more extensive and hence more expensive M&E systems. Similarly, an innovative project may
warrant more effort and resources for M&E because new approaches have to be developed.
Likewise, a pilot activity may involve more intensive M&E work over a shorter period of time
in order to assess whether or not it should be 'rolled out' more widely.

How much should be allocated?


Once the contents of the M&E design have been established then everything needs to be
costed and brought together into a budget for M&E. Again this may involve an iterative
process. The budget has to balance the available resources for M&E against the needs of the
M&E framework and plan that have been put together.
What does an M&E budget typically include?
 Human resource – internal staff, including any training needed
 External consultants
 Materials, equipment
 Travel
 Data collection (baseline and follow-up)
 Data analysis
 Seeking and managing stakeholder involvement
 Reporting and communicating findings, internally and externally
 Printing

If the methods, tools and staffing options chosen exceed the available budget, then this will
need to be reviewed: either different, more restrictive choices have to be made on the
methods and tools to be used, or more resources need to be negotiated. The budget should be
benchmarked in three ways, against:
 the costs of other similar M&E activities;
 the M&E of similar projects; and
 the 'rules of thumb', i.e., an upper limit of 5% of the overall project budget, except for
experimental or more substantive projects, where a guide of nearer 10% is usually given.
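The rule of thumb above reduces to simple arithmetic. The sketch below, with invented figures, checks a proposed M&E budget against the 5% ceiling (or the nearer-10% guide for experimental projects).

```python
def check_me_budget(project_budget, me_budget, experimental=False):
    """Flag an M&E budget against the rule-of-thumb ceiling:
    5% of the overall project budget, or nearer 10% for experimental
    or more substantive projects."""
    ceiling = 0.10 if experimental else 0.05
    share = me_budget / project_budget
    return share, share <= ceiling

# Hypothetical figures: a $2,000,000 project proposing $120,000 for M&E (6%).
share, ok = check_me_budget(2_000_000, 120_000)
print(f"M&E share: {share:.0%}, within ceiling: {ok}")  # 6%, False

# The same allocation is within the guide for an experimental pilot.
share, ok = check_me_budget(2_000_000, 120_000, experimental=True)
print(f"M&E share: {share:.0%}, within ceiling: {ok}")  # 6%, True
```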

Who manages the budget?


The budget may not all be managed in one place or by one individual. As discussed, some
M&E activities (particularly monitoring) form part of the routine collection of data on the
activities and outputs of the reform and may be undertaken by partners or the project team.
However, computer programs or training may need to be developed to ensure accurate and
timely data gathering and recording, and this may be allocated to other budgets. An impact
assessment may be required and paid for by a specific donor rather than from the program.
All of these factors need to be taken into consideration when developing a budget and when
applying reporting 'rules of thumb'.

Traditionally, M&E budgets have been what might be termed 'outline budgets', primarily
concerned with evaluation activities and focused on covering the costs of end-of-project
evaluation and inputs from external consultants. The increasing focus on 'proving'
development results and the development of more detailed and sophisticated M&E practices
mean there is an imperative to put together more detailed M&E budgets and plans.



The PM quick checklist
1. Does my organization have a rule of thumb for M&E budget?
2. Will some of the M&E activities be undertaken by other stakeholders?
3. Have I included a budget allocation for dissemination?
4. Who holds what aspects of the budget?

Activity 1
Answer the following questions.
1. How much money should be allocated for M&E?
2. Define the approach to be used to monitor the project costs.

5.4. Implementing the M&E Plan


Once a program has been approved for implementation, the next stage is to set about
operationalizing the M&E activities. The first task will be to update the M&E framework and
plan and complete a more detailed program management framework, seeking to:
 Reflect any changes in the original time table;
 Detail M&E tasks and responsibilities identified and allocate to internal PM/M&E
officers;
 Prepare final TORs for any external consultant to co-conduct the M&E and agree
recruitment procedure and timetable; and
 Ensure M&E systems and reporting procedures and documentation are linked to project
reporting systems.

What are the key tasks for implementing the M&E plan?
The project manager has specific responsibilities for implementation. These are likely to
include:
 Briefing of internal PM/M&E officers on overall plan and their key role in monitoring and
evaluation work.
 Selection and briefing of external consultants for periodic evaluation work.
 Ensuring any baseline survey work is initiated.
 If adopting a quasi-experimental M&E approach, preparation needs to be made for the
identification and establishment of control groups alongside confirmation of the main
target group audience for the reform work.
 Ensuring monitoring systems for the capturing and recording of inputs, activities,
processes and outputs are put in place.
 Periodic data collection for the evaluation of outputs and outcomes is put in place.
 Periodic data collection for the impact assessment.
 Review and updating of the log frame.
 Establishing forums for stakeholders.
 Identifying other interested parties.
 Developing a communications plan.

How should the data be recorded?


Recording monitoring data on inputs, activities and outputs is usually straightforward and is
guided by the project management and reporting systems for the project. This usually entails
collating numbers and reporting performance against targets set in the project document,
and requires no special tools outside the usual management reporting system or expertise
outside the project team. How often the indicators and monitoring data are updated will
depend on the nature of the reform, what is being measured and at what point in the project
this is happening. Some monitoring indicators may be measured monthly, quarterly and/or
annually.
 How many events have been held this month, how many officials trained this quarter?
Evaluation indicators discussed in previous sections are usually measured against milestones
over longer periods.
 What has been the reduction in the time and cost of business registration since the
reduction in procedures last year?
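As the text notes, collating such numbers against the targets set in the project document needs no special tools; a plain script suffices. The indicator names, targets and actuals below are invented.

```python
# Monthly/quarterly monitoring: actuals collated from activity records,
# compared against targets set in the project document (hypothetical values).
indicators = {
    "events held this month":         {"target": 12, "actual": 10},
    "officials trained this quarter": {"target": 80, "actual": 95},
}

status = {}
for name, v in indicators.items():
    achieved = v["actual"] >= v["target"]
    status[name] = "on track" if achieved else "behind"
    print(f"{name}: {v['actual']}/{v['target']} - {status[name]}")
```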

Recording data for quasi-experimental methodologies and large-scale surveys can require
specialist tools and expertise. Typically a statistical package is required to store and handle
data.



The PM quick checklist
1. How does the data relate to the outcomes of the program?
2. What aspect of the project does this data represent?
3. What biases should be noted?
4. How can the data be best presented to be understood and useful to all?
5. What are the shortcomings of the data and the data collection method?

How should findings be reported?


Mechanisms for reporting monitoring findings should be identified and agreed up front. Most
reporting will be undertaken through the organization’s project management systems.
Usually this will involve contributing to regular (monthly /quarterly) monitoring reports
together with periodic annual and milestone reporting.
The PM quick checklist
1. Will the proposed reporting system fulfill the information needs of the internal and
external users?
2. Is it adapted to the resources and the capacities of the program and its environment?
3. Will it fulfill both the ‘proving’ role of results against goals and the ‘improving’ role of
sharing learning and analysis?
4. Am I reporting the right things at the right time?

5.5. Analyze M&E Findings


Data is collected from M&E activities throughout the project and hence analysis of the
findings should be undertaken alongside this work. Undertaking analysis on an ongoing basis
and discussing findings as they are reported is important if the informing and learning roles
of M&E are to be achieved. The tools needed to undertake the analysis of the data collected
through M&E activities will depend upon and reflect the methodology adopted, the range of
data collection instruments used and the volume and nature of the data collected.

Data needs to be analyzed for different groups, compared between groups and over time
periods. External expertise may be required for the analysis of data, both in terms of
guidance as to what tools should be used and related to this, how data should be recorded
and stored as well as undertaking the actual analysis once the data has been collected.



It is typical to have four or five points in a project when there will be a need to analyze and
report results, in addition to the regular M&E reporting undertaken as part of project
management. Key points of analysis and reporting take place as follows:

 First stage baseline and mapping work. If a project involves undertaking a baseline or
mapping exercise then the findings from this work need to be analyzed and reported
quickly because they form an integral base from which the project proceeds and will
often determine what tasks will be progressed and which will not.
 Pilot phases or pilot work. A project may involve undertaking a pilot phase, where
something will be tested out with a group or a particular locality before the project is
‘rolled out’ further. Again it is important that the analysis of M&E data from this pilot is
undertaken thoroughly and quickly, as the findings from this are needed to inform the
progression of the project.
 Mid-term or periodic evaluative reviews - key findings from periodic evaluation work
usually from the midterm timeframe of the project onwards need to be analyzed and
reported in a timely manner as they illustrate whether the outputs of the project are
being achieved or not and whether process issues are progressing. The findings from
these mid-term evaluations inform the ongoing validity of the M&E plan for assessing
outcomes and impact for the project. If initial findings show that the project is not
achieving, or is achieving in an unexpected way, then the M&E plan may need to be
reviewed and updated for the end-of-project evaluation activities. This analysis of
project/program results is based on objectives and indicators, results hypotheses and
results chains, data and information obtained from the results oriented monitoring.
 End of project evaluation. This is usually the most substantive analysis as it is bringing all
of the above together, as well as undertaking end of project evaluation data collection
analysis and reporting. This is the key time of activity for M&E work if findings are to be
processed and reported in a timely manner after the end of the project. Therefore
resources need to have been in place and tasks managed well during this period. This
evaluation will always involve external people – colleagues from the central evaluation
department and/or external consultants. Do not underestimate the time needed to bring
together the summative M&E data and findings.
 Post-project evaluation. Sometimes there is provision in the project for there to be an
evaluation after it has ended – a year or more afterwards - where the focus is on impact
assessment. Usually this is undertaken by a specialist within the organization and/or
external consultants, who are contracted to undertake this work, develop the analysis
and presentation of the results.

How to write up an evaluation


 Keep it simple.
 Make sure that the right information reaches the right people.
 Use a form of communication that catches the attention of the intended audience
 Communicate in a way that makes the information as understandable as possible to
each particular audience.
 Present the information on time.
 Involve the target group in deciding what and how to communicate.
 Use a standardized format to allow comparison.
 Indicate the reliability of the data.

The PM quick checklist


1. How many times will an analysis need to be prepared?
2. Who will prepare it?
3. How many versions will we need?
4. Should I use a standardized format?
5. How can the finding contribute to the learning?

5.6. Communicating M&E Findings


While M&E findings are regularly reported through project management systems as noted
above, it is not unusual to find that they are not communicated beyond this, either internally
or externally. It is often the case that those involved in M&E, especially impact assessment
activities, devote a lot of time to the design and implementation of M&E systems and not
enough time to considering how their findings will be used.



If M&E practice is to fulfill both its learning and proving roles, and its findings are going to
influence development thinking, policy and practice, then it is important to have a sound
dissemination strategy in place: one that provides clear guidance, practical examples and
case studies on implementation, and recommendations for good practice.
The PM quick checklist
1. When is the best time to communicate M&E findings?
2. What is the message?
3. Who is the audience?
4. What is the best way to communicate?

When is the best time?


For the timing of findings there is a very simple rule: The longer the length of time between
data collection and presentation of findings, the lower the impact for ‘improving’ practice.
This is especially the case for external impact studies. Another key aspect of timing, beyond
the imperative 'don't delay', is to think about who will receive the results and when is a good
time for them to hear and learn about things. Some issues to think about:
1. Institutions such as government tend to have a regular pattern of meetings and events.
Many of these fit into an annual cycle – use existing publications, committee meetings
and planned events to disseminate findings;
2. Time is money for many businesses and so when trying to disseminate and engage with
the private sector try to use a mechanism of delivery that they already use as part and
parcel of their business life - Business Association meetings and newsletters,
information sheets at registration offices or in banks;
3. Try to avoid busy times of year for the target groups - the end of March is the end of the
financial or tax year in many parts of the world. In rural areas harvest time might not be
a good time to engage agricultural enterprises.

What is the message?


The effective communication of M&E findings is critical for both the proving and improving
roles of M&E work.



1. Proving: Stakeholders want to know if the project has succeeded. Has it delivered what
it set out to achieve? If so in what way and if not why not? Sharing findings, especially
success with external stakeholders, not only validates the project but also helps to build
consensus and support for the reform process and private sector advocacy;
2. Improving: What did the experience of running the project show about that particular
form of intervention? What lessons can be taken for implementing this type of activity
elsewhere or with different target groups?

Who is the audience?


There are a wide range of stakeholders who will be interested in the M&E findings: internally,
externally with immediate stakeholders, and with a broader audience. The box below outlines
four groups of typical stakeholders, all of whom are important to the effective performance of
a business enabling environment (BEE) intervention but relate to it from different
perspectives. Their role and position in relation to the project will determine the type of
messages they are interested in hearing.
Typical Audience Groups
The Accountable – those to whom the reform measure is accountable in operational and
cost terms. Who has instigated or paid for the reform measure? They will want to know that
their money has been well spent and the effort has been worthwhile. Those accountable
could be development partners, government ministers, government officers, and /or key
business organizations.

The Beneficiaries - those whose lives were to be made better by the project. Is the market
now a better place for doing business? They could be the private sector and the enterprises
themselves, reached directly or through associations, chambers and trade associations.

The Implementers – those who are involved in managing and implementing the day-to-day
activities that have been under reform. Can targets now be met more effectively and
efficiently? They would be primarily government officers, compliance agency staff and
business support agencies to a lesser extent.

Other Interested parties – what do the findings tell other groups about the project? Is this a
good place to invest in? Is setting up a business straightforward? How long does it take to
register a business now? The findings may be of interest to researchers, business
development practitioners, consultants, potential business owners or investors – both in the
country and in other countries.

How best to communicate?


How are the findings going to be presented, and how will people find out about them?
Different stakeholders, by the very fact of what they do and where they are, will use different
means of communication to find out about things. While government ministers, officials and
development partners will tend to be comfortable with detailed written reports, other
stakeholders, such as owners of small businesses, are unlikely to have the time, the literacy
skills or indeed the interest to wade through what they would regard as boring paperwork,
even if they had access to full technical reports.

The lesson is to use a variety of forms of communication for disseminating evaluation
findings, from formal written reports through electronic newsletters to conferences and
competitions; there is a wide range of media through which to communicate M&E findings
and good practice. Table 5.4 below gives some thoughts on what and how to disseminate
M&E findings to the four stakeholder groups discussed above.

Table 5.4: Disseminating Findings to different Audience Groups

The Accountable – development partners, government ministers, government officers, key
business organizations.
Key message – easily digestible facts and figures about what has been achieved, proving
change and relating it to the intervention.
How to disseminate:
 Written reports.
 Executive summary briefing notes.
 Presentations.
 Discussions over 'strategic cups of coffee'.
 Official visits to the 'one stop shop' out of town.
 Leaflets and promotional material.
 An annual 'State of the Project' report.
 Web sites and electronic reports.
 Media reports showing changes, heralding success – newspaper, radio, TV.

The Beneficiaries – the private sector, either directly or through their associations,
chambers and trade associations.
Key message – doing business is now easier, quicker and cheaper – so do it!
How to disseminate:
 Briefing notes.
 Presentations to associations.
 Official visits to the 'one stop shop' out of town.
 Briefing note of SME feedback.
 Leaflets and promotional material.
 Media reports/programs showing changes, heralding success – newspapers, radio, TV.
 Newsletters – hard copy and electronic.
 Web sites and electronic reports.

The Implementers – primarily government officers, compliance agency staff and business
support agencies.
Key message – key milestones achieved; where efforts have made a difference.
How to disseminate:
 Written reports.
 Committee papers.
 Briefing notes for staff meetings.
 Presentations to staff.
 Feedback at staff appraisals.
 Organization intranet/website.
 Leaflets and promotional material.
 Media reports – newspaper, radio, TV.
 Internal staff newsletters.

Other Interested Parties – researchers, business development practitioners, consultants,
potential business owners or investors in the country, the media, development partners and
governments elsewhere.
Key message – a successful project has been achieved and the project is better.
How to disseminate:
 Written reports.
 Executive summary briefing notes.
 Presentations at conferences/business seminars.
 Discussions over 'strategic cups of coffee'.
 Official visits to the 'one stop shop' out of town.
 Leaflets and promotional material.
 Media reports showing changes, heralding success – newspaper, radio, TV.
 Research journal papers.
 Case studies.
 An annual 'State of the Project' report.
 Newsletters – hard copy and electronic.
 Web sites and electronic reports.

How to ensure inclusion?


Throughout the whole process of designing, implementing and managing the practice of
project M&E there should be an ongoing diversity/inclusion prompt that operates at each
stage to ensure that issues and concerns of diversity and inclusion are considered and
addressed where ever possible.



Most projects are what might be termed mainstream interventions: they are aimed at
private sector development in general. However, a multitude of different stakeholders make
up, or are involved in, the private sector. Not all of these stakeholders experience the
business environment (BE) in the same way, with some finding it more 'disabling' or
'enabling' than others. Similarly, not all groups stand equally in having their voices and
needs heard.

Activity 2
Answer the following questions.
1. Reporting can be costly in both time and resources and should not become an end in
itself, but serve a well-planned purpose. Therefore, it is critical to anticipate and carefully
plan for reporting. Briefly discuss key reporting criteria to help ensure its usability.
2. An essential condition for well-formulated recommendations and action planning is to
have a clear understanding and use of them. Differentiate among the different data
analysis terms, such as findings, conclusions, recommendations and actions.
3. As with the reporting formats themselves, how reporting information is disseminated will
largely depend on the user and purpose of information. There are several media to share
information but describe some of them.

4. The overall purpose of the M&E system is to provide useful information. Therefore,
information utilization should not be an afterthought, but a central planning
consideration. There are many factors that determine the use of information. Identify and
describe stakeholder informational needs.



Summary

 M&E should be fully integrated into project cycle and project management systems from
the start.
 PMs must have an integral role in designing and planning M&E, even though they may not
be responsible for all M&E tasks.
 Identify the key questions to be asked and answered by the M&E early in the process.
 Milestones and operational plans should be developed in a participatory way with
representatives of the partner organizations.
 Effective communication can build support for the process of change, accelerate
acceptance and contribute to the sustainability of a reform.



Self Assessment Questions-5

Dear learner, if you understood this unit very well, attempt the following questions and
evaluate yourself against the answers given at the end of this unit.

1. An M&E plan is a table that builds upon a project/programme's log frame to detail key
M&E requirements for each indicator and assumption. The M&E plan is sometimes given
different names by various users, such as an "indicator planning matrix" or a "data
collection plan". While the names (and formats) may vary, the overall function remains the
same: to detail the M&E requirements for each indicator and assumption. Develop your
own M&E activity planning table for a particular project.
2. Most M&E reporting will be undertaken through the organization’s project management
systems. However, the timing of reporting should be planned. Develop a Reporting
schedule for any virtual project.
3. Different stakeholders provide guidance for project/programme reporting format. The
purpose of the reporting format is to emphasize key information to inform
project/programme management for quality performance and accountability. As a
stakeholder of a project, propose any project/programme management report format.
4. It is important that report formats and content are appropriate for their intended users.
How information is presented during the reporting stage can play a key role in how well
it is understood and put to use. Briefly discuss some practical tips to help make your
written reports more effective.



Answer Key to Activities and Self Assessment Questions

Activities

Activity 1

1. There is no set formula for determining the budget for a project/programme’s M&E
system. During initial planning, it can be difficult to determine this until more careful
attention is given to specific M&E functions. However, an industry standard is that
between 3 and 10 per cent of a project/programme’s budget be allocated to M&E. A
general rule of thumb is that the M&E budget should not be so small as to compromise
the accuracy and credibility of results, but neither should it divert project/programme
resources to the extent that programming is impaired. Sometimes certain M&E
functions, especially monitoring, are included as part of the project/programme’s
activities. Other functions, such as independent evaluations, should be specifically
budgeted.
2. Monitoring project costs enables an assessment of whether the project is operating
within the approved budget. One of the most common methods of monitoring project
costs is simply to compare the amount spent on producing a deliverable at a point in
time with the budgeted spend at the same point. This, however, makes the implicit
assumption that production of the deliverable is in line with the schedule. This
limitation can be overcome through the use of a technique called Earned Value, which
takes a three-way view of planned achievement and cost against actual achievement
and cost. This technique is discussed in Project Cost Management. The Earned Value
approach may give the Project Office and Project Manager advance warning that an
individual deliverable may not be produced within the expected budget, or that the
project as a whole may not deliver within an agreed budget. Typically, there will be a
pre-defined tolerance within which there is no need to escalate or formally report a
budget variation. An alternative approach is to adopt a process whereby the producer
of each deliverable periodically updates the estimate of the time and/or effort required
to complete an activity (often referred to as Estimate to Complete, or ETC). By
comparing the project budget to the sum of effort to date and ETC, there is again
advance warning of possible overrun. While there will always be concerns over work
that is over budget, the Project Manager should not be complacent about potential
underspend, as this may be indicative of other issues such as poor quality, incomplete
deliverables, poor estimates or incomplete cost recording. Additionally, any known
factors that are likely to impact future costs should be included in this monitoring
process. For high-risk projects, monitoring a budget line item for project contingency
should be considered. Background detail for developing project costs and specific
budget line items is discussed in Project Cost Management.
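The three-way Earned Value comparison described above rests on three quantities: planned value (PV), earned value (EV) and actual cost (AC). The sketch below applies the standard EVM formulas (CPI = EV/AC, SPI = EV/PV, EAC = BAC/CPI) to invented figures.

```python
def earned_value(bac, pv, ev, ac):
    """Standard Earned Value indices at a reporting date.
    bac: budget at completion; pv: planned value to date;
    ev: earned value (budgeted cost of work actually performed);
    ac: actual cost to date."""
    cpi = ev / ac    # cost performance index (<1 means over budget)
    spi = ev / pv    # schedule performance index (<1 means behind schedule)
    eac = bac / cpi  # estimate at completion, assuming current cost efficiency holds
    etc = eac - ac   # estimate to complete
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "ETC": etc}

# Hypothetical reporting point on a $1m project: $500k of work was planned,
# $400k worth has been earned, and $450k has actually been spent.
print(earned_value(bac=1_000_000, pv=500_000, ev=400_000, ac=450_000))
```

Here CPI below 1 warns of cost overrun and SPI below 1 warns of schedule slippage, which is exactly the "advance warning" role the answer describes.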

Activity 2

1. Criteria of good reporting
Relevant and useful. Reporting should serve a specific purpose/use. Avoid excessive,
unnecessary reporting – information overload is costly and can burden information
flow and the potential of using other more relevant information.
Timely. Reporting should be timely for its intended use. Information is of little value
if it is too late or infrequent for its intended purpose.
Complete. Reporting should provide a sufficient amount of information for its
intended use. It is especially important that reporting content includes any specific
reporting requirements.
Reliable. Reporting should provide an accurate representation of the facts.
Simple and user-friendly. Reporting should be appropriate for its intended audience.
The language and reporting format used should be clear, concise and easy to
understand.
Consistent. Reporting should adopt units and formats that allow comparison over
time, enabling progress to be tracked against indicators, targets and other agreed-
upon milestones.
Cost-effective. Reporting should warrant the time and resources devoted to it,
balanced against its relevance and use.

2. Comparing data analysis terms: findings, conclusions, recommendation and actions.



Comparing data analysis terms: findings, conclusions, recommendations and actions

Finding. A factual statement based on primary and secondary data.
Examples: community members reported daily income is below US$ 1 per day;
participants in community focus group discussions expressed that they want jobs.

Conclusion. A synthesized (combined) interpretation of findings.
Example: community members are materially poor due to a lack of income-generating
opportunities.

Recommendation. A prescription based on conclusions.
Example: introduce micro-finance and micro-enterprise opportunities for community
members to start up culturally appropriate and economically viable income-generating
businesses.

Action. A specific prescription of action to address a recommendation.
Examples: by December 20x1, form six pilot solidarity groups to identify potential
micro-enterprise ideas and loan recipients; by January 20x1, conduct a market study
to determine the economic viability of potential micro-enterprise options; etc.

3. Key mediums of information dissemination


Key mediums of information dissemination
1. Print materials distributed through mail or in person.
2. Internet communication, e.g. e-mail (and attachments), websites, blogs, etc.
3. Radio communication, including direct person-to-person (ham) radio as well as
broadcast radio.
4. Telephone communication includes voice calls, text-messaging, as well as other
functions enabled on a mobile phone.
5. Television and filmed presentations.
6. Live presentations, such as project/programme team meetings and public
meetings.



4. Key categories of information use

Key categories of information use


1. Project/programme management – inform decisions to guide and improve
ongoing project/programme implementation.
2. Learning and knowledge-sharing – advance organizational learning and
knowledge-sharing for future programming, both within and external to the
project/programme’s implementing organization.
3. Accountability and compliance – demonstrate how and what work has been
completed, whether it met any specific donor or legal requirements, and whether it
met other international standards.
4. Celebration and advocacy – highlight and promote accomplishments and
achievements, building morale and contributing to resource mobilization.

Self Assessment Question-5

1. M&E activity planning table


M&E activities/events Timing/frequency Responsibilities Estimated budget
Baseline survey
End line survey
Midterm evaluation
Final evaluation
Project monitoring
Context monitoring
Beneficiary monitoring
Project/programme
management reports
Annual reports
Donor reports
M&E training
Etc.
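Where the M&E plan is kept electronically, the planning table above can be represented as simple structured data and exported for sharing. All timings, responsibilities and budget figures below are invented for illustration; only the activity names come from the template.

```python
import csv
import io

# Hypothetical rows for the M&E activity planning table above.
activities = [
    {"activity": "Baseline survey", "timing": "Month 1",
     "responsibility": "M&E officer", "budget_usd": 5000},
    {"activity": "Midterm evaluation", "timing": "Month 12",
     "responsibility": "External consultant", "budget_usd": 8000},
    {"activity": "Final evaluation", "timing": "Month 24",
     "responsibility": "External consultant", "budget_usd": 10000},
]

# Writing the plan out as CSV keeps it easy to share with stakeholders.
buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["activity", "timing", "responsibility", "budget_usd"])
writer.writeheader()
writer.writerows(activities)

# Structured rows also make the estimated M&E budget easy to total.
total_budget = sum(row["budget_usd"] for row in activities)
print(total_budget)  # 23000
```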



2. Reporting Schedule

Report type/event  Frequency (deadlines)  Audience/purpose  Responsibility  Format/outlet
3. Project/Programme Management Report


Project/programme Management Report
1. Project/programme information. Summary of key project/programme information,
e.g. name, dates, manager, codes, etc.
2. Executive summary. Overall summary of the report, capturing the project status
and highlighting key accomplishments, challenges, and planned actions.
3. Financial status. Concise overview of the project/programme’s financial status
based on the project/programme’s monthly finance reports for the reporting
quarter.
4. Situation/context analysis (positive and negative factors). Identify and discuss any
factors that affect the project/programme’s operating context and
implementation (e.g. change in security or a government policy, etc), as well as
related actions to be taken.
5. Analysis of implementation. Critical section of analysis based on the objectives as
stated in the project/programme’s logframe and data recorded in the
project/programme indicator tracking table (ITT).
6. Stakeholder participation and complaints. Summary of key stakeholders’
participation and any complaints that have been filed.
7. Partnership agreements and other key actors. Lists any project/programme
partners and agreements (e.g. project/programme agreement, MoU), and any
related comments.
8. Cross-cutting issues. Summary of activities undertaken or results achieved that
relate to any cross-cutting issues (gender equality, environmental sustainability,
etc).
9. Project/programme staffing – human resources. Lists any new personnel or other
changes in project/programme staffing, and notes whether any management support is
needed to resolve issues.
10. Exit/sustainability strategy summary. Update on the progress of the
sustainability strategy to ensure the project/programme objectives will be able to
continue after handover to local stakeholders.
11. PMER status. Concise update of the project/programme’s key planning,
monitoring, evaluation and reporting activities.
12. Key lessons. Highlights key lessons and how they can be applied to this or other
similar projects/programmes in future.
13. Report annex. Project/programme’s ITT and any other supplementary information.

4. Report writing tips


Report writing tips
 Be timely – this means planning the report-writing beforehand and allowing
sufficient time.
 Involve others in the writing process, but ensure one focal person is ultimately
responsible.
 Translate reports to the appropriate language.
 Use an executive summary or project overview to summarize the overall project
status and highlight any key issues/actions to be addressed.
 Devote a section in the report to identify specific actions to be taken in response
to the report findings and recommendations and the respective people
responsible and time frame.
 Be clear and concise – avoid long sentences, jargon, excessive statistics and
technical terms.
 Use formatting, such as bold or underline, to highlight key points.
 Use graphics, photos, quotations and examples to highlight or explain
information.
 Be accurate, balanced and impartial.
 Use logical sections to structure and organize the report.
 Avoid unnecessary information and words.
 Adhere to any corporate formats, writing usage/style guidelines.
 Check spelling and grammar.


