
MIC-EDF UNIT MANUAL WITH GUIDELINES
ON RESULTS-ORIENTED MONITORING AND
EVALUATION OF PROJECTS AND
PROGRAMMES (EDF AND STABEX)

Prepared by:

Dr Luc V. Zwaenepoel
EDF Unit with the support of Particip LT-TA
Ministry of International Co-operation

December 2010, Khartoum, Sudan

These guidelines were prepared by the European Development Fund (EDF) Unit at the Ministry of International Co-operation (MIC) with technical assistance financed by the EDF. The content is the sole responsibility of the Unit and the Consultant. It is inspired by and in line with the Methodology on Results-Oriented Monitoring (ROM) and the Evaluation Methodology for European Commission's External Assistance, and all related training materials.

Background and purpose of manual with guidelines
The European Development Fund (EDF) Unit at the Ministry of International Co-
operation (MIC) is mandated to provide capacity to EDF/STABEX projects and line
ministries in Project Cycle Management (PCM) to enable them to effectively prepare,
manage and implement European Union (EU) and other development assistance
programmes/projects.

The purpose of this Results-Oriented Monitoring and Evaluation manual and guidelines is to provide EDF/STABEX-funded project staff and hosting line ministries and states with the necessary information for designing, managing and supporting the application of a Results-Oriented Monitoring and Evaluation system for EDF/STABEX-funded development programmes implemented through line ministries and states.

After two years of conducting EDF Unit implementation supervision and monitoring missions to EDF/STABEX projects implemented by line ministries in the five states of Blue Nile, Southern Kordofan, Red Sea, Kassala and Gedariff, the EDF Unit realised that EDF/STABEX projects require guidelines to facilitate a structured way of designing and implementing Results-Oriented Monitoring and Evaluation. Weaknesses have been observed at Country Strategy and programme/project levels in designing and preparing an M&E plan and budget and in ensuring adequate reporting and feedback. Limitations have also been noted in defining and carrying out roles and responsibilities in monitoring, reporting and evaluation.

As the mid-term reviews and evaluations of EDF/STABEX projects all follow Results-Oriented criteria in assessing the projects, the EDF Unit considers it important that projects institute and operationalise Results-Oriented M&E systems at the national and project levels. The guidelines can also be used as a training resource in Monitoring, Evaluation and Reporting for line ministries, states and EDF/STABEX-funded projects.

The manual will assist EDF/STABEX projects and line ministries at national and state level to use an updated Log frame matrix in designing M&E systems that can track agreed indicators of achievement of the project purpose and results.

ABBREVIATIONS

ACP Africa, Caribbean and the Pacific group of countries

CBS Central Bureau of Statistics

CBSA II Capacity Building for the Sudanese Administration II

CRIS Common RELEX Information System (EC)

CSP Country Strategy Paper

CV Curriculum Vitae

DNAO Deputy National Authorising Officer for the EDF

EC European Commission

ECD Delegation of the European Commission

EDF European Development Fund

ESAF Food Security and Agricultural Projects Analysis Services (ESA/FAO)

EU European Union

EUR Euro – the currency of the EU

FAO Food and Agriculture Organisation of the UN

FMARF Federal Ministry of Animal Resources and Fisheries

FMoGE Federal Ministry of General Education

GoNU Government of National Unity

IR Inception Report

LF Logical Framework

LFA Logical Framework Approach

MDG Millennium Development Goals

MIC Ministry of International Co-operation

M&E Monitoring and Evaluation

MoA Ministry of Agriculture

MoAF Ministry of Agriculture and Forestry

MoFNE Ministry of Finance and National Economy

MTE Mid-Term Evaluation

MTR Mid-Term Review

NAO National Authorising Officer of the EDF

NGO Non-Governmental Organisation

NIP National Indicative Programme

NSA Non State Actor

OLAS On Line Accounting System (from EC) [Obsolete; has been replaced by CRIS]

OO Overall Objective

OVI Objectively Verifiable Indicators

PCM Project Cycle Management of the EC

PE Programme Estimate

PFM Public Finance Management

PLA Participatory Learning and Action

PO Programme Officer

PP Project (Programme) Purpose

PSC Project Steering Committee

ROM Results-Oriented Monitoring by EC

SIFSIA-N Sudan Institutional Capacity Programme: Food Security Information for Action (North)

SPSP Sector Policy Support Programme

STABEX EC System for the Stabilisation of Export Earnings

TA Technical Assistance

TNA Training Needs Assessment

ToR Terms of Reference

ToT Training of Trainers

UNDP United Nations Development Programme

UNICEF United Nations Children’s Fund

WLAN Wireless Local Area Network

Table of contents

Overview and Background
Section 1: The Design of a Results-Oriented Monitoring and Evaluation System
Section 2: Baseline Studies and the system of indicators for results
Section 3: Toolkit for monitoring and evaluation
Section 4: Preparing the evaluation process and the drafting of Terms of Reference
Section 5: From Evaluation and Monitoring questions to Indicators
Section 6: Results-Oriented Monitoring (ROM) for EC external assistance (projects and programmes)
Section 7: Country Programme Evaluation
Glossary of Monitoring and Evaluation Terms

Overview and Background

Preamble

The manual is part of the guidelines to assist EDF and STABEX funded programmes and projects in
systematically designing, managing and implementing an M&E system and full reporting. The M&E
system and the reporting practices are embedded in the institutional framework of line ministries in
Sudan.

This manual closely follows the EC ROM methodology and the evaluation methodology for European Commission external assistance. It focuses on the systematic approach to Results-Oriented Monitoring and Evaluation, the planning and management of the process, and the tools and methods used.

The manual has the following sections:

Section 1 describes the design of a Monitoring and Evaluation system;
Section 2 explains the use of baseline studies and the system of indicators;
Section 3 provides a toolkit for data collection in monitoring and evaluation;
Section 4 describes the preparation of an evaluation and the drafting of Terms of Reference;
Section 5 describes the process of going from evaluation questions to indicators;
Section 6 introduces the EC ROM system (external monitoring);
Section 7 gives an introduction to Country Programme Evaluation: evaluation questions, indicators and judgment criteria;
Glossary of Monitoring and Evaluation terms;
Useful websites and sources.

What is Results-based management?

When the term Results-Oriented Monitoring and Evaluation is used, the focus is on an assessment of the results-based management of projects and programmes. Introducing a results-based approach in project and programme management means improving management (aid) effectiveness and accountability by defining realistic expected results (Log frame analysis), monitoring progress towards the achievement of expected results, integrating lessons learned into management, and reporting on performance.

Definitions

Results-Oriented Monitoring or ROM is the systematic and continuous collection, analysis and use of information for the purposes of management and decision making. Monitoring systems provide information to the right people at the right moment to help them make informed decisions. Monitoring provides an early warning system which allows for timely and appropriate intervention if a project is not adhering to its plan.

Evaluation is a value judgment concerning a public intervention with reference to explicit norms and criteria. It concentrates on needs, relevance, results and impacts (see the Glossary of monitoring and evaluation terms).
Evaluation encompasses control and monitoring. These latter activities, like performance audit, study the implementation process and direct effects; evaluation, by contrast, puts special emphasis on the production of results and impacts and how these were obtained (effectiveness). Evaluation also addresses questions of relevance, utility and sustainability.
Evaluation should be applied to all activities financed by the EC, in particular those directed to external assistance.

Audit is the verification of the legality of procedures and of the regularity with which resources are used. The concept covers traditional financial audit but, increasingly, also performance audit, the latter being close to evaluation. The focus is on identifying errors and malfunctions and on judging according to criteria and general standards that are known and specified in advance. This process also makes it possible to compare different performances.

Reporting

Linking Monitoring and Evaluation to the logical framework

Monitoring and Evaluation are carried out against the logical framework (Log frame) of a project or programme. A Log frame explains the logic of the intervention (inputs-outputs-results-objectives); progress and results are measured by Objectively Verifiable Indicators (OVIs) over time. A ROM exercise focuses on needs (relevance, quality of design), inputs and outputs (efficiency) and objectives (effectiveness). Results-Oriented Evaluations focus more on precise questions such as aid effectiveness, impact and sustainability. Results-Oriented Monitoring can only address the potential sustainability and future impact of on-going projects and programmes.

The Logic of an Intervention

Systematic approach in Results-Oriented Monitoring

The Standard Logical Framework Matrix and how it relates to M&E


The primary purpose of M&E is to measure the degree to which an operation design is implemented
as planned and how successfully it achieves its intended results. The operation design describes how
inputs and activities will result in outputs delivered, and how the operation designers believe these
outputs will, in turn, result in the desired outcomes and impacts.
The relationship between each of these levels is described in a logical framework hierarchy for the
operation and represents a hypothesis concerning how the operation, starting with the initial resources
or inputs that are available, will bring about the desired results. When a results-based approach to
design is used, the desired outcomes or impacts are identified first, then the outputs needed to achieve
those outcomes, and then the inputs and activities needed to deliver those outputs. The logical
framework approach produces a matrix which combines the concepts of Results-Based Management
(RBM), results-based operation design and M&E.

The Main Contents of the Logical Framework Matrix


The first and fourth columns articulate operation design and assumptions, while the second and third
columns outline the M&E performance measurement indicators and the means to test whether or not
the hypothesis articulated in the operation design holds true.
Column 1: This column outlines the design or internal logic of the operation. It incorporates a
hierarchy of what the operation will do (inputs, activities and outputs) and what it will seek to achieve
(purpose and goal).

Column 2: This column outlines how the design will be monitored and evaluated by providing the
indicators used to measure whether or not various elements of the operation design have occurred as
planned.
Column 3: This column specifies the source(s) of information or the means of verification for
assessing the indicators.
Column 4: This column outlines the external assumptions and risks related to each level of the
internal design logic that is necessary for the next level up to occur.
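
To make the four-column structure concrete, the sketch below represents one logframe row as a small Python data structure. This is purely illustrative: the field names and the sample purpose-level row are assumptions for the example, not a prescribed EC format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LogframeRow:
        level: str                        # goal / purpose / output / activity
        intervention_logic: str           # column 1: design or internal logic
        indicators: List[str]             # column 2: OVIs to measure achievement
        means_of_verification: List[str]  # column 3: where indicator data come from
        assumptions: List[str]            # column 4: external conditions for the next level up

    # Hypothetical purpose-level row
    purpose = LogframeRow(
        level="purpose",
        intervention_logic="Improved household food security in the target states",
        indicators=["% of households with year-round food access (baseline vs. year 3)"],
        means_of_verification=["household survey", "CBS statistics"],
        assumptions=["rainfall within normal range", "market access maintained"],
    )
    print(purpose.intervention_logic)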

Systematic approach in the Evaluation process

Four main characteristics of the Monitoring and the Evaluation of projects
and programmes (EDF and STABEX financed)

1. Systematic approach

Monitoring and Evaluation can be internally or externally organised. The approach (planning, implementation, communication and reporting of results and recommendations, follow-up) is standard and used for all EC-financed projects and programmes. The systematic approach for EDF- and STABEX-funded projects and programmes is therefore standard in most ACP countries. The same applies to ROM, the external monitoring: a ten-year-old system that is outsourced to independent consultants. For internal monitoring there is systematic six-monthly reporting (the internal monitoring sheet) by the task manager in the EU Delegation and the Project Management Units.
The reporting structure for progress on projects and programmes consists of the well-known inception reports, progress reports, ad hoc reports, final reports, mid-term review reports, ROM reports, and final and ex post evaluation reports. All these types of reporting are described in the sections and in the Glossary.
It is important to stress that terms like results, outputs, outcome, impact and potential sustainability (durability) must be well understood, as they are used consistently in EC guidelines. Therefore the logical frameworks of programmes and projects, and their periodic revision, need to focus on outputs and outcomes beyond the immediate project activities and deliverables of external technical assistance.

2. Log frame and the use of five criteria

The framework for Monitoring and Evaluation is the use of the project Log frame analysis and the
link between 5 criteria and the content of the Log frame.

• Relevance: the relationship between needs and objectives

• Effectiveness: whether objectives have been reached and expected results attained

• Efficiency: whether effects have been obtained at optimal (or, in the absence of a frame of reference, reasonable) cost

• Sustainability: do effects last after the intervention has stopped?

• Utility-Impact: do effects correspond to the needs and problems to be solved?

3. System of indicators of results and outcome (all definitions of Monitoring and Evaluation
terms can be found in Glossary)

A second framework is the system of indicators to measure progress and change. The use of a baseline study at the start of the intervention can provide Objectively Verifiable Indicators to measure progress and results over time. Indicators can be organised in different ways, namely as a function of:

• How they process information: elementary, derived and composed indicators

• The comparability of information: specific or generic indicators; key indicators

• The scope of the information: context indicators, programme indicators

• The stage of the intervention they refer to: inputs, outputs, results, impacts

• The evaluation criteria: relevance, effectiveness, efficiency, utility, sustainability

• The way in which they are quantified and used: monitoring, evaluation, audit
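
As an illustration only (not part of the EC methodology), the classification above can be recorded as tags on each indicator. The example indicator and its tag values below are assumptions:

    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str            # what is measured
        processing: str      # elementary / derived / composed
        comparability: str   # specific / generic / key
        scope: str           # context / programme
        stage: str           # input / output / result / impact
        criterion: str       # relevance / effectiveness / efficiency / utility / sustainability
        use: str             # monitoring / evaluation / audit

    # Hypothetical result-level programme indicator
    income = Indicator(
        name="Average household income of direct beneficiaries",
        processing="derived",
        comparability="specific",
        scope="programme",
        stage="result",
        criterion="effectiveness",
        use="monitoring",
    )
    print(income)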

4. A last characteristic for Monitoring and Evaluation is the standard approach, methods and
the choice of tools. A section will explain in more detail methods and the toolbox, as this is a key
factor for line ministries and EDF and STABEX projects and programmes.

Some chapters include Guidelines for a Results-Oriented Monitoring and Evaluation system in Sudan, including the recommendations and results of the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported programmes and projects in Sudan, 19-21 December 2010.
Why use the EC Manuals for monitoring and evaluation?

Most of the materials explained in this manual were prepared by the EC services and are standard procedures for the monitoring (internal and external) and evaluation (internal and outsourced) of the European Commission's External Assistance. EDF and STABEX funds are an important part of the financing of External Assistance. EC funds are under the direct management and accountability of the National Authorising Officer (NAO) in a Decentralised Implementation System (DIS) in Sudan, and all external monitoring and evaluation will follow the methodologies for the evaluation and monitoring of all EC external funding. The approach, methods and tools can easily be learned and used by Sudanese institutions and organisations.

Most projects and programmes use Project Cycle Management, and an essential part of this system is the Logical Framework analysis and the resulting Log frames. The Log frame is a key element in the design, implementation, monitoring and evaluation of programmes and projects, and Log frames are used by most bilateral and multilateral donors. A first important step in implementation monitoring is to check and review the existing Log frames and to reconstruct the logic of each intervention. This was also recognised by the participants of the workshop.

Further reading and websites:

http://ec.europa.eu/europeaid/evaluation/methodology/examples/guide1_en.pdf

http://ec.europa.eu/europeaid/how/ensure-aideffectiveness/documents/rom_handbook2009_en.pdf

http://www.undp.org/evaluation/handbook/Arabic/PME-Handbook-Arabic.pdf

Section 1: The Design of a Results-Oriented Monitoring
and Evaluation System

Designing a Results-Oriented Monitoring and Evaluation and reporting system is, first of all, the introduction of a systematic, harmonised approach to the monitoring, evaluation and reporting of results-based managed projects and programmes. It requires a number of interrelated steps, which will be explained for the monitoring function and for the evaluation requirements. These steps are valid for all existing M&E directorates or units, operational and planned, in Sudan.

Results-Oriented Monitoring (Internal monitoring)

Step 1: Create a Function

The monitoring function is a matter of designating responsibility for internal monitoring, not of organising additional units. The functional organisation for monitoring can differ between line ministries, project management units, EU delegations, and other relevant bilateral and multilateral organisations. In compliance with quality standards, internal monitoring must be suitably organised and must possess appropriate human and financial resources. Therefore, an Internal Monitoring Coordinator could be appointed, who reports within the line ministry to its institutional partners such as the Ministry of International Cooperation, the NAO and the EU Delegation (Task Manager). A Project Steering Committee, with the above-mentioned partners as members, is often in place for EDF and STABEX projects and programmes.

Step 2: Have a mandate for M&E

A monitoring exercise starts with a monitoring mandate for the internal monitoring coordinator and his or her team of monitors: to prepare the annual monitoring work plan, implement the work plan, manage the process and assure the quality of all required reports. The following reports are mandatory for these projects and programmes: inception reports, monthly progress reports, quarterly reports, synthesis reports, ad hoc reports and final reports. Furthermore, information can be found in external ROM reports, mid-term reviews, and final and thematic evaluation reports.

Step 3: Have the appropriate Human Resources and budget to finance the internal and external
monitoring function.

The internal monitoring coordinator shall clearly identify human and financial resources, divide and allocate these resources, define missions and follow the procedures.

Step 4: Decide on what needs to be monitored, define selection criteria for projects and
programmes to be regularly monitored

Define selection criteria for internal monitoring: timing within the project cycle, selection of projects that are off-track, sectoral monitoring. The final decision as to what needs to be monitored by the relevant institutional structures is made by the internal monitoring coordinator and the steering committees. It is important to coordinate with the NAO, the EDF Unit and the EU Delegation, and to have a clear view of the yearly external ROM mission programme. A separate section is reserved for external ROM, the monitoring process used for all EC-financed projects and programmes worldwide.

Step 5: Make an annual rolling work plan for monitoring missions and reporting

Make a rolling work plan over a one-year period, covering projects for monitoring and re-monitoring. Provide five days per project and two days for reporting.
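
As a simple illustration of the workload arithmetic behind such a plan, the sketch below applies the five-plus-two day rule above to a hypothetical portfolio (the project names are invented):

    projects = ["Project A", "Project B", "Project C"]  # hypothetical portfolio
    DAYS_MISSION = 5  # monitoring days per project (see Step 5)
    DAYS_REPORT = 2   # reporting days per project (see Step 5)

    total_days = len(projects) * (DAYS_MISSION + DAYS_REPORT)
    print(f"{len(projects)} projects -> {total_days} monitor-days per year")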

Step 6: Respect the timing for monitoring missions

Make sure that monitoring missions are fixed well in advance and agreed with relevant partners and
project management.

Step 7: Reporting to communicate results and challenges

Regular communication of results in interim and annual reports can provide useful input for decision makers and management. Reports can be used by project and programme steering committees to define remedial and corrective actions, or be used for future programming. The reports are very important for informing the steering committee whether projects are implemented according to the work plan, face difficulties or are off-track. The internal monitoring sheet as described in Section 6 can be modified and used in the different line ministries. Where an MIS system is available, all project and programme reports and the six-monthly monitoring reports can be regularly updated and shared with relevant partners. Lessons learned, proposed remedial actions and recommendations need operational follow-up and can be checked during the next monitoring mission. Project quality sheets and tracking sheets for contracts and programme estimates can be added to complete the monitoring.
Checklists for the processing/approval of procurement/tendering documents, contracts and programme estimates and addenda, payments, de-commitments and closures are formats to be used for contract and implementation monitoring (see section 8), not to be mistaken for Results-Oriented Monitoring.

Evaluation system (internal and external evaluation)

The designs of comprehensive evaluation systems have similarities but differ in scope, as was explained earlier. In this manual the evaluation process concerns all project and programme activities co-financed through and with EDF and STABEX funds.
The main goals of this type of evaluation are to:

• Contribute to the design of interventions and policies

• Verify that actions respond to needs

• Improve the quality of interventions

• Optimise the allocation of resources

Evaluation types

Evaluation can be performed at different stages of the project or programme life cycle.

Ex ante evaluation takes place before an intervention starts, during the identification and formulation stage. Often the analytical process is the result of a feasibility study or an impact assessment.

Intermediary evaluation takes place in the middle of on-going interventions, especially to redirect them when the project is off-track.

Final evaluations are made before the end of the activities of a given project or programme.

Ex post evaluations take place after the termination of the intervention, typically two to three years after implementation so that the impacts have had time to materialise. Their durability and sustainability are taken into account.

Different steps to establish a comprehensive Evaluation system


Step 1: Profile of an evaluation function

To set up an evaluation function, a clear decision is made on designating responsibility. The structure and functional organisation of the evaluation function can differ from one line ministry or organisation to another. As with the monitoring function, human and financial resources need to be in place, and the missions, responsibilities and procedures of all protagonists must be clearly defined.

Step 2: Responsibilities of the evaluation function:

• The overall coordination and follow-up of evaluation activities (from planning to reporting and use)

• Promoting quality and organisational learning through evaluation results

• Helping the other services to implement the evaluation policy

Step 3: Managing evaluation activities

During the planning and programming process of evaluations the following questions need an answer:

• Why evaluate these projects and programmes?

• What should be evaluated?

• When to evaluate?

Step 4: The evaluation mandate

An evaluation starts in principle with an evaluation mandate. This document describes the context of the evaluation as well as the motivation for the evaluation, its objectives and the timetable.

It may comprise guidance on:

• The preparation of the evaluation

• Its implementation (external, internal, or in-house experts together with external consultants)

• The decision-making process, the evaluation steering committee and the evaluation manager

• The realisation of the evaluation (budget, tendering, contracting of consultancy)

• The way in which reports will be approved

• The way in which results shall be communicated

• The deadlines

• The quality criteria

Step 5: Role of the evaluation manager, the evaluation steering committee

The evaluation manager is the manager of the evaluation project. He/she is a member of the evaluation function and organises the evaluation process. This can be done internally with other in-house evaluation experts, or it can be outsourced; in the latter case the manager will prepare a bidding process for an evaluation by external consultants.
The evaluation manager reports to an evaluation steering committee. The members of this committee are high-level experts with knowledge of the specific sector and its activities. They are not part of the evaluation activities but are responsible, as per the mandate, for following up on and validating the procedures and reports.
The evaluation manager is responsible for all tasks and responsibilities before, during and after the evaluation process. He/she will, together with the steering committee, draft the Terms of Reference of the evaluation project. More precise information is given in a later section.

Step 6: Systematic approach in designing an evaluation process


The evaluation of projects and programmes is based on the utilisation of a system of evaluation criteria and indicators; this will be explained in another section.
Indicators generally relate to different levels of objectives; hence there are operational, specific and global indicators.

[Diagram: the correspondence between intervention levels and objective levels]

Impacts -> global objectives
Results (effects) -> specific objectives
Outputs (réalisations) -> operational objectives
Inputs / implementation -> implementation objectives

For EC-funded programmes, an organisation of indicators at three levels is used:

• The level of the programme as a whole, corresponding to the global objective

• The level of individual measures, the 'measure' being the basic unit of programme management

• The project level, the 'project' being the basic unit of programme implementation

Step 7: Making evaluation questions related to what needs to be evaluated, choice of approach.

Evaluation questions are derived from the evaluation criteria: relevance, effectiveness, efficiency, sustainability and impact. The first step of an evaluation is to reconstruct the logic of the intervention by linking the objectives to expected impacts and by identifying relevant evaluation questions. A later section explains the drafting of evaluation questions and the Terms of Reference.

Step 8: Drafting the evaluation Terms of Reference (section 6)

The Terms of Reference explain to an evaluator what is expected from the evaluation and on which information and other support he/she can count. The evaluation criteria form the basis of any evaluation.
Questions should be derived from the evaluation criteria and should be limited in number, targeted and prioritised within the Terms of Reference.

Step 9: Use of indicators leading to evaluation questions

The system of indicators is crucial for measuring progress and results over time. An analysis of the Objectively Verifiable Indicators from the Log frame is made, and additional indicators can be required from the evaluation team.

Step 10: Define methodology and tools for data gathering and analysis

The ToR will, especially when the evaluation project is outsourced, contain a section explaining the methodology and approach to be followed in data collection and analysis.
The next section gives broad guidelines on the data collection and analysis methods to be followed by the evaluators. In the case of a tender for an external evaluation, it is up to the contractor to fine-tune the suggested approach in discussion with the steering committee and the evaluation manager.
A classic evaluation is implemented in six stages: reconstruction of the intervention logic, basic data and information gathering, structured surveys, in-depth interviews, case studies, and analysis and assessment.

Step 11: The evaluation process as designed by EC

Schematically, an evaluation is organised in the following steps.

The evaluation Process

The person in charge of the evaluation will establish an evaluation process which should figure, at least in synthetic form, in the evaluation mandate (see above).

The evaluation project comprises:

• the context and motivation for the evaluation (obligation, …)

• to whom it is destined / who should use the results

• the area of application of the evaluation

• the key questions

• the available information

• the reports expected to be published

• the timetable

• contractual, administrative and financial information, in the case of an external evaluation.
Step 12: Follow up of the management of the evaluation process

An Evaluation Steering Committee or Evaluation Reference Group is created for all evaluations. It has an active role from the outset of the evaluation and intervenes during the whole evaluation process. Its role is:

• To provide methodological assistance for the evaluation process

• To assess the quality of the work of the evaluator at key moments

• To assist in the dissemination of results

• To facilitate access to important material and information

[Diagram: an interactive and co-responsible process. The evaluator conducts the evaluation and produces the final report; the steering committee prepares the Terms of Reference, accompanies the evaluation, approves the reports and communicates the results.]

Step 13: Quality check of evaluation reports, consultancy services

The evaluation manager and the Steering Committee will assess the quality of the draft final report. They will evaluate the report against the following criteria:

• Relevance of the content

• Adequacy of the methodology used

• Reliability of the data

• Solidity of the analysis

• Credibility of the results

• Validity of the conclusions

• Usefulness of the recommendations

• Overall clarity

Step 14: Communication and reporting of results

The dissemination of results is organised, and the results are sent to the different target groups. The evaluation reports are communicated to the responsible project or programme managers, the decision makers, institutions, beneficiaries and stakeholders.

Source: Eastern Recovery and Development Programme, Monitoring and Evaluation Workshop, December 2010

Guidelines

(Based on the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and Projects in Sudan, 19-21 December 2010: Findings and Recommendations)

What are the elements and parameters to look for when reviewing a Log frame?

When we analyse a Logical Framework Matrix we evaluate the following:

Step 1: Check whether the Project Purpose is formulated as ONE objective which describes why the beneficiaries need the project. It is an objective (positive, thus not formulated with 'reduced' ...) which is achieved by the beneficiaries themselves, by making use of the Results made available by the project and under the prevailing Assumptions. The purpose speaks to the 'utility' level.

The purpose level reflects the RELEVANCE of the project. Although the matrix does not reveal the
problems, it should be a mirror image of the problem analysis. Examples are: Improved income,
enhanced performance, assured security, revived business, secured employment, integrated in society,
etc.

The purpose will have to be specified by an OVI (Objectively Verifiable Indicator): a number of parameters with target values (now and later) indicating the change the problem situation will undergo. The indicator should explain what is actually meant by the objective. It should explain the change over time in quantity and in qualitative description: the typology of the verb (what is meant by 'improved' or 'secured'?), the subject (what is 'performance' or 'employment'?) and the beneficiaries (for 'whom' and by 'whom'?). An illustrative example (not from an actual project): 'average income of 2,000 target households increased by 25% by the end of the project' specifies the verb, the subject and the beneficiaries.

Mostly the beneficiaries are people with special needs in society. That is usually also the interest of the funding agencies, as public money should benefit society. But the beneficiaries could also be staff in organisations, institutions, departments and units. A distinction is made between final beneficiaries or end-users and direct beneficiaries or the target group.

Quite often Log frames have several purposes formulated which, if you analyse them, appear to be either results or even activities. In that case you need to reorganise the Log frame and place the right objectives at the right levels.

Step 2: Next, we examine the Overall Objectives. These are less important for the project itself but provide information on the context of the project. They tell us WHY the project is IMPORTANT to society. There can be several. Preferably each stakeholder will relate to one or more OOs, as they want to see how the project contributes to their wider objectives. Some donors like to see one or more Millennium Development Goals listed among them. Just check whether these are reasonably connected to the purpose. The Assumptions at Purpose level are relevant for the Purpose to contribute to the Overall Objectives, but these are far outside the scope of the project and thus not really very important to the project design. They also determine the context in which the project is situated.

Step 3: Then we check the Results or Outcomes. These outcomes are the next most important part of the Logical Framework Matrix, because the project is responsible for making them available to the beneficiaries.

• These outcomes should be formulated as 'services', 'products' or 'deliverables' received by the beneficiaries. Examples are: 'Knowledge acquired', 'ability to produce enhanced', 'access to ... finance'.

• Each Result must be quantified and qualified with OVIs (Objectively Verifiable Indicators). Only then do they become sufficiently specific for it to be clear what they mean.

Step 4: Results plus Assumptions (at the same level in the matrix) should present a comprehensive package enabling the beneficiaries to make use of them and reach the Purpose.

Often we see Results formulated as an objective but actually being an Activity. These are also called Outputs, e.g.: 'Training organised', 'Wells provided', 'Information disseminated', 'Rural banks established', implying a benefit but not making it explicit. We discover the difference between an Output and an Outcome by checking whether the objective can be done (Output) or only achieved (Outcome).

Step 5: As already mentioned, the Assumptions at Result level are important to position the services made available by the project alongside the other services required to benefit from them. Assumptions are also positive objectives, to be achieved and made available to the beneficiaries by sources other than the project.

Most Logical Framework Matrices lack Assumptions. People tend to think that the more Assumptions are mentioned, the riskier the project. However, the opposite is true. If you mention them, you can monitor and anticipate them, whereas if you don't mention them they show up by surprise ... and can damage the project's success.

Step 6: The Activities and the corresponding Assumptions.

The Assumptions at the Activity level are most important because these affect the Results for which the project is responsible. Again, these Assumptions are often ignored, but they are crucial for assessing the potential effectiveness of the project. Monitors often need to think 'out of the box' to imagine the situation of the beneficiary and discover important Assumptions.

The Pre-conditions are objectives that must be in place before the Activities can start. We usually see Pre-conditions at the beneficiaries (e.g. 'beneficiaries are prepared to pay for services' or 'ownership assured') and at the service deliverers or 'suppliers' (e.g. 'organisation able and qualified to implement the Activities'; 'contract signed'; 'funds available'; 'supportive policy').
A properly managed project preparation phase can be mentioned as a Pre-condition.

Section 2: Baseline Studies and the system of indicators
for results

Definition of the Baseline Study

A baseline study is made at the beginning of an intervention. It provides basic indicators of the actual state of the overall situation and of the need for progress and change through the intervention; it measures the actual state of the situation on site. Baseline studies for EDF/STABEX projects are conducted during the design phase. Some baseline information is usually available, but for some projects it is necessary to conduct or update a baseline prior to a new project intervention. Often this need emerges during the inception phase of the project, when the whole project situation and the Log frame are updated and reviewed prior to full implementation.

Example in Sudan:

SIFSIA-N has contributed a food security module to the national integrated household survey under preparation by the Ministry of Finance & National Economy (Poverty Reduction Strategy Unit), the Central Bureau of Statistics and the Ministry of Social Development & Welfare. SIFSIA-N drew upon technical assistance from FAO headquarters (ESAF and ESS) to develop the food security module. The survey, funded primarily by the African Development Bank (ADB), is implemented by CBS in both northern and southern Sudan between March and April this year. It will provide very useful baseline information on poverty, since it will generate, for the first time on a nationwide basis, household budget and expenditure data and analysis in both rural and urban areas, including the transfer of remittances, an area little understood.

A baseline study is organised to obtain a precise measurement of the different chosen baseline indicators.
Baseline indicators reflect the state of the economic, social and environmental situation at a given time, at the beginning of the intervention, against which changes will be measured. This collection of data can then be compared with a study of the same characteristics carried out later, in order to see what has changed.
There are two types of baseline indicators: context baseline indicators and impact-related baseline indicators.

Context and programme indicators

Context indicators are used for an entire territory, population or population category. They do not
apply to the implementation of the programme and its effects. They always apply to the entire eligible
territory or target public, making no distinction between those who were affected by the programme
and those who were not.
In contrast, programme indicators concern only the part or category of the public or the part of a
territory which was actually affected. Their aim is to trace the direct and indirect effects of the
programme as far as possible.

In the context of monitoring and evaluation, a programme indicator can show that a specific
intervention is a success or that another is a failure. In contrast, a context indicator can show
that a specific intervention is still relevant, or that another no longer has a raison d’être.

Indicators of resources, implementation, results and impacts

Definition of the indicators according to objective levels

Objective level              | Type of indicator | Definition                                                                              | Key stakeholders
Organisational objective     | Resources         | Means made available by financing parties and used by operators for their activities   | Financing parties and operators
Operational objective        | Implementation    | Product of the operators' activity; what is obtained as a result of public expenditure | Operators
Immediate specific objective | Result            | Immediate effect for direct beneficiaries                                              | Direct beneficiaries
Lasting specific objective   | Specific impact   | Lasting effect for direct beneficiaries                                                | Direct beneficiaries
Final specific objective     | Overall impact    | Overall effect for the entire population concerned (direct and indirect beneficiaries) | Direct and indirect beneficiaries

“Effect” can be defined as any change caused by the implementation of the programme, whether direct or indirect, immediate or long-term. Effects therefore cover both results and impacts. In all cases, surveys are useful for observing the apparent effects (gross effects), but not the real effects (net effects) of a programme.
Account must be taken of windfall effects: a beneficiary may have already made a definitive decision by the time he discovered that the programme would help him (he benefits from the aid anyway). There may also be substitution effects (in the case of interventions targeting individuals or groups of individuals) and displacement effects (in the case of interventions targeting geographical areas).
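
A simplified numerical sketch of the gross/net distinction follows; the figures and the comparison-group shortcut are invented for illustration, since real net-effect estimation requires a proper counterfactual design:

    # Gross effect: observed change among programme beneficiaries.
    # Net effect: gross effect minus the change that would have happened
    # anyway, approximated here by a comparison group (windfall, context).
    baseline_beneficiaries = 100.0  # e.g. average income at baseline
    endline_beneficiaries = 140.0   # same indicator at endline

    baseline_comparison = 100.0     # non-beneficiary comparison group
    endline_comparison = 115.0      # change due to context, windfall, etc.

    gross_effect = endline_beneficiaries - baseline_beneficiaries     # 40.0
    counterfactual_change = endline_comparison - baseline_comparison  # 15.0
    net_effect = gross_effect - counterfactual_change                 # 25.0

    print(f"gross effect: {gross_effect}, net effect: {net_effect}")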

Baseline studies in different contexts and project situations

Before and after Evaluation Design


In the design of a before and after evaluation, baseline studies are a critical element in the formula for
measuring change over time.
A baseline study is required for every type of operation. However, the rigour of the methods used to establish baseline conditions varies according to the type of operation being implemented. A compromise must be reached between the need for robust, precise data to establish pre-operation exposure conditions and the cost of collecting such data in terms of resources (financial, human and time). Country programmes that are focused on development should invest more resources and, as a result, conduct more rigorous baseline studies. Development donors will generally establish the necessary rigour for a baseline study by considering the available resources and the information needs. However, a minimum standard for establishing pre-operation conditions through baseline studies should be applied in all development operations.
The baseline study is just one component of the M&E design that outlines the planned M&E data
collection and analysis. The entire evaluation strategy, including the design and budgeting of the
baseline and subsequent studies (mid-term and final evaluations), must be developed during the
planning or design stage of an operation.

When to do a Baseline Study?


In relation to the programme cycle, a baseline study should be conducted prior to the onset of
operation activities in order to establish the pre-operation exposure conditions of the outcome and
impact level indicators. However, it is not uncommon for baseline studies to be conducted after
activities have already begun. It should be noted that, for most operations, there is a delay between
output delivery activities and their measurable effect on outcome and impact performance indicators.
As a result, baseline studies will still provide an accurate estimate of pre-operation conditions even
after the operation has begun, as long as the outcome and impact performance indicators have not
yet been affected. However, this time lag varies from a few days to a few months, according to the
type of operation and the environment in which it is being implemented.
For many operations, it is difficult to estimate exactly how long this time lag will be.
Delays in conducting baseline studies, especially when an operation’s activities have already
influenced the outcome and impact performance indicators, are costly and likely to lead to an
underestimation of the operation’s overall impact. Development operations should therefore aim at
conducting baseline studies before operation activities begin. When this is not possible, baseline
studies must take a high priority and data should be collected very close to the beginning of the
operation.

Organising a Baseline Study

A first condition to start a baseline study is to perform an in-depth stakeholder analysis for the sector, the programme and the project under review. The stakeholder analysis can be based on the following diagram.

Stakeholders / Agencies | Nature of interest | Impact during assignment | Initial expectations | Role of stakeholder

The baseline study will then be based on the free and adequate participation of stakeholders and focus groups. Several approaches can be used, stand-alone or in combination: document review, field visits, quantitative and qualitative observations, interviews, focus group discussions and geo-data analysis. Primary data are generated from statistical analysis, if the statistics are significant.

Example: The International Labour Organisation (ILO) has organised a worldwide survey within the organisation about knowledge sharing. The baseline study was based on a worldwide online questionnaire filled in by ILO staff. The purpose of such baseline studies is to find out what the ILO does well and where there are weaknesses, in order to better focus resources and efforts. The baseline study tool can be reused in the future to measure progress.

Methodology

To conduct a baseline study, a participative methodology is used. The study will take between three and four weeks of fieldwork, in addition to the analysis of documentation. During fieldwork, meetings are planned with the local project coordinator and the local partners responsible for implementing the project. Tools used are interviews, group discussions and observation of the operations of the project, included as parts of visits to the different geographic locations of the project and to local governmental institutions and organisations. To obtain information, the common techniques of social studies are used: documenting, observing and interviewing, complemented by case studies and life histories. For the study of the social and economic realities of the beneficiaries and their families, however, it is necessary to use an appropriate sample of the population and to interview a representative selection of beneficiaries. Approaches and tools are explained in the following chapter.

What are Uses and Sources of Primary and Secondary data?

Information about indicators and sources of information is found in the Log frame matrix. The indicators are listed in the second column (Objectively Verifiable Indicators), and the third column of the Log frame shows the Means of Verification. The indicator explains what information will be collected; the means of verification identify where the information will come from.
Primary data is collected by using surveys, meetings, focus group discussions, interviews or other methods that involve direct contact with respondents.
Secondary data is existing data that has been or will be collected by others. Secondary data can be found in MTRs, evaluation and monitoring reports, data collections by organisations and government, and routine data collected by institutions participating in a project or programme (health centres, schools). This is good secondary data which could not be replicated, without high costs, through new baseline studies.

Example: During stages of emergency, data about emergency food needs is very important. Emergency Food Needs Assessments (EFNA) can give immediate criteria for baseline data.

[Diagram: the available information for evaluation]

Primary sources: surveys, case studies, etc.
Secondary sources: steering and management documents, statistics, bibliographical research, previous evaluations

Preparing the Baseline Study and budget

Once the design and methodological issues are solved, they should be summarised in a study plan and a budget. The costs associated with the baseline study need to be detailed.

Baseline Study plan

Proposed outline

Summary
Background and purpose of the study
• Description of operational design and target beneficiaries
• The objective
• Data sources
Data collection
• Units of study
• Use of secondary data
• Primary data collection methods and techniques
• Sampling description
Design
• Questionnaire
• Pre-test
Fieldwork
• Fieldwork team
• Required training
• Timetable of fieldwork
• Quality control and supervision
Data processing and analysis
• Data cleaning
• Data entry and processing
• Framework for analysis
• Training in data management
Reporting
• Outline and format of the study report
• Presentation and dissemination of results
Annexes
• Budget
• Operational design

Guidelines

(Based on the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and Projects in Sudan, 19-21 December 2010: Findings and Recommendations)

Key Principles in building a Results-based Monitoring and Evaluation System

Results information needs

The first principle is to enhance the information system of the Public Administration with results information at the project, programme and policy level. This results information must move both horizontally and vertically in the organisation, which can pose political challenges. The demand for this information needs to be mapped out clearly, as well as the responsibility at each level (a sketch of such a mapping follows the list below):

• What data are collected? (sources)

• When are data collected? (frequency)

• How are data collected? (methodology)

• Who collects the data?

• Who reports the data?

• For whom are the data collected?
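
The sketch below shows how these questions can be answered per indicator in a simple data collection plan; the field values are hypothetical examples, not prescribed by the manual:

    # One record per indicator, answering the what/when/how/who
    # questions listed above (all values are hypothetical).
    collection_plan = {
        "indicator": "Share of households with year-round access to water",
        "source": "household survey",         # what data are collected
        "frequency": "annual",                # when data are collected
        "methodology": "stratified random sample, standard questionnaire",
        "collected_by": "state M&E officer",  # who collects the data
        "reported_by": "internal monitoring coordinator",  # who reports
        "reported_to": "project steering committee",       # for whom
    }

    for field_name, value in collection_plan.items():
        print(f"{field_name:>13}: {value}")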

This points to the first challenge: many organisations and agencies find it difficult to share information horizontally. Information is already difficult to move vertically, due to strong political and organisational walls between one part of the system and the other.

Achieving results through partnership

Given scarce resources and ambitious development objectives, development partners at all levels (multilateral, regional, country and governmental) need to leverage resources to achieve the desired goals. When resources for Monitoring and Evaluation are diminished, partners need to find solutions by combining resources, even during times of input constraints. Often programmes and projects have a budget line to finance monitoring, evaluation and auditing.

Section 3: Toolkit for monitoring and evaluation

Concepts and phases preliminary to the use of tools


Before looking in detail at the different monitoring and evaluation tools, it is necessary to discuss concepts and principles that will give a good basis of understanding for the optimal use of this manual.

Monitoring tools and reporting

There is a difference between various forms of monitoring; not all monitoring is results-based.

Activity or performance monitoring (monitoring outputs): Activity or performance monitoring has an accounting function, keeping track of the activities completed. Operational accounting has specific tools and reporting formats.

Progress or outcome monitoring (feedback on activities): Progress or outcome monitoring is aimed at providing feedback on what works well and what does not. It helps in revising project strategies and improving effectiveness, and can be done on a monthly basis by analysing the different progress reports and the outcomes of project activities.

Impact monitoring: is aimed at measuring the ultimate effect of the activities in terms of changes in knowledge and skills (adoption rates). Impacts of results and induced changes are measured and reported.

Financial monitoring: keeps track of expenditures and assesses whether they are in line with the budget. It can also be associated with contract management and monitoring.

Results-Oriented Monitoring or ROM: The EC services have coined the term ROM to indicate their system of external Results-Oriented Monitoring of all EC-financed projects and programmes (over EUR 1 million). A full explanation is given in Section 6. It is important to stress the importance of EC ROM as a reporting tool, but also as a results-based methodology for internal monitoring.

Reports to be used during internal monitoring

• Monthly reports from line ministries: data to be used in monthly coordination meetings

• Quarterly reports: compilation of three monthly reports to inform stakeholders and the management/Steering Committee meeting

• Six-monthly progress report (contractual obligation of the TA service contract)

• Final conclusion report (drafted in the final stage of the project or programme)

• Monitoring mission reports (assessment of project activities on site)

• Mid-term review (an on-going evaluation of project activities at the mid-stage of the project).

Some classical tools for data collection during monitoring and evaluation

Tool 1: Participatory Learning and Action (PLA)

Participatory Learning and Action (PLA) is a particular form of qualitative research used to gain an in-depth understanding of a community or a situation. It is based on the participation of a range of different people, including people from the community affected by the project or programme. The aim is for people to analyse their own situation, rather than having it analysed by outsiders, and for the learning to be translated into action. This makes it a particularly useful tool for the planning, monitoring, review or evaluation of any kind of community development. It used to be called PRA, Participatory Rapid Appraisal or Participatory Rural Appraisal, and was initially used mainly for needs assessment in rural communities.

Important features of the PLA

Triangulation: This is a method of cross-checking qualitative information. Information about the same project can be collected in different ways and from at least three sources, to make sure it is reliable and to check whether it is biased. A minimal sketch of the idea follows this list of features.

Multidisciplinary team: Triangulation is best done by multidisciplinary teams of experts with different skills, experiences and viewpoints.

Mixing tools and techniques: Using different tools and techniques gives greater depth to the
information collected.

Flexibility and informality: Plans and research methods are semi-structured and revised as the field
work proceeds.

In the community: Most activities are performed jointly with community members or by them on
their own. This makes this tool particularly optimal for monitoring and evaluation of rural
development and community related development projects.

On the spot analysis: The expert team reviews and analyses its findings to decide how to continue.

Tool 2: Group interviews

A group interview is a technique whereby several people with homogeneous characteristics participate and provide qualitative information in a targeted discussion.

This technique was initially used in marketing circles to analyse the impact of publicity and marketing
strategies, and it is particularly constructive in investigating themes which are the subject of diverging
opinions which need to be bridged, or in untangling the threads of complex issues which are the
subject of numerous different interpretations.
This tool enables the collection of the perceptions of all those concerned by a project or programme
through the application of group participation techniques.

Strengths and weaknesses

In a relatively short time-frame, this technique enables the collection of a large amount of in-depth
qualitative information concerning the opinions and values of those interviewed.
Grouping several people together encourages a general position to emerge, avoiding extreme
opinions; the group provides a kind of “social quality-control”.

Tool 3: Questionnaire surveys

The questionnaire survey technique was developed by opinion poll institutes between the two world wars and is still widely used today. It is based on standard questions asked of a sample of individuals who are
representative of a population or, occasionally, the entirety of a population.
When applied to the field of evaluation, this tool serves mainly to collect information. The questions
should be associated with descriptions, standards or causal links.

Strengths and weaknesses: One of the strengths of this type of information collection is that large
numbers can be covered, making it a good tool for the implementation of quantitative analysis.
Nowadays there is both hardware and software which enables standardised and rapid processing of
responses.
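As a minimal illustration of such standardised processing, the sketch below (in Python) tallies the answers to one closed question from a hypothetical survey file; the file name "survey.csv", the column name "q1" and the answer coding are all assumptions made for the example, not part of any EDF format.

    # Minimal sketch: tally closed-question answers from a survey file.
    # "survey.csv" (one row per respondent) and question code "q1" are
    # hypothetical; adapt to the actual questionnaire layout.
    import csv
    from collections import Counter

    def tally_responses(path: str, question: str) -> Counter:
        """Count how often each answer code was given for one question."""
        counts: Counter = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                answer = (row.get(question) or "").strip()
                if answer:  # skip missing answers
                    counts[answer] += 1
        return counts

    if __name__ == "__main__":
        tally = tally_responses("survey.csv", "q1")
        total = sum(tally.values())
        for code, n in sorted(tally.items()):
            print(f"answer {code}: {n} ({100 * n / total:.1f}%)")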

Tool 4: Individual interviews

Individual interviews are a favourite qualitative technique aiming to collect personal opinions and
information on a specific project or programme, concerning everything from context to
implementation, impact or results. Individual interviews can take various forms; a commonly used form is the semi-directive interview.

It can also be used in cases where a statistical study would be technically impossible, or would not be
representative.

Strengths and weaknesses

This technique is simple and transparent, and is popular in finding a consensus in particularly
sensitive areas or areas of conflict.
The results are immediately clear to all, and can be used to highlight the more striking areas of
disagreement or those requiring additional information.

Tool 5: Logical framework analysis

Logical framework analysis is Management By Objectives (MBO) applied to project or programme design, monitoring and evaluation. This approach consists of four steps: (1) establishing objectives; (2) establishing cause-and-effect relationships (causal linkages) among activities, inputs, outputs and objectives; (3) identifying assumptions underlying the causal linkages; and (4) identifying objectively verifiable measures for evaluating progress and success. It gets its name from the 4 x 4 matrix (frame) employed in its mapping: the hierarchy of objectives, read down the first column, forms the vertical logic, while the rows (which link each objective level to measures for assessing progress) form the horizontal logic. It is also called the logical framework method (see also Section 8).

Tool 6: SWOT inventory and analysis

SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) is a classic tool in strategic
analysis which has been developed since the 1950s, and is a support technique to decision-making
which focuses on strengths and weaknesses (internal perspective) and opportunities and threats
(external perspective).
In the initial stages of the analysis of a situation, SWOT analysis enables strengths and weaknesses of
an organisation to be identified, so that the determining and prevailing influential factors can be
highlighted. Relevant strategic lines can be developed from the project/environment (or
programme/environment) system of relations.

Strengths and weaknesses

The undeniable strength of this tool is its relative simplicity of use, which enables us to break down a situation and establish an initial list of issues. Its weakness also lies in this simplicity, which may lead to analyses and conclusions that are too hasty and too subjective. There is, for example, no weighting of the items entered into the chart.

Tool 7: Multi stakeholder analysis

This tool uses brainstorming with multiple stakeholders to gather many ideas quickly from a group of people by letting them freely express their creativity and critical thoughts. It is often used as a first step in a discussion that is then followed by other methods. In principle, brainstorming can be done individually or in a group.

Strengths and weaknesses

It's a quick and enjoyable process. It stimulates involvement and cross-fertilisation of ideas. However,
most ideas are contributed from a few quick-thinking people. The method can work with small or
larger groups and can take as little as five minutes, depending on the subject, detail needed and
number of people. This method is commonly used in combination with other methods, for example, to
start a focus group session.

Tool 8: Case studies

This technique is among the least standardised and offers the option of various approaches. In basic
terms, case studies are based on in-depth analysis of data collected on a specific case. The techniques
for collecting the data are both quantitative and qualitative.

Case studies can serve various purposes: exploration, critical analysis, implementation analysis, impact analysis, etc. The case should be representative of the reality of the programme, or of a specific aspect of it.
This technique has been widely used in evaluation over the last ten years, and has even tended to be used instead of large-scale quantitative surveys in certain contexts.

Strengths and weaknesses

Case studies provide considerable illumination in complex areas, and involve awareness of the
concrete application of programmes on the part of decision-makers who are often far-removed from
the reality of implementation.
The cost of this approach necessitates a restricted and relevant choice of cases to be studied. This
approach is less appropriate when it comes to measuring the extent of impact or inferring causality.

Further reading:

MSP resources portal

http://portals.wi.wur.nl/msp/

Section 4: Preparing the evaluation process and the
drafting of Terms of Reference
As we have seen in the other chapters, the most important task for an evaluation manager, in close collaboration with the Evaluation Steering Committee, is to draft the Terms of Reference for the evaluation process. This chapter explains how appropriate Terms of Reference are drafted.

Evaluation process

Evaluation is a management tool that helps in the decision making processes. Evaluation proposes
objective judgments referring to explicit norms and criteria which help to improve the quality of the
EDF and STABEX projects and programmes in Sudan.
Evaluation is part of a broader iterative process. Managing this process well is necessary to obtain
good evaluation results.
The process must be conducted correctly if optimum use of the results is to be achieved: it is even
subject to an assessment, as is the quality of the final evaluation report.
A steering committee involved from the design stage through to the dissemination of the results must
therefore be the rule for EDF and STABEX evaluations.
Evaluation questions should be well chosen and well formulated. One should target the questions and narrow down the scope in order to obtain questions that can be answered and whose answers are useful in the decision-making process.
The Mandate

Before this, it must be clear that a mandate has been given to the Monitoring and Evaluation Unit in the line ministries to evaluate a given project or programme.

Mandate example:

The evaluation process

An evaluation manager for a given project or programme is appointed to conduct the evaluation.
He/she sets up the reference group, writes the Terms of Reference and recruits the external evaluation
team, if needed.

The responsible person will decide who will participate in the three stages of the evaluation:

 The design (methods, tools, evaluation questions, indicators)


 The implementation (internal evaluators, consultants, a mixed team)
 The use (dissemination, programming use, remedial actions)

The evaluation exercise and hence also the Terms of Reference will comprise:

 The context and the reason for the evaluation (obligations, legal context, contractual)
 The target, who is concerned and who will use it
 The available financial resources (budget, costs, time of staff)
 The evaluation field
 The time table
 The key evaluation questions
 Details on the sources of available information
 Reports, quality assurance, validation and timing
 Contractual, financial and administrative information during an external evaluation
 Wider use of results and reports, dissemination and reporting

Requirements and issues to be included in the final drafting of the Terms of Reference

The evaluation manager will work closely with the Steering Committee to have clear answers on the
following issues:

The context of the evaluation: What is the legal and administrative context of the evaluation? Is it an
obligation from a contractual or financing requirement? Or is it the socio economic context and will
more emphasis be placed on impact and results? Most evaluations are recommended by a Court of Auditors report or an MTR, or are simply required by the Financing Memoranda.

The scope of the evaluation

The evaluation may focus on the whole intervention or programme, or on only one aspect. Reference is made to all previous evaluation or assessment reports. The focus is also related to the reference group of beneficiaries and the timing.

[Diagram: evaluation focuses on – effectiveness (achievement of objectives); unexpected results/impact (positive and negative); relevance (of the project to needs); sustainability (sustaining results after the project phases out); design (logical and consistent); causality (factors affecting performance); and alternatives (other ways of addressing the problem).]
Key evaluation issues

Evaluation field

The evaluation field can be widened or deepened. A decision is made on the geographical area of the
intervention (rural versus urban) and the reference period (programme from 2000-2005).

It should be remembered that the evaluation field can specify a more general evaluation, or a more
subject-oriented and specific evaluation (e.g., female beneficiaries from the Northern Province).

Information available

As discussed before, stock is taken of all primary and secondary information available. Action is taken to ensure that evaluators have access to all information, even the more sensitive items. An inventory of all available sources of information is made, and evaluators are authorised to access them.

Evaluation issues

The choice between an internal and an external evaluation must be made. When the evaluation is internal, it is performed by the internal evaluation experts. This has the advantage of a direct return: it can be used for training and organisational learning, and internal expertise and staff are mobilised. Internal evaluation is often used for ex ante evaluation. Outsourcing the evaluation to external evaluators can give more objectivity and a stronger focus on accountability. External evaluation is more often used for ex post and final evaluations. Mobilising external evaluators can also help optimise the scarce resources of the evaluation units in line ministries and Programme Coordination Units.

Deadlines

A precise time plan and work plan is made and needs to be respected. All partners are informed about the upcoming evaluation and asked to cooperate fully with the evaluation team.

Quality criteria

The quality standards for evaluation are respected, and all services and reports are assessed against a quality evaluation grid at the end of the assignment.

Next steps in designing the evaluation of projects and programmes

The evaluation manager, together with the designated steering committee, will decide on evaluation criteria and related evaluation questions before drafting the Terms of Reference.

Evaluation criteria

Provided a good logical framework is available and still valid, the evaluation manager may refine the issues to be studied into evaluation questions. The five evaluation criteria are relevance, efficiency, effectiveness, impact and sustainability. Some evaluations focus on only one or two criteria, or add further criteria such as utility, cost-effectiveness or ownership. Most evaluations start with the reconstruction of the intervention logic: the evaluator is expected to reconstruct the original intervention logic of the project or programme. This is needed to gain insight into the validity of the apparent causal assumptions involved.

[Diagram: intervention logic example – Food security in Zimbabwe (DG ECHO). Operational objectives (support the logistics of the World Food Programme; establish technical assistance in the field to coordinate the activities, evaluate needs and assess project proposals; distribute seed and ensure the monitoring of operations; observe and monitor the living conditions of the population; support emergency agricultural rehabilitation) lead to specific objectives (assist the food aid emergency operations for vulnerable groups and support the logistics of these operations; prevent malnutrition and famine in the most vulnerable groups faced with the food security crisis in Zimbabwe), which lead to the overall objective: improve food security in rural communities.]

Evaluation questions

Evaluation questions are derived from the evaluation criteria: relevance, effectiveness, efficiency, sustainability and utility. The first step of an evaluation is to reconstruct the logic of the intervention that is evaluated, by linking the objectives of the programme to expected impacts and by identifying evaluation questions.

The reconstruction of the intervention logic is not necessarily straightforward, since it necessitates the definition of causalities between the concrete actions that are implemented and the expected results. The art of evaluation lies here, in the identification of the key themes (i.e. the causal links between certain intervention factors) and in asking the right questions. The evaluator then uses collection, analysis and information-summary techniques which isolate the explanations for external factors. In short, the quality of decisions depends on the quality of the evaluation.
Specify the questions
The most difficult part is to ask the right questions and to formulate them well. Evaluation questions
should be specified on the basis of the evaluation criteria and the causalities found during the
reconstruction of the logic of the intervention.
Questions can be descriptive (What has happened?), causal (What is the relationship with the intervention?) or normative (Is the effect satisfactory?).
Choosing and targeting the questions precisely is the difficult part. When this exercise is done in a participative way with members of the steering committee and staff from the evaluation unit, the initial list of key issues and related evaluation questions will be long. To establish a final list of evaluation questions, it is good to look closely at the key themes to be evaluated and to identify external factors that could influence the outcome of the project or programme. External factors that cannot be influenced have no place among the priority questions of the evaluation process. This will help to produce targeted, prioritised questions.

Establishing priority questions

[Diagram: from an initial list of themes and questions, priority questions are selected by weighing stakeholders' interests, the decisions to be taken, the political context, the probability of being able to answer the questions, and the probability that answers will be used for decision-making.]

Source: Adapted from EC 1999

Some samples of evaluation questions to be included in the ToR

Relevance, quality of design:


At the time of approval, was the project or programme relevant to the context of Sudan?
How do stakeholders assess the level of investments made by the programme?
Was the original Log frame appropriate for the design of the project in addressing primary needs of
the community?
Efficiency
Were human and financial resources provided on time, on target and within budget?
Were the modalities (procurement, implementation, administration) appropriate for the food delivery
to far-away communities?
Effectiveness
Have the planned results been achieved in time? Why or why not?
How appropriate is the geographical focus of the programme? What problems, if any, have arisen?
Impact
What is the impact of the project in the broader poverty alleviation sector framework; for instance: has income increased, are there more employment and income-generating opportunities, savings on fossil fuel use, etc.?
What is the Impact of the project in the broader energy sector framework; is rural electrification as
relevant as urban electrification, is renewable energy as relevant as energy efficiency, is the policy
and regulatory framework sound enough to accompany the energy sector development?
What are the socio-economic impacts: school results, increase in handicraft production, more and
better telecommunication, health issues for mother and child?

Sustainability

What is the extent of ownership of the asset; in particular the transfer of the equipment to the relevant
entities at the end of the project, in most cases the public utilities?

What is the financial sustainability of the asset, and in particular the cost-recovery system put in
place, and its efficiency; in this regard, the ownership of the entity in charge of operation and
maintenance is crucial?
What is the sustainability of the installations in terms of built capacity; in terms of policies adopted,
human capacity trained, new institutional structures created, private sector participation, etc.?

The evaluation Terms of Reference

The technical part of the Terms of Reference should contain all the above-mentioned elements necessary to help the evaluators in their research and analysis, and also the evaluation questions themselves. It shall therefore comprise:

- a description of the socio-economic, legal and institutional context;


- the scope of the evaluation (period to be covered, geographical zone to be covered, possible interactions with other interventions to be studied);
- existing information sources that can be used for the evaluation;

- the evaluation questions (descriptive, causal and normative), which should be limited in
number, clearly formulated and well targeted.

It is very important to have well-defined evaluation questions. The following issues can be of help in thinking about them:

- who are the stakeholders (of the evaluation and of the evaluated intervention);
- which decisions have to be taken;
- what is the political context;
- what is the available budget and the timetable;
- what is the probability that the question can be answered;
- what is the probability that answers to the questions are used in the decision making process;
- which type of evaluation (ex ante, intermediary, ex post);
- whether the evaluation is formative (serving management and internal learning) or summative
(aiming at accountability).

In summary:

- The Terms of Reference explain to an evaluator what is expected from the evaluation and on
which information and other supports he/she can count;

- The evaluation criteria form the basis of any evaluation;

- Questions should be derived from the evaluation criteria and should be limited in number,
targeted, and prioritised within the Terms of Reference.

Outline for Terms of Reference for a Final review of EDF and STABEX
projects and programmes
Background

 Context

The project or programme to be evaluated

 Aims
 Instruments of intervention
 Funding
 Actions launched to date
 Previous evaluations, studies and reviews

The evaluation

 Scope
 Main evaluation questions
o Intervention logic
o Relevance and quality design
o Efficiency
o Effectiveness
o Impact
o Utility and sustainability

Methodology to be followed in data collection and analysis

The evaluation should be approached in six stages:

 Reconstruction of the intervention logic


 Basic data and information-gathering
 Structured survey
 In-depth interviews
 Case-studies
 Analysis and assessment

Management of the evaluation

Logistics, timing and budget

 Location
 Starting date
 Period of execution
 Work-plan and timetable
 Budget

Requirements

Personnel

Facilities to be provided by the Contractor

Reports

 Inception report
 Interim report
 Draft final report
 Technical annexes
 Final report

Sample of reporting requirements from the ToR: Mid-Term Evaluation of Sudan EPA Negotiations and Implementation Support (SENIS)

The language of the reports shall be English. The Expert shall:

- Submit an Inception Report and detailed work plan.

- Submit a Draft Final Report prior to a debriefing session

- Present at a debriefing session to stakeholders

- Submit a Final Report upon receipt of comments from the NAO, MoFT, SENIS PMU and ECD on the Draft Final Report. The NAO, MoFT, SENIS PMU and ECD will have 20 days to provide additional comments or approve the final report.

All reports shall be submitted in 5 copies, written in English, and provided in editable electronic form as e-mail attachments, usable with software compatible with that of the main clients and stakeholders. The final report, including all attachments, has to be provided on CD-ROM in editable form.

Guidelines

(Based on the Workshop “Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and Projects in Sudan”, 19-21 December 2010, Findings and Recommendations)

Organisational needs for a new Results-based Monitoring and evaluation system in Sudan

As was discussed in the workshop and reflected in the recommendations, the system is based on four
pillars:

Ownership

o Identify the mandates of the specific institutions and assess their structures in order to develop relevant capacity and competencies at Federal and State levels

o Need to formulate a standard Log frame of institutions with objectives and indicators

o Strengthening line ministries at Federal and State levels

o Adopt conducive and agreed structures that allow M&E staff to share experiences from other countries

o Target the highest levels of decision-making so that they understand the importance of ownership, transparency and accountability, and the role of Results-Oriented M&E and reporting in this, rather than just detecting mistakes.

Management

o Design a simple system that can be communicated easily to M&E Directorates and
units as agreed upon by development partners and the government.

o To work deliberately for harmonisation and coordination to have unified systems and
standards

o There is need to review information systems in line ministries so that information systems are harmonised within line ministries

o There is need to agree on templates for M&E processes which should be distributed
for improving coordination and synergies.

Maintenance

Appropriate budgets and dedicated human resources should be included in work plans and overall budgets of programmes, but this requires political commitment at the highest decision-making level.

Credibility

There is need to adequately communicate the usefulness of Results-Based Management (RBM) at Federal and State levels by:

o Conducting M&E workshops and sharing experiences like this particular workshop
did

o Conducting training of state staff through the technical support of capacity-building programmes

o Conduct seminars and workshops to increase awareness at all these levels

Blue print of Institutional arrangements for RB M&E SYSTEM in Sudan (State level)

Blue print of organisational arrangements for RB M&E SYSTEM in Sudan (Line ministry level)

Section 5: From Evaluation and Monitoring questions to
Indicators

System of Indicators

The use of indicators of progress, results and change is important for every systematic approach to Results-Oriented Monitoring and Evaluation. This chapter explains the use of appropriate indicators.

Management wants the monitoring and evaluation system to be designed in such a way that changes
can be observed and comparisons can be made. In this respect, indicators have to be determined.
Objectively Verifiable Indicators (OVIs) describe the project’s objectives in operationally measurable
terms by specifying Quantity, Quality, Time and Place (QQTP). As stated earlier, specifying OVIs helps to check the feasibility of objectives and forms the basis of a project monitoring and evaluation system. OVIs are formulated to answer the questions: “How would we know whether or not what has been planned is actually happening or happened? How do we verify success?”
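As an illustration of QQTP in practice, the minimal sketch below (Python) holds one hypothetical OVI; the figures, locality and field names are assumptions made for the example, not a prescribed format.

    # Sketch of an Objectively Verifiable Indicator specified by Quantity,
    # Quality, Time and Place (QQTP). All values are invented.
    from dataclasses import dataclass

    @dataclass
    class OVI:
        quantity: str
        quality: str
        time: str
        place: str

        def statement(self) -> str:
            return f"{self.quantity}, {self.quality}, in {self.place}, by {self.time}"

    enrolment_ovi = OVI(
        quantity="1,200 additional pupils enrolled",
        quality="taught by qualified teachers",
        time="the end of school year 2012/13",
        place="the three target localities",
    )
    print(enrolment_ovi.statement())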

When designing a monitoring or evaluation system, the indicators are determined in order to answer
the management’s questions (and certainly not the other way round). This means that management has
to look for variables that can actually measure the phenomena of interest to management and decision-makers.

Indicators for monitoring

The information that is collected with the monitoring system can be divided into two main parts: the
information generated within the organisation - i.e. monitoring of action and of results, and the
information generated outside the organisation - i.e. the monitoring of reaction and context.

For the information that is gathered within the organisation, the phenomena of interest can often be
measured directly. The indicator takes the form of aggregated data. Such is the case with financial
figures, used materials, production levels etc. Measuring the target group reaction is altogether a
different matter. Direct measurement is often not possible because the target group is far too large.
Somehow an indirect indicator or an estimate has to be found, based on a sample survey, or by measuring a related phenomenon. For example, if we want to measure the increase in income in a
certain area, the change in expenditure on certain items like housing, food and health may be used as
an indirect indicator.

A good indirect indicator has to be:

 Sufficiently valid, i.e. have a causal relation with the management’s question (validity);
 Measurable with an acceptable degree of accuracy (quantitative, objective);
 Sensitive enough to changes of the phenomena of interest (sensitivity);
 Simple (the data must be easy to collect) and as efficient as possible regarding the costs (cost
effective).

Indirect indicators should be used with care. Management has to be aware of the limitations in scope and validity of the indicators used in the system. Management is interested in getting answers to its questions, for which the indicators are a means. In the table below a classification of
some types of indicators is presented for three different types of development interventions, which
may provide an overview of the kind of information that is usually collected.

A classification of indicators according to intervention type

Monitoring of action:
- Supply/product delivery: resources used
- Service delivery: resources used; number of activities realised; frequency of contact
- Infrastructure construction: resources used

Monitoring of result:
- Supply/product delivery: product quality; quantity distributed
- Service delivery: coverage of service network; respect for the time scheme; quality of delivery
- Infrastructure construction: completion rate; timeliness; quality of the works

Monitoring of reaction (purpose level):
- Supply/product delivery (marketing): appreciation of the product; utilisation of the product; levels of production
- Service delivery (beneficiary contact): adoption rates; satisfaction level
- Infrastructure construction (use): rate of use; users' satisfaction; maintenance; administration; contribution of users

Monitoring of context (overall objective level):
- Supply/product delivery: competitive position; market fluctuation; economic policy; inflation rate; labour market; political stability; etc.
- Service delivery: client environment; economic setting; institutional setting; climate; etc.
- Infrastructure construction: distribution of benefits; public-administration policy; institutional setting; economic setting; etc.

Define the information flow

Once project management has clearly formulated monitoring questions and decided upon variables to
be used for measuring what management wants to know, it becomes possible to organise the flow of
information. The information flow runs from the collection of data up to the moment the document
has been written and has been sent to the persons who need it for decision-making. The table below
indicates how to organise the flow of information based on a list of information needs for monitoring
and what to do at each level of the agreed LFM.

Monitoring information needs and what to do:

- Data and information to be collected: make a complete list of all data to be collected for the indicators in the Log frame Matrix.

- Where will the data and information be collected from: specify data to be collected from within the project organisation (action and results monitoring) and data to be collected outside the project organisation, from the target group or other information-holding agencies. Investigate what information is readily available with other agencies.

- Method of data collection and by whom: data to be found within the project organisation can often be collected through direct observation and interviews and entered into the project information systems. For data to be collected from the target group, direct observation may be of limited use; other methods such as participatory group interviews, informal surveys, etc. have to be considered.

- How to be informed: information can be presented orally, in writing or with visual aids. Avoid excessive reliance on long formal reports when simple forms or tables can be used.

- When to be informed: ensure that information comes in time to support decision-making. When decisions are already taken, even the best information becomes useless. Timing is therefore essential to steer and control processes.
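One simple way to operationalise such a flow is to record, for every logframe indicator, its source, collection method, reporting vehicle and audience. The sketch below (Python) shows one possible structure with invented entries; it is illustrative only, not a prescribed EDF format.

    # Sketch of an information-flow plan: for each indicator, record where the
    # data come from, how they are collected, and who receives them when.
    # All entries below are invented examples.
    information_flow = [
        {
            "indicator": "Adoption rate of improved seed among target farmers",
            "source": "target group (sample survey)",
            "method": "participatory group interviews, twice a year",
            "collected_by": "field extension officers",
            "reported_in": "6-monthly progress report",
            "used_by": "project management and Steering Committee",
        },
        {
            "indicator": "Expenditure against the annual work plan",
            "source": "project accounts (internal)",
            "method": "direct extraction from the financial system, monthly",
            "collected_by": "project accountant",
            "reported_in": "monthly report to the line ministry",
            "used_by": "PSU and NAO",
        },
    ]

    for item in information_flow:
        print(f"- {item['indicator']}: {item['method']} -> {item['reported_in']}")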

The concept of indicators in the evaluation context


Definition
In the evaluation context, the most important indicators are linked to criteria defining the success of
the project or programme. An indicator can measure a fact or an opinion. The measurement given is
usually approximate. An indicator can be developed specifically for an evaluation (ad hoc indicator),
or it can be taken from a monitoring system or from statistical data.
Indicator typology
Indicators can be classified:

 According to the handling of the information: elementary, derived, compound indicators;

 According to the comparability of the information: specific or generic indicators, key indicators;

 According to the scope of information: indicators of context or programme;

 According to the stages of programme completion: indicators of resources, implementation, results, and impacts;

 According to the evaluation criteria: indicators of relevance, effectiveness, efficiency, performance;

 According to the quantification method and the use of the information: monitoring and evaluation
indicators.

The system of indicators for EDF and STABEX projects and programmes
In general, indicators are associated with the different levels of a programme’s objectives.

Linking programming and evaluation

[Diagram: implementation links inputs to outputs, results and impacts; outputs correspond to operational objectives, results and effects to specific objectives, and impacts to global objectives.]

Source: Adapted from EC 1997

There are operational, specific and general indicators.

For the programmes financed under EC Funds, a three-tiered structuring approach is normally
adopted:

 The overall level of the programme with which the overall objective is associated. This level
comprises priorities which break down the overall objective into its main strategic
dimensions;
 The measures level, which corresponds to the basic unit in programme management, with
each measure subject to a specific management tool;
 The project level, which is the implementation unit of the programme.

What makes a good indicator for monitoring and evaluation?

Indicators meet several main criteria:


Single link: they are directly deduced from the strategic objectives to which they relate and can be
used to measure the degree to which these are accomplished.
Relevance: of all the indicators which can be used to measure the achievement of the objective in all its dimensions, only the most relevant must be retained, so as to limit the parameters to be followed and the information-collection efforts to be made.
Clarity: indicators must be comprehensible and easy to define. Definitions of indicators must be
short, explicit and unambiguous.
Consistency: the chosen indicators must be able to stand the test of time and serve as a solid base for
evaluation. The data must be compared and monitored over time.

Effectiveness: the measurement of indicators must not require too much energy or time, or too many resources. The system must remain economically effective. In this field, “better” is the enemy of “good”. The most refined indicators are often those for which data provision is the most difficult and costly. The purpose of the indicator is not to describe a situation perfectly but rather to provide a stakeholder/decision-maker with a relevant indication.
Traceability: it must be possible to measure the values of the retained indicators at regular intervals.
If the lapse of time between two measurements is too long, the evaluation will not be useful as a
decision-making tool, and will instead prove to be stilted and useless.
Sensitivity: it is essential for the programme’s influence on the objective to be reflected in the
indicator. The indicator must therefore prove to be sensitive to the intervention action. At regular
intervals, the efforts made will be reflected in positive or negative movements in indicator value.
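As a rough illustration, the seven criteria can be turned into a screening checklist. The sketch below (Python) assumes a simple yes/no assessment per criterion, which is a deliberate simplification; in practice each judgement should be reasoned and documented.

    # Hedged sketch: screen a candidate indicator against the seven criteria
    # described above, using simplified yes/no assessments.
    CRITERIA = ["single link", "relevance", "clarity", "consistency",
                "effectiveness", "traceability", "sensitivity"]

    def screen_indicator(name: str, assessment: dict) -> None:
        """Print which of the seven criteria a candidate indicator fails."""
        failed = [c for c in CRITERIA if not assessment.get(c, False)]
        verdict = "retain" if not failed else "revise (fails: " + ", ".join(failed) + ")"
        print(f"{name}: {verdict}")

    # Invented example assessment:
    screen_indicator(
        "Number of pupils per teacher in target areas",
        {"single link": True, "relevance": True, "clarity": True,
         "consistency": True, "effectiveness": True,
         "traceability": True, "sensitivity": False},
    )
    # -> Number of pupils per teacher in target areas: revise (fails: sensitivity)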

What makes a good indicator?

[Diagram: the seven main criteria – univocal (single) link, relevance, clarity, consistency, effectiveness, traceability and sensitivity.]

A good indicator is also:
- Relevant
- Accepted (by the stakeholders, etc.)
- Credible (for non-experts, easy to interpret)
- Easy to monitor (low cost)
- Robust against manipulation

From evaluation questions to indicators

The most difficult step is to link each evaluation question to the analysis of the chosen indicators of progress, result and change. The last part of the evaluation process, after data analysis, is the judgement that answers the different evaluation questions; that is why evaluation questions have to be linked to indicators.

To evaluate, both monitoring and context indicators are necessary.

[Diagram: monitoring indicators track inputs, outputs, results and specific impacts; context indicators cover outputs, results, specific impacts and overall impacts; evaluation draws on both.]

Indicators linked to the evaluation dimensions

Each evaluation dimension can be informed by a derived indicator built from two elementary indicators:

- Relevance: number of places in training to be covered / number of places in training needed to meet demand
- Efficiency: budget absorbed / new businesses created
- Effectiveness: number of trainees entering the labour market / expected level
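The derived indicators above are simply ratios of elementary indicators. The short worked sketch below (Python) makes this explicit; all numbers are invented for illustration.

    # Worked sketch: derived indicators as ratios of elementary indicators.
    places_offered = 480         # places in training to be covered
    places_demanded = 600        # places needed to meet demand
    budget_absorbed = 1_500_000  # budget absorbed (EUR)
    businesses_created = 120     # new businesses created
    trainees_employed = 300      # trainees entering the labour market
    expected_employed = 400      # expected level

    relevance = places_offered / places_demanded           # coverage of demand
    efficiency = budget_absorbed / businesses_created      # cost per business
    effectiveness = trainees_employed / expected_employed  # achievement rate

    print(f"Relevance: {relevance:.0%} of demand covered")              # 80%
    print(f"Efficiency: EUR {efficiency:,.0f} per business created")    # EUR 12,500
    print(f"Effectiveness: {effectiveness:.0%} of expected placements") # 75%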

Example of an evaluation question

To what extent has EC support improved the capacity of the educational system to enrol pupils from disadvantaged groups without discrimination?

The judgement criterion (also called reasoned assessment criterion) specifies an aspect of the evaluated intervention that will allow its merits or worth to be assessed in order to answer the evaluation question. For instance:

Judgement criterion derived from the question

Capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality.
The judgement criterion gives a clear indication of what is positive or negative, for example:
"enhancing the expected effects" is preferable to "taking potential effects into account".
The question is drafted in a non-technical way with wording that is easily understood by all, even if it
lacks precision.

The judgement criterion focuses the question on the most essential points for the judgement.

Yet the judgement criterion does not need to be totally precise. In the first example the term
"satisfactory quality" can be specified elsewhere (at the indicator stage).
It is often possible to define many judgement criteria for the same question, but this would complicate
the data collection and make the answer less clear.

In the example below, the question is treated with three judgement criteria (multicriteria approach):

 "capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality"
 "capacity of the primary school system to enrol pupils from the poorest urban areas with
satisfactory quality"
 "Capacity of the primary school system to enrol girls ".

A judgement criterion corresponding to the question


The judgement criterion should not betray the question. In the following example, two judgement
criteria are considered for answering the same question:

 "capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality"
 "primary school leavers from ethnic minority X pass their final year exam "

The first judgement criterion is faithful to the question, while the second is less so in so far as it
concerns the success in primary education, whereas the question concerns only the access to it. The
question may have been badly worded, in which case it may be amended if there is still time.

From judgement criteria to indicators

An indicator describes in detail the information required to answer the question according to the
judgement criterion chosen, for example:

Indicator derived from the judgement criterion

 Number of qualified and experienced teachers per 1000 children of primary-school age in areas
where ethnic minority X concentrates

Not too many indicators


It is possible to define many indicators for the same judgement criterion. Relying upon several indicators allows for cross-checking and strengthens the evidence base on which the question is answered. However, an excessive number of indicators involves a heavy data-collection workload without necessarily improving the soundness of the answer to the question.

In the examples below three indicators are applied to a judgement criterion ("capacity of the primary
school system to enrol pupils from ethnic minority X with satisfactory quality"):

 "Number of qualified and experienced teachers per 1000 children of primary-school age in
areas where ethnic minority X concentrates"
 "Number of pupils per teacher in areas where ethnic minority X concentrates"
 "Level of quality of the premises (scale 1 to 3) assigned to primary education in areas where
ethnic minority X concentrates ".
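The chain from question to judgement criterion to indicators can be written down as a simple structure, which also helps keep the number of indicators per criterion in check. The sketch below (Python) reuses the schooling example; the structure itself is only an illustration, not a required format.

    # Sketch of the question -> judgement criterion -> indicators chain,
    # using the schooling example from the text.
    evaluation_question = (
        "To what extent has EC support improved the capacity of the "
        "educational system to enrol pupils from disadvantaged groups "
        "without discrimination?"
    )

    judgement_criteria = {
        "capacity of the primary school system to enrol pupils from "
        "ethnic minority X with satisfactory quality": [
            "qualified and experienced teachers per 1000 children of "
            "primary-school age in areas where minority X concentrates",
            "pupils per teacher in those areas",
            "quality of premises (scale 1 to 3) in those areas",
        ],
    }

    print(f"Question: {evaluation_question}")
    for criterion, indicators in judgement_criteria.items():
        print(f"Criterion: {criterion}")
        for ind in indicators:
            print(f"  indicator: {ind}")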

Section 6: Results-Oriented Monitoring (ROM) for EC
external assistance (projects and programmes)

In this section the manual explains the EC ROM system, which has to be seen as an external, outsourced ROM system, not to be confused with internal Results-Oriented Monitoring. The term ROM was coined by the EC services some 15 years ago; in this section, ROM refers to the EC ROM system.
There is an intense discussion about the effectiveness of development cooperation. The growing
demand for development efficiency is largely based on the realisation that achieving good ‘products’
is not enough. Today the question is not ‘What have we done?’, but: ‘What have we achieved in terms
of results?’. Managing for results emerged as a ‘new’ approach which does more than attempt to focus
on ‘results’ and to align development cooperation mechanisms. Management for results also tries to
structure the somewhat confusing terms used by the international development community when
dealing with results.

The ROM system for EC External assistance (external monitoring)

The European Community presently commits billions of Euro per annum to external assistance
programmes. The primary objectives of the Community’s assistance programmes are to
reduce poverty, strengthen democracy, human rights and gender equality, support integration
into the global economy, maintain peace and stability and facilitate socially and
environmentally sustainable economic development.

Within the framework of these wider values and in line with the Millennium Development Goals, the
Community has identified six areas where it believes it is able to add genuine value and where its
interventions complement and reinforce the efforts of other bilateral and multilateral donors: the link
between trade and development; regional integration and co-operation; support for macro-economic
policies and equitable access to social services; transport; food security and rural development; and
institutional capacity building related particularly to good governance and the rule of law.

Evolution of EC ROM

The EC has long opted for results-oriented external co-operation, notably including results/progress monitoring, project performance enhancement and quality assurance of operations. ROM covers a well-defined part of this strategic results management, with systematic assessments during the project life cycle as well as ex post.

ROM supports the ambitious efforts of the Paris Declaration to improve aid practices and
effectiveness, designed to help developing countries achieve the Millennium Development Goals
(MDGs) by 2015. The European Consensus has made these commitments more concrete for the EC
and all of the EU Member States. ROM criteria and sub-criteria already address the key thematic issues (human rights, gender equality, democracy, good governance, children's rights, indigenous people, conflict prevention, environmental sustainability and HIV/AIDS).

Simultaneously, the EC ROM system has been expanded:

 in depth, by focusing more on the direct usefulness of the wealth of information on projects/programmes in the field, to produce performance reports and diagnostic analyses;

 in length of the project cycle, by having external monitoring during the project life cycle, but also ex post;

 by differentiation of the methodology, using a specific ROM methodology to monitor progress in the EC-funded Sector Policy Support Programmes (SPSP) and for ex post ROM.

What is EC ROM?

ROM is the regular review (a second opinion) of how a project is progressing in terms of resource use, implementation and delivery of results, in order to help the project management achieve the final objectives.

Most current monitoring systems focus on activities and outputs (e.g. the training of primary school teachers). ROM additionally focuses on results, including outcomes (e.g. the number of children taught by these teachers and the quality of the teaching provided) and impact (e.g. the increased number of children in school, especially girls). This external monitoring is also unlike the internal monitoring done by those directly involved in a project or programme: ROM clearly separates the management and the monitoring functions, as ROM is done by independent experts who can therefore take a more objective view of the project's performance. For the past ten years, ROM missions have been contracted on a regional basis every three years. More than 10,000 monitoring reports (on 6,000 projects since 2000) are in the database, and extensive qualitative and sectoral analyses have been made. The diagram below shows an evolution of performance by sector.

[Bar chart: share of projects per grade – very good (a), good (b), problems (c), serious deficiencies (d), N/A – by sector, from 0% to 100%.]

Source: Particip ROM coordination reports
Conventional EC ROM of projects/programmes

Conventional ROM is based on the methodology as described in the Monitoring Handbook and
applies to all projects, programmes, regional programmes and thematic budget lines, whether
managed by the ECHQ or EUDs. As the methodology is well established, it will not be described at
length here.

The Log frame is the foundational tool for monitoring standard projects/programmes. However, the
use of the Log frame as an effective tool for EC ROM does depend on a number of factors, including:
 The existence of a Log frame: Most of the projects/programmes are now designed with the
use of the Log frame approach, and this situation has dramatically improved in the past years.
 The quality of the Log frame: How, and by whom, was the Log frame constructed? Were the
primary stakeholders involved? Do the risks and assumptions adequately address their interests?
Do the indicators fulfil the accepted standards for identifying Objectively Verifiable Indicators
(OVIs) in terms of quantity, quality and time factors?
 The continuing relevance of the Log frame: Has the Log frame been updated? Does it
reflect the changes that may have occurred since implementation commenced? There is a need
for the EC to address the continuing hesitation to update the originally designed Log frame of
the Financing Agreement in subsequent working documents, such as Programme Estimates.

Some features of a ROM mission in ACP countries

Writing the Monitoring Reports and the on-line response sheet

Documents leading to the Monitoring Reports (MRs)

 Monitor’s personal notes

- No specific format

- Should be legible and clear

- Used in the event that MRs are criticised or questioned

 Background conclusion sheet (BCS)

- Must be completed by the monitors

- Specific format (main criteria with related sub-criteria)

- Grade the issues (a = very good, b = good, c = problems, d = serious deficiencies)

- Allows judgements to be made based on written comments

The on-line response sheet

Responsibility for reporting and follow up

Overview of Results-Oriented Monitoring systems of other development agencies

Agency    Feedback to    Lessons    Project portfolio        Accountability
          management     learned    performance overview

EC        X              X          X
DFID      X              X
US-DOS    X              X
USAID     X              X          X                        X
WB        X              X          X
SIDA      X
BTC       X              X
GTZ       X              X          X                        X
AECID     X
LuxD      X              X
AFD       X
NL-MFA    X              X          X

Source: Particip ROM Coordination reports

Weaknesses of different ROM systems, used by Development agencies:

• Nonexistence/low quality of indicators; monitoring less precise and objective (EC, DFID UK,
LD, NMFA),

• Not all projects assessed (EC),

• “Learning” captured by external monitors and not “in house” (EC),

• Alignment with PG monitoring systems should be enhanced (LD),

• Improving link between on-going implementation and design phase (AFD),

• Result chains used not always detailed (NMFA),

• Need for indicators for process-oriented interventions such as policy dialogue or institutional strengthening; results-based monitoring of contributions to multilateral organisations, international agencies and civil society organisations remains difficult (NMFA).

Guidelines

(Based on the Workshop “Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and Projects in Sudan”, 19-21 December 2010, Findings and Recommendations)

What is the difference between implementation monitoring and results monitoring?

This manual is concerned with results monitoring and evaluation and its reporting. The workshop results showed that most line ministries are concerned with implementation monitoring and its status reporting to the hierarchy of project steering committees. Study reports on the current status of foreign aid disbursed and delivered to Sudan for Recovery and Development (2005-2009) are regularly prepared to show the financial monitoring of programmes and projects.

The following elements are key features in Implementation monitoring of projects and programmes.

 Description of the problem or situation before the intervention


 Benchmarks for activities and immediate outputs
 Data collection on inputs, activities and immediate outputs
 Systematic reporting on provision of inputs
 Systematic reporting on production of outputs
 Designed to provide information on administrative, implementation and management issues, as opposed to broader development effectiveness issues

The following elements are necessary for results monitoring for a range of interventions and
strategies:

 Baseline data to describe the problem or situation before the intervention (Section 3)
 Indicators of outcomes (Section 5)
 Data collection on outputs and how and whether they contribute toward achievement of
outcomes
 More perceptions of change among stakeholders (Section 3)
 Systematic reporting with more qualitative and quantitative information on the progress toward outcomes

Reporting in Implementation monitoring

The report formats have been described for ROM and for evaluation; they are standard EC formats that can be easily adjusted to Sudanese administrative requirements. For internal monitoring there are the Internal Monitoring Sheets as used by the EUD (see the sample below).

For implementation monitoring and reporting to the hierarchy and Steering Committees, the following reports are used. There is no standard format available across the Sudan administration, and standardisation of formats was recommended by the participants of the workshop in order to harmonise M&E and reporting practices.

These are the reports used by SIFSIA:

 All reports have been simplified focusing on outputs and outcome, more results based than
activity focused. The reports are less narrative and more quantitative in approach.
 There are quarterly and annual reports addressed to the Steering Committee and donors. Later
the quarterly report became 6 monthly reports.
 Sometimes Status reports or Overall reports are asked by the Steering Committee.

 Quarterly progress reports or six-monthly reports are prepared by the PSU. They give details about project implementation and achievements; they also report on project performance and the level of satisfaction of project stakeholders.
 Annual reports are prepared by the PSU for the Steering Committee and give information on the financial monitoring of the disbursement of funds and their use; a review is undertaken yearly to assess progress made with respect to the annual work plan.

Sample of an Internal Monitoring Sheet used by the task manager in EU Delegations

Project Monitoring Sheet


Project Nr.: BHU/02/002 - Award ID 00011509 & BHU/02/003 – Award ID 00011510

Project Title: Rural Enterprises Development Programme

Project Document: Available

Status: Ongoing

Start Date/End Date: 08.2002 – 07.2007

Project Budget Total: • 2,049,497 USD under BHU/02/002/MoA (Award ID 00011509)
• 1,161,191 USD under BHU/02/003/MTI (Award ID 00011510)
Planned total: 4,192,000 USD

Funding Source: TRAC

Executing Agency: Ministry of Agriculture (MoA) &

Ministry of Trade and Industry (MTI)

Steering Committee: Project Management Committee (bi-monthly)

National Project Manager:

UNDP PA/Specialist:

Project Site:

Project Oversight and Monitoring

Progress Report System: Progress Reports (quarterly) by CO in cooperation with SNV

Work Plan: Annual work plan and budget for MTI 2002

Annual work plan and budget for MoA 2002

Work plan for MTI 2003

Work plan for MoA 2003

Work plan for MoA 2004

Work plan for MTI 2004

Progress Reports: Minutes of the RED PAC meeting held on 26 June 2002

Minutes of the RED project meeting held on 20 December 2002

Minutes of the RED project meeting held on 27 December 2002

Quarterly Progress Report for MoA, Q4 2002

Quarterly Progress Report for MoA, Q1 2003

Quarterly Progress Report for MoA, Q2, 2003

Quarterly Progress Report for MoA, Q3, 2003

Quarterly Progress Report for MoA, Q4, 2003

Quarterly Progress Report for MoA, Q1, 2004

Quarterly Progress Report for MTI, Q2 2003

Quarterly Progress Report for MTI, Q3, 2003

Quarterly Progress Report for MTI, Q4, 2003

Quarterly Delivery Rates Tracking

Key Documents

Project Fact Sheet: Project fact sheet

Logical Framework: Log frame

Indicator Tracking: Indicator tracking

Evaluation Reports: M&E Plan (update 03/2004)

Other Reports: • RED people Mapping
• Tracking sheet for Technical Assistance (update 01/2004)
• RED Marketing – Consolidated Progress Report and Future Action
• RED Marketing – Japan Market Report for Handicraft Items & Artwork
• RED Marketing – Research report for Bhutan Crafts in the UK

Field Visits: None

Media reports:

Section 7: Country Programme Evaluation
Bilateral and Multilateral Country Programmes

Country Programme Evaluation follows the same approach as the other types of evaluation. The systematic approach, methods and tools, judgement criteria and evaluation questions are used in the same way as described in the other chapters.

The difference lies in the scope and in the use of more strategic objectives. The systematic impact of synergies between multilateral and bilateral cooperation is also important. Country programmes designed and implemented by different development agencies in a country can have synergies but also adverse effects.

The key comparative advantages of the multilateral institutions include their unique cross country
exposure, their close relationship with government institutions in partner countries and their role in
the international harmonisation process. Strengths of the bilateral cooperation include strong
relationships with civil society and private sector actors and flexibility to introduce innovative
approaches. Synergies between these two forms of cooperation contribute to a better achievement of
development objectives through mutual learning and exchange of experiences. Humanitarian aid relies heavily on bilateral and multilateral activities and their synergies, and is therefore included.

A recent Country Programme Evaluation was performed by DFID for Sudan in March 2010. (see
http://www.dfid.gov.uk/Media-Room/Publications/Evaluation-studies/)

[Diagram: the government of country X interacts with Swiss cooperation activities and multilateral institutions' activities. Swiss cooperation strategies are positioned strategically in relation to other donor activities and to multilateral strategies, with mutual influence through the sharing of different approaches and experiences, and with shareholding/provision of financial resources to the multilateral institutions.]
Schematic presentation of Swiss Development aid in relation to multilateral institutions

Why a Country Programme Evaluation? – Rationale

Recent international developments, including the Monterrey consensus and the Paris Declaration,
emphasise the partner government leadership role as well as harmonisation and collaboration among
development partners as a condition for increased aid effectiveness. Harmonisation and coordination
at country level are characterised by interaction between a multitude of actors, including other
bilateral donors, civil society and private sector actors, with the government institutions leading the
process. Within these multiple interactions, the relation between multilateral and bilateral cooperation
has unique characteristics that require particular attention.

Country Programme evaluation: the case of 10th EDF for Burundi

10th EDF, allocations for Burundi

Type           Sectors                                               Allocated amount (M €)

Envelope 'A'   Rehabilitation and rural development                  52
               Health                                                25
               General budget support                                90
               Non-focal sectors:                                    21
               - Good governance
               - Support to the National Authorising Officer (NAO)
               - Technical facility of cooperation
               - Support to non-state actors
               - Support to the Economic Partnership Agreement

Envelope 'B'   Funds to cover emergencies and unforeseen needs       24.1

Total                                                                212.1

Source: Republic of Burundi – EU: Country strategy and indicative national programme, 2008-2013.

Analysis of the intervention logic

The starting point of the evaluation should be the analysis of the underlying logic of the EC's
development cooperation. A multi-level analysis is needed, as the EC's objectives have evolved over
time in response to the changing environment and needs, and because the logic is set out in a variety
of documents and thus needs to be collated into a coherent framework. In its simplest form, a logic
model describes the theory and design of an intervention: how the intervention's activities and
outputs derive from its objectives and influence stakeholders and/or beneficiaries, leading to the
achievement of the intended outcomes in the short, medium and longer term. The logic model sets
out the key links from the activities to the long-term objectives, illustrating a "results chain", thereby
identifying the key relationships and enabling the identification of performance indicators along the
chain.

The following diagram represents the connections between the hierarchy of objectives and the chain
of results.

[Figure: Intervention logic, 9th EDF 2003-2007 – faithful effects diagram]
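
The "results chain" described above can also be made operational in an M&E system. The following
minimal Python sketch, with hypothetical level names and indicators invented purely for illustration,
shows one way of representing the chain so that performance indicators can be attached at each level:

    # Minimal sketch of a results chain with indicators attached at each level.
    # All statements and indicators below are hypothetical, for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class ChainLevel:
        name: str        # e.g. "Activity", "Output", "Outcome", "Impact"
        statement: str   # what this level is expected to deliver
        indicators: list = field(default_factory=list)

    # A simplified chain for a hypothetical rural development programme
    results_chain = [
        ChainLevel("Activity", "Rehabilitate rural feeder roads",
                   ["km of road rehabilitated per year"]),
        ChainLevel("Output", "Improved road network in target provinces",
                   ["share of villages within 5 km of an all-season road"]),
        ChainLevel("Outcome", "Better access of farmers to markets",
                   ["average travel time to the nearest market"]),
        ChainLevel("Impact", "Increased rural household income",
                   ["change in median household income against the baseline"]),
    ]

    # Walking the chain makes the key relationships explicit, level by level.
    for lower, higher in zip(results_chain, results_chain[1:]):
        print(f"{lower.name} -> {higher.name}: '{lower.statement}' "
              f"is expected to contribute to '{higher.statement}'")

Such a representation is only a sketch; in practice the intervention logic is reconstructed from the
programming documents themselves.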

Evaluation questions for the Country Programme evaluation

The objective of the evaluation is to assess the performance and achievements of the EC's past and
current assistance to Burundi. The evaluation will identify conclusions and key lessons that can be
drawn from past operations, and will provide the EC's policy makers and task managers with
recommendations useful for the implementation of the ongoing CSP and the Annual Programmes, as
well as for future programming.

The evaluation examines the relevance and coherence between the programming and implementation
of the EC National Indicative Programmes (NIP) for 2001-2007 and 2008-2013. It also covers a
consistency check of the NIPs against Burundi's Poverty Reduction Strategy.

The following evaluation questions have been developed:

Evaluation questions (sample)

1. Coherence of development strategy
   Question: To what extent did the design of the EC aid strategy take due account of the Burundian
   strategic priorities?
   Approach: It can be assessed whether EC programming is consistent with Burundi's main national
   policies in various sectors.

2. Combating poverty
   Question: To what extent and in which way has the EC contribution helped Burundi to make
   progress towards poverty reduction?
   Approach: According to the CSP 2003-2007, between 1990 and 2001 the poverty rate in Burundi
   increased from 40% to 69%. One of the overarching aims of the EC's development cooperation is
   the fight against poverty. Funds were allocated to a broad range of sectors, and various instruments
   were used to support Burundian people in need. The question can be assessed through analysis of
   statistical data on the social and economic performance of Burundi (for example, changes in per
   capita GDP or in health indicators).

3. Coordination with donors
   Question: To what extent does the EC coordinate with other donors to ensure better delivery of
   services?
   Approach: The EC is one of the largest donors in Burundi, while several member states provide
   bilateral support to the country. Coordination and harmonisation of aid can enhance efficiency and
   promote synergies among donor actions. The question can be examined through analysis of
   co-financing of projects, as well as of planning and monitoring mechanisms.

4. Relevance of EC support to post-crisis rehabilitation
   Question: Was it relevant that the EC supported post-crisis rehabilitation?
   Approach: Burundi was seriously hit by the civil war, and there is still much to be done in the field
   of rehabilitation. Programmes related to rural development, one of the focal sectors, targeted the
   improvement of the social and physical conditions of the rural areas. The question can be assessed
   by using EC and other donors' data.

5. Effectiveness of EC assistance
   Question: What are the achieved results of the EC assistance?
   Approach: Summary of results and programme outcomes; achievement of the indicators of the
   budget support.

6. Efficiency of EC aid
   Question: What is the efficiency of the EC support?
   Approach: Financial efficiency of EC sector support, as well as efficiency in programme
   management and contracting, can be examined. It also needs to be assessed how Burundi's
   readiness for general budget support was measured.

7. General Budget Support
   Question: Is GBS a more effective and efficient aid instrument than SBS programme support?
   Approach: General Budget Support has an increasing share in the 10th EDF compared to the
   previous EDF cycles. Main conclusions and lessons about the use, advantages and disadvantages
   of GBS can be drawn. Throughout the GBS cycle of operations, the following areas can be
   assessed:
     - Programming phase: national policy and strategy, macroeconomic framework, public
       financial management
     - Identification phase: donor coordination, performance measurement
     - Formulation phase: several areas are covered
     - Implementation phase: capacity development

8. Good governance
   Question: To what extent has EC support contributed to enhancing Burundi's capacity to
   strengthen justice and the rule of law?
   Approach: Good governance programmes can contribute to the reinforcement of democratic
   institutions, to the peace process, and to the strengthening of the capacities of the public
   administration.

9. Healthcare
   Question: How and in what way has the EC assistance contributed to tackling the health
   challenges of the country?
   Approach: Burundi struggles to improve its health system, while the population is growing rapidly
   and state resources are very limited. Health became a focal sector in the 10th EDF. Various health
   sub-sectors can be analysed under this question.

10. Macro-economic support
    Question: What are the main outcomes of the EC assistance at a macroeconomic level?
    Approach: It can be examined whether EC aid has any effect on debt management, inflation,
    balance of payments, etc.

11. Synergies
    Question: What are possible synergies between EC assistance and other multilateral and bilateral
    cooperation?
    Approach: The systematic use of synergies between multilateral and bilateral cooperation is one
    of the strategic objectives for effective aid. The key comparative advantages of the multilateral
    institutions include their unique cross-country exposure, their close relationship with government
    institutions in partner countries and their role in the international harmonisation process. Strengths
    of bilateral cooperation include strong relations with civil society and private sector actors and
    flexibility to introduce innovative approaches. Synergies between these two forms of cooperation
    contribute to a better achievement of development objectives through mutual learning and
    exchange of experiences.

Judgment criteria and indicators

Indicators measure the objectives to be attained, the resources used and the impacts obtained.
Indicators provide quantified information for the users of an evaluation and are often related to the
success criteria of the intervention. Indicators can measure an opinion or a fact. They can be designed
especially for a specific evaluation or be taken from an appropriate monitoring system. Three
questions are selected below and complemented with appropriate judgment criteria and indicators.

Topic: Macro-economic support
Question: What are the main outcomes of the EC assistance at a macroeconomic level?
Judgment criterion: Macroeconomic support enhanced the effectiveness and efficiency of the aid
provided to Burundi.
Indicators: Measured changes in the rate of inflation, public debt, balance of payments, per capita
GDP, etc.

Topic: Coordination with donors
Question: To what extent does the EC coordinate with other donors to ensure better delivery of
services?
Judgment criterion: The EC has added value in policy dialogue in areas that have a strong rationale
in terms of poverty reduction.
Indicators: Measure of how closely the specific coordination instances were connected to the paths
towards poverty reduction.

Topic: Coherence of development strategy
Question: To what extent did the design of the EC aid strategy take due account of the Burundian
strategic priorities?
Judgment criterion: The objectives of the CSP 2003-2007 and 2008-2013 are coherent with the
Burundian strategic priorities.
Indicators: Degree of alignment of CSP objectives with the stated needs and priorities of Burundi.
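
To make the indicator column concrete, the following small Python sketch shows how a change
indicator such as per capita GDP or the rate of inflation could be computed against a baseline. All
figures are invented placeholders, not actual Burundi data:

    # Illustrative only: simple change indicators computed against a baseline.
    baseline = {"per_capita_gdp_usd": 120.0, "inflation_pct": 12.0}  # invented
    latest   = {"per_capita_gdp_usd": 135.0, "inflation_pct": 9.5}   # invented

    def percent_change(before: float, after: float) -> float:
        """Relative change against the baseline value, in percent."""
        return (after - before) / before * 100.0

    for key in baseline:
        change = percent_change(baseline[key], latest[key])
        print(f"{key}: {baseline[key]} -> {latest[key]} ({change:+.1f}% vs baseline)")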

A sample ToR for EC Country Programme evaluations (EC reporting format) is available at
ec.europa.eu/europeaid/evaluation/methodology/

Glossary of Monitoring and Evaluation Terms
Accountability

Obligation for a manager of resources to demonstrate that work has been conducted in compliance
with the established plans, budgets, rules and standards and to report fairly and accurately on
performance results. It includes responsibility for the justification of expenditures, decisions or results
of the discharge of authority and official duties, including duties delegated to a subordinate unit or
individual. The effective discharge of accountability is predicated on clearly defined responsibilities,
performance expectations, limits of authority, and clarity on how the exercise of responsibility and
authority will be monitored and assessed. One of the main functions of monitoring and evaluation is
to contribute to strengthening accountability by providing objective information on the veracity of a
manager’s reporting.

Activity

Action taken or work performed to transform inputs into outputs.

Appraisal

An overall assessment of the relevance, feasibility and potential sustainability of a project or other
operational exercise. It is an assessment of the overall soundness of the project and a justification for
its implementation. Criteria commonly include relevance and sustainability. An appraisal may also
relate to the examination of opinions as part of the process for selecting which project to fund. The
purpose of appraisal is to enable decision-makers to decide whether the activity is in accordance with
mandates and represents an appropriate use of resources.

Assumption

Hypothesis about risks, influences, external factors or conditions that could affect the progress or
success of a project or a programme. Assumptions highlight external factors, which are important for
the success of project or programme, but are largely or completely beyond the control of management.

Audit

An exercise to determine if there is an adequate and effective system of internal controls for providing
reasonable assurance with respect to:

 Integrity of financial and operational information; compliance with regulations, rules, policies
and procedures in all operations; and safeguarding of assets;
 The economic and efficient use of resources in operations and identifying opportunities for
improvement in a dynamic and changing environment; and
 Effectiveness of programme management for achieving stated objectives consistent with
policies, plans and budgets.

Baseline

Data that describe the situation to be addressed by a project, programme or subprogramme and that
serve as the starting point for measuring performance. A baseline study is the analysis describing the
situation prior to the commencement of the project or programme, or the situation following its initial
commencement, to serve as a basis of comparison and progress measurement for future analyses. It is
used to determine the accomplishments/results and serves as an important reference for evaluation.

Benchmark

Reference point or standard against which performance or achievement can be assessed. A benchmark
often refers to an intermediate target to measure progress within a given period as well as to the
performance of other comparable organisational entities.
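
As a simple illustration of how a baseline and a benchmark are used together, the following Python
sketch (with invented figures) computes the share of the planned improvement achieved at a given
point in time:

    # Illustrative only: progress from a baseline towards an intermediate benchmark.
    baseline_value  = 40.0  # e.g. % of villages with an all-season road at the start
    benchmark_value = 60.0  # intermediate target set for the mid-term review
    current_value   = 52.0  # value reported by the monitoring system

    # Share of the planned baseline-to-benchmark improvement achieved so far
    progress = (current_value - baseline_value) / (benchmark_value - baseline_value)
    print(f"Progress towards benchmark: {progress:.0%}")  # prints 60%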

Beneficiary

The individual, group or organisation, whether targeted or not, that benefits, directly or indirectly,
from the implementation of a project, programme or output.

Best practice

Planning, organisation, managerial and/or operational practices that have proven successful in
particular circumstances and which can have both specific and/or universal applicability. Best practices
are used to demonstrate what works most effectively and to accumulate and apply knowledge about
how and why they work in different situations and contexts.

Bias

Anything that produces systematic error in an evaluation finding. Bias may result in over- or under-
estimating the object of evaluation or assessment.

Case study

The examination of the characteristics of a single case (such as an individual, an event, a programme
or some other discrete entity). A sample of multiple cases can also be examined to look for
commonalities and to identify patterns. Case studies are often used to gather qualitative information in
support of findings obtained through quantitative methods.

Causal relationship

A logical connection or cause-effect linkage ascribed to the relationship between
accomplishments/results and efforts to achieve them, or between final results and their impact on the
target beneficiaries. Generally the term refers to reliably plausible linkages.

Conclusions

Conclusions present reasoned judgments based on a synthesis of empirical findings or factual
statements corresponding to specific circumstances. Conclusions point out the factors of success and
failure of the evaluated projects and programmes, with special attention paid to the intended and
unintended results and impacts, and more generally to any other strength or weakness. Conclusions
draw on the data collection and analyses undertaken, through a transparent chain of arguments.

Content analysis

A systematic approach to analysing themes in audio, visual, electronic or print communication.
Selected material is reviewed and assessed on the basis of predetermined criteria (such as the
reflection of key messages, accuracy, prominence, and reference to the sponsoring organisation).
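
One very simple form of content analysis, counting how often predetermined key messages occur in
a text, could be sketched in Python as follows (the keywords and text are hypothetical):

    # Illustrative only: counting predetermined key messages in a document.
    import re
    from collections import Counter

    key_messages = ["poverty reduction", "capacity building", "harmonisation"]
    text = ("The programme supported poverty reduction and capacity building. "
            "Donor harmonisation improved, and poverty reduction remained central.").lower()

    counts = Counter({m: len(re.findall(re.escape(m), text)) for m in key_messages})
    for message, n in counts.most_common():
        print(f"{message!r}: {n} mention(s)")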

Cost-benefit analysis

A specialised analysis which converts all costs and benefits to common monetary terms and then
assesses the ratio of results to inputs against other alternatives or against some established criteria of
cost-benefit performance. It often involves the comparison of investment and operating costs with the
direct and indirect benefits generated by the investment in a project or programme.
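
As a hedged illustration of the ratio described above, with invented monetary figures and without the
discounting of future flows that a full analysis would include:

    # Illustrative only: a simple benefit-cost ratio with invented figures.
    total_costs    = 1_200_000.0  # investment and operating costs, common monetary terms
    total_benefits = 1_800_000.0  # direct and indirect benefits, same terms

    benefit_cost_ratio = total_benefits / total_costs
    print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.50; > 1 suggests net benefit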

Cost-effectiveness

Comparison of the relative costs of achieving a given result or output by different means. It focuses
on the relation between the costs (inputs) and results produced by a project or programme. A
project/programme is more cost effective when it achieves its results at the lowest possible cost
compared with alternative projects with the same intended results.
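
Similarly, a minimal sketch (again with invented figures) comparing the cost per unit of result for two
alternative projects with the same intended result:

    # Illustrative only: cost per unit of result across alternative projects.
    alternatives = {
        "Project A": {"cost": 500_000.0, "results": 2_000},  # households reached (invented)
        "Project B": {"cost": 450_000.0, "results": 1_500},
    }
    for name, a in alternatives.items():
        print(f"{name}: {a['cost'] / a['results']:.0f} per household reached")
    # Project A: 250 per household -> the more cost-effective alternative
    # Project B: 300 per household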

Criteria

The standards used to determine whether or not a project or programme meets expectations.

Data collection method

The mode of collection to be used when gathering information and data on a given indicator of
achievement or evaluation. Collection methods include the review of records, surveys, interviews, or
content analysis.

Data source

The origin of the data or information collected. Data sources may include informal and official
records, individuals, documents, etc.

Description of results

Succinct statement based on the data collected on the performance measures at the indicator-of-
achievement level. It interprets and articulates such data in results-oriented language.

Effect

Intended or unintended change caused directly or indirectly by the delivery of an output, project or
programme.

Effectiveness

The extent to which a project or programme attains its objectives, expected accomplishments and
delivers planned outputs.

Efficiency

A measure of how well inputs (funds, expertise, time, etc.) are converted into outputs.

Evaluation

A process that seeks to determine as systematically and objectively as possible the relevance,
effectiveness and impact of an ongoing or completed project, programme or policy in the light of its
objectives and accomplishments. It encompasses their design, implementation and results, with a view
to providing information that is credible and useful, enabling the incorporation of lessons learned into
both executive and legislative decision-making processes. Evaluation is often undertaken selectively
to answer specific questions to guide decision-makers and/or programme managers, and to provide
information on whether the underlying theories and assumptions used in programme development
were valid, what worked and what did not work, and why.

Evaluation scope

A framework that establishes the focus of an evaluation in terms of questions to address, the issues to
be covered, and defines what will be analysed and what will not be analysed. The scope defines the
parameters of the evaluation and is presented in the “Terms of Reference”.

Evaluation team

Group of specialists responsible for the planning and conduct of an evaluation. An evaluation team
produces the evaluation report.

Evaluator

An individual involved in all stages of the evaluation process, from defining the Terms of Reference
and collecting and analysing data to developing findings and making recommendations. The evaluator
may also be involved in taking corrective action or making improvements.

Evidence

The information presented to support a finding or conclusion. Evidence should be sufficient,
competent and relevant. There are four types of evidence: observations (obtained through direct
observation of people or events); documentary (obtained from written information); analytical (based
on computations and comparisons); and self-reported (obtained through, for example, surveys).

Ex post evaluation

An assessment of the relevance, effectiveness and impact of a project or programme that is carried out
some time after its completion. It may be undertaken directly after or long after completion. The
intention is to identify the factors of success or failure, to assess the sustainability of results and
impacts, and to draw conclusions that may inform other projects and programmes.

External evaluation

An evaluation performed by entities outside of the programme being evaluated. Generally, it is
intergovernmental organs that commission such evaluations and receive the final reports on them. As
a rule, external evaluation of a project, programme or subprogramme is conducted by entities free of
control or influence by those responsible for the design and implementation of the project or
programme.

Focus group

A group of individuals having some common interest or characteristics, brought together by a
moderator, who uses the group and its interaction as a way to gain information about a specific or
focused issue.

Formative evaluation

Sometimes known as interim evaluation, it is conducted during the implementation phase of projects
or programmes to improve their performance. Formative evaluations may also be conducted for other
reasons, such as compliance, legal requirements or as part of a larger evaluation initiative. It is
intended for managers and direct supporters of a project.

Goal

The higher-order aim to which a programme is intended to contribute: a statement of longer-term
intent.

Impact

The overall effect of accomplishing specific results. In some situations it comprises changes, whether
planned or unplanned, positive or negative, direct or indirect, primary or secondary, that a project or
programme helped to bring about. In others, it could also connote the maintenance of a current
condition, assuming that that condition is favourable. Impact is the longer-term or ultimate effect
attributable to a project or programme, in contrast with an expected accomplishment and output,
which are geared to the biennial timeframe.

Indicator

A measure, preferably numerical, of a variable that provides a reasonably simple and reliable basis for
assessing achievement, change or performance. A unit of information measured over time that can
help show changes in a specific condition.

Indicator of achievement

Used to measure the extent to which expected accomplishments have been achieved. Indicators
correspond to the expected accomplishment for which they are used to measure performance. One
expected accomplishment can have multiple indicators.

Indirect effect

The unplanned changes brought about as a result of implementing a project or a programme.

Internal evaluation

Evaluation that is managed and/or conducted by entities within the programmes being evaluated.
There are two types of internal evaluation, namely:

(1) Mandatory internal evaluation (self-assessment)
(2) Discretionary internal evaluation (self-evaluation)

Input

Personnel, finance, equipment, knowledge, information and other resources necessary for producing
the planned outputs and achieving expected accomplishments.

Lesson learned

Generalisation derived from evaluation experiences with projects, programmes or policies that is
applicable to a generic situation rather than to a specific circumstance and has the potential to improve
future actions. A lesson learned summarises knowledge at a point in time, while learning is an
ongoing process.

Logical framework

Management tool (also known as a Log frame) used to identify strategic elements of a project or
programme (objective, expected accomplishments, indicators of achievement, outputs and inputs) and
their causal relationships, as well as the assumptions and external factors that may influence success
and failure. It facilitates planning, implementation, monitoring and evaluation of a project or
programme.

Methodology

A set of analytical methods and techniques appropriate for the evaluation of a particular activity,
aimed at collecting the best possible evidence needed to answer the evaluation issues and analytic
questions.

Monitoring

A periodic assessment by programme managers of the progress in achieving the expected
accomplishments and delivering the final outputs, in comparison with the commitments set out in the
programme budget.

Monitoring and Evaluation

Monitoring and evaluation together provide the knowledge required for effective project and
programme management and for reporting and accountability responsibilities.

Objective

Description of an overall desired achievement involving a process of change and aimed at meeting
certain needs of identified end-users within a given period of time. A good objective meets the criteria
of being impact oriented, measurable, time limited, specific and practical. The objective is set at the
next higher level than the expected accomplishments.

Outcome

“Outcome” is used as a synonym of an accomplishment or a result.

Output

A final product or service delivered by a project or programme to end-users, such as reports,
publications, servicing of meetings, training, advisory, editorial, translation or security services,
which a programme is expected to produce in order to achieve its expected accomplishments and
objectives. Outputs may be grouped into broader categories.

Participatory evaluation

A broad term for the involvement of various stakeholders in evaluation. It involves the collective
examination and assessment of a project or subprogramme by the stakeholders (programme managers
and staff included) and solicits views of end-users and beneficiaries. Participatory evaluations involve
reflective, action-oriented assessments of performance and accomplishment which yield lessons
learned and instructive practices.

Performance

The degree to which a project or programme delivers results in accordance with its stated objectives,
in a timely and effective manner, as assessed against specific criteria and standards.

Performance assessment

External assessment or self-assessment by programme units, comprising monitoring, reviews, end-of-
year reporting, end-of-project reporting, institutional assessments, and/or special studies.

Performance measurement

A system for the collection, interpretation and reporting of data for the purpose of objectively
measuring how well projects or programmes contribute to the achievement of expected
accomplishments and objectives and deliver outputs.

Performance monitoring

A continuous process of collecting and analysing data to compare how well a project, programme or
policy is being implemented against expected results.

Project

Planned activity or a set of planned, interrelated activities designed to achieve certain specific
objectives within a given budget, organisational structure and specified time period.

Project/Programme cycle management

A tool for understanding the tasks and management functions to be performed in the course of a
project or programme’s lifetime. This commonly includes the stages of identification, preparation,
appraisal, implementation/supervision, monitoring, evaluation, completion and lesson learning.

Project evaluation

Evaluation of an individual project designed to achieve specific objectives within specified resources,
in an adopted time span and following an established plan of action, often within the framework of a
broader programme. The basis of evaluation should be built into the project document.

Project document

A formal document covering a project, which sets out, inter alia, the needs, results, outputs, activities,
work plan, budget, pertinent background, supporting data and any special arrangements applicable to
the execution of the project in question. Once a project document is approved by signature, the project
represents a commitment of resources.

Qualitative data

Information that is not easily captured in numerical form (although qualitative data can be quantified).
Qualitative data typically consist of words and normally describe people's opinions, knowledge,
attitudes or behaviours.

Quantitative data

Information measured or measurable by, or concerned with, quantity and expressed in numerical
form. Quantitative data typically consist of numbers.

Recommendation

Proposal for action to be taken to enhance the design, allocation of resources, effectiveness, quality, or
efficiency of a project or a programme. Recommendations should be substantiated by evaluation
findings, linked to conclusions and include the parties responsible for implementing the recommended
actions.

Relevance

The extent to which an activity, expected accomplishment or strategy is pertinent or significant for
achieving the related objective and the extent to which the objective is significant to the problem
addressed.

Result

The measurable accomplishment/outcome (intended or unintended, positive or negative) of a project
or programme. In the Secretariat practice, "result" is synonymous with accomplishment and outcome.

Results-based management (RBM)

A management strategy by which the manager ensures that processes, outputs and services
contribute to the achievement of clearly stated expected accomplishments and objectives. It is focused
on achieving results and improving performance, integrating lessons learned into management
decisions, and monitoring of and reporting on performance.

Self-monitoring

Ongoing assessment by the head of a department or office of the progress in achieving the expected
accomplishments and delivery of outputs.

Stakeholder

Agencies, organisations, groups or individuals who have a direct or indirect role and interest in the
objectives and implementation of a project or programme and its evaluation. In participatory
evaluation, stakeholders assume an increased role in the evaluation process as question-makers,
evaluation planners, data gatherers and problem solvers.

Summative evaluation

A study conducted by independent evaluators at the end of a project or programme to measure the
extent to which anticipated results were achieved; to ascertain the effectiveness and relevance of
approaches and strategies; to indicate early signs of impact; and to recommend which interventions to
promote or abandon. Summative (or terminal) evaluation is intended to provide information about the
merit and worth of the project or programme.

Sustainability

The extent to which the impact of the project or programme will last after its termination; the
probability of continued long-term benefits.

Target

A specified objective that indicates the number, timing and location of what is to be achieved.

Target group

The main beneficiaries of a project or programme that are expected to gain from the results of that
project or programme. They are closely related to its impact and relevance.

Terms of Reference

Written document presenting the purpose and scope of the evaluation or inspection, the methods to be
used, issues to be addressed, the resources, schedule, and reporting requirements.

Triangulation

The use of three or more methods to conduct an evaluation or substantiate an assessment. By
combining multiple data sources or methods, evaluators seek to overcome the bias that comes from
single informants and single methods.

Validation

The process of cross-checking to ensure that the data obtained from one monitoring and evaluation
method are confirmed by the data obtained from a different method.
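
A minimal sketch (with invented figures and an assumed tolerance) of such a cross-check,
confirming that a value obtained with one method is consistent with the value obtained with another:

    # Illustrative only: cross-checking a survey estimate against administrative records.
    survey_estimate  = 0.62  # e.g. share of trained farmers, from a sample survey
    records_estimate = 0.58  # the same indicator, from administrative records
    tolerance        = 0.05  # maximum acceptable absolute difference (assumed)

    difference = abs(survey_estimate - records_estimate)
    if difference <= tolerance:
        print(f"Validated: methods agree within {tolerance} (difference {difference:.2f})")
    else:
        print(f"Not validated: difference {difference:.2f} exceeds tolerance {tolerance}")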

Work plan

A detailed document stating outputs to be delivered and activities to be carried out in a given time
period, how the activities will be carried out, and what progress towards expected accomplishments
will be achieved. It contains timeframes and responsibilities and is used as a monitoring and
accountability tool to ensure the effective implementation of the programme. The work plan is
designed according to the logical framework.

(Source: UN OIOS Monitoring, Evaluation and Consulting Division (MECD), and the Evaluation
Methodology for the European Commission's External Assistance)

Bibliography:
Slides and other training materials in this manual have been taken from training manuals on
Monitoring and Evaluation, prepared for the EC services by Demos International, Particip
GmbH and PricewaterhouseCoopers Advisory Belgium.

COMMISSION EUROPEENNE, DG Budget, Evaluating EU activities, a practical guide for the
Commission services, 2003.
http://www.europa.eu.int/comm/budget/evaluation/pdf/pub_eval_activities_full_en.PDF
COMMISSION EUROPEENNE, Collection Means: Evaluer les programmes socio-économiques (6
volumes), 1999.
COMMISSION EUROPEENNE, DG Regio, The Guide, 2003: http://www.evalsed.info/
COMMISSION EUROPEENNE, Communication on Evaluation, SEC (2002) 1051.
COMMISSION EUROPEENNE, Communication from the President in agreement with Mr Kinnock
and Mrs Schreyer to the Commission: implementing activity-based management in the Commission,
SEC (2001) 1197/6&7.
COMMISSION EUROPEENNE, Communication on Impact Assessment, COM (2002) 276 final.
COMMISSION EUROPEENNE, DG Budget, Evaluation ex ante: un guide pratique afin de préparer
les propositions de programmes de dépenses de l'Union, 2001:
http://www.europa.eu.int/comm/budget/evaluation/pdf/ex_ante_guide_fr.pdf
COMMISSION EUROPEENNE, DG Budget, Evaluer les programmes de dépense de l'Union, un
guide: évaluation ex post et intermédiaire, 1997.
COMMISSION EUROPEENNE, Communication to the Commission from Mme Grybauskaité, in
agreement with the President, Responding to strategic needs: reinforcing the use of evaluation,
SEC (2007) 213.
Règlement (CE) N°1260/1999 du Conseil du 21 juin 1999 portant dispositions générales sur les fonds
structurels.
L'utilisation des résultats des évaluations, Technopolis France and the Tavistock Institute, 2005.
BARBUT Laurent & BOUNI Christophe, « Évaluation des programmes structurels européens :
enseignements à partir de deux exemples contrastés », in BASLE Maurice, Jérôme DUPUIS &
Sylviane LE GUYADER (eds), Évaluation, action publique territoriale et collectivités, Paris,
L'Harmattan, 2002, pp. 37-60.
CERTU, L'évaluation des politiques publiques, la question des objectifs, compte-rendu de la journée
du 12 mars 2002 du réseau évaluation Cete/Certu, cahier n°4, ed. Certu, 2002.
OCDE, Guide des meilleures pratiques à suivre pour l'évaluation, Paris, OCDE, 1998.
WORLD BANK, Monitoring & Evaluation: some tools, methods and approaches, Washington DC,
2001.
WORLD BANK, Key Performance Indicator Handbook, Washington DC, 2000.
Guidelines for systems of monitoring and evaluation for the Human Resources Initiative EQUAL in
the period 2000-2006, DG Employment and Social Affairs, July 2000.

Sites of interest within the Commission

Other sites of interest

• www.educationcanada.ca : Canadian Evaluation Society
• www.seval.ch : Swiss Evaluation Society (SEVAL)
• www.eval.org : American Evaluation Association
• www.degeval.de : German Evaluation Society (DeGEval)
• www.valuazioneitaliana.it : Italian Evaluation Association
• www.eva.org : European Evaluation Society
• www.sfe.asso.fr : French Evaluation Society (SFE)
• www.evaluation.org.uk : UK Evaluation Society
• www.r-mevaluationsociety.dk : Ramboll Management A/S evaluation society
• www.evalsed.info : Evaluation of socio-economic programmes (EVALSED)

