Results Based Monitoring and Evaluation
Prepared by:
Dr Luc V. Zwaenepoel
EDF Unit with the support of Particip LT-TA
Ministry of International Co-operation
The guidelines were prepared by the European Development Fund (EDF) Unit at the Ministry of International Co-operation (MIC) with technical assistance financed by the EDF. The content is the sole responsibility of the Unit and the Consultant. The content is inspired by, and in line with, the Methodology on Results-Oriented Monitoring (ROM) and the Evaluation Methodology for the European Commission's External Assistance, and all related training materials.
Background and purpose of manual with guidelines
The European Development Fund (EDF) Unit at the Ministry of International Co-
operation (MIC) is mandated to provide capacity to EDF/STABEX projects and line
ministries in Project Cycle Management (PCM) to enable them to effectively prepare,
manage and implement European Union (EU) and other development assistance
programmes/projects.
As the mid-term reviews and evaluations of EDF/STABEX projects all follow Results-Oriented criteria in assessing the projects, the EDF Unit considers it important that projects institute and operationalise Results-Oriented M&E systems at the national and project levels. The guidelines can also be used as a training resource in Monitoring and Evaluation and Reporting for line ministries, states and EDF/STABEX funded projects.
The manual will assist EDF/STABEX projects and line ministries at national and state level to use an updated Log frame matrix in designing M&E systems that can track agreed indicators of achievement of the project purpose and results.
ABBREVIATIONS
CV Curriculum Vitae
EC European Commission
EU European Union
IR Inception Report
LF Logical Framework
MTR Mid-Term Review
OLAS On Line Accounting System (from EC) [obsolete; replaced by CRIS]
OO Overall Objective
PE Programme Estimate
PO Programme Officer
SIFSIA-N Sudan Institutional Capacity Programme: Food Security Information for Action
(North)
TA Technical Assistance
Overview and Background
Preamble
The manual is part of the guidelines to assist EDF and STABEX funded programmes and projects in
systematically designing, managing and implementing an M&E system and full reporting. The M&E
system and the reporting practices are embedded in the institutional framework of line ministries in
Sudan.
This manual closely follows the EC ROM methodology and the evaluation methodology for European Commission external assistance. The manual focuses on the systematic approach of Results-Oriented Monitoring and Evaluation, the planning and management of the process, and the tools and methods used.
When the term Results-Oriented Monitoring and Evaluation is used, the focus is on an assessment of the results-based management of projects and programmes. The introduction of a results-based approach in project and programme management includes improving management (aid) effectiveness and accountability by defining realistic expected results (Log frame analysis), monitoring progress towards the achievement of expected results, integrating lessons learned into management, and reporting on performance.
Definitions
Results-Oriented Monitoring or ROM is the systematic and continuous collecting, analysing and
making use of information for the purpose of management and decision making. Monitoring systems
provide information to the right people at the right moment to help make informed decisions. It
provides an early warning system which allows for timely and appropriate intervention if a project is
not adhering to plan.
Evaluation is a value judgment concerning a public intervention with reference to explicit norms and
criteria. It concentrates on needs, relevance, results and impacts. (See Glossary of monitoring and
evaluation terms)
Evaluation comprises control and monitoring. The latter activities, like performance audit, cover the study of the implementation process and direct effects, but Evaluation puts special emphasis on the production of results and impacts and how these were obtained (effectiveness). Evaluation also addresses questions of relevance, utility and sustainability.
Evaluation should be applied to all activities financed by the EC, but in particular to those directed to external assistance.
Audit is the verification of the legality of procedures and the regularity by which resources are used.
The concept covers traditional financial audit but increasingly also performance audit, the latter being
close to evaluation. Audit involves checking the legality of procedures and the regularity of resource
allocation. The focus is on identifying errors and malfunctions and judging according to criteria and
general standards which are known and specified in advance. This process also makes it possible to
compare different performances.
Reporting:
Monitoring and Evaluation are made against the logical framework (Log frame) of a project or programme. A Log frame explains the logic of the intervention (inputs-outputs-results-objectives); progress and results are measured by Objectively Verifiable Indicators (OVIs) over time. A ROM exercise focuses on needs (relevance, quality of design), inputs and outputs (efficiency) and objectives (effectiveness). Results-Oriented Evaluations focus more on precise questions such as aid effectiveness, impact and sustainability. Results-Oriented Monitoring can only address the potential sustainability and future impact of on-going projects and programmes.
The Logic of an Intervention
Column 1: This column presents the intervention logic: the hierarchy of objectives, from activities and results up to the project purpose and overall objectives.
Column 2: This column outlines how the design will be monitored and evaluated by providing the indicators used to measure whether or not the various elements of the operation design have occurred as planned.
Column 3: This column specifies the source(s) of information or the means of verification for
assessing the indicators.
Column 4: This column outlines the external assumptions and risks related to each level of the
internal design logic that is necessary for the next level up to occur.
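To make the four-column structure concrete, the sketch below (in Python, purely illustrative) models one row of a Log frame matrix as a simple record. All field names and example values are assumptions invented for the example, not part of any EC template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogframeRow:
    """One level (row) of a Logical Framework matrix."""
    intervention_logic: str           # column 1: the objective at this level
    indicators: List[str]             # column 2: Objectively Verifiable Indicators
    means_of_verification: List[str]  # column 3: sources of information
    assumptions: List[str]            # column 4: external conditions for the next level up

# Illustrative Purpose-level row; all values are invented.
purpose = LogframeRow(
    intervention_logic="Improved household income in the target communities",
    indicators=["Average household income up 20% by end of year 3"],
    means_of_verification=["Baseline and follow-up household surveys"],
    assumptions=["Market prices for local produce remain stable"],
)

print(purpose.intervention_logic)
for ovi, mov in zip(purpose.indicators, purpose.means_of_verification):
    print(f"  OVI: {ovi} (verified via: {mov})")
```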
Four main characteristics of the Monitoring and the Evaluation of projects
and programmes (EDF and STABEX financed)
1. Systematic approach
Monitoring and Evaluation can be internally or externally organised. The approach (planning,
implementation, communication and reporting of results and recommendations, follow up) is standard
and used for all EC financed projects and programmes. The systematic approach for EDF and STABEX funded projects and programmes is therefore standard in most ACP countries, as is ROM, the external monitoring system, now ten years old, which is outsourced to independent consultants. For internal monitoring there is systematic six-monthly reporting (the internal monitoring sheet) by the task manager in the EU Delegation and the Project Management Units.
The reporting structure for progress on projects and programmes comprises the well-known inception reports, progress reports, ad hoc reports, final reports, mid-term review reports, ROM reports, and final and ex post evaluation reports. All these types of reporting are described in the relevant sections and in the Glossary.
It is important to stress that definitions of terms like results, outputs, outcome, impact and potential sustainability (durability) are well understood, as they are used consistently in EC guidelines. Therefore the logical frameworks of programmes and projects, and their periodic revision, need to focus on outputs and outcomes beyond the immediate project activities and deliverables of external technical assistance.
2. The framework of evaluation criteria
The framework for Monitoring and Evaluation is the use of the project Log frame analysis and the link between the five evaluation criteria and the content of the Log frame.
Efficiency: the question of whether effects have been achieved at optimal (or, in the absence of a frame of reference, reasonable) cost.
3. System of indicators of results and outcome (all definitions of Monitoring and Evaluation
terms can be found in Glossary)
A second framework is the system of indicators used to measure progress and change. A baseline study at the start of the intervention can provide Objectively Verifiable Indicators to measure progress and results over time. Indicators can be organised in different ways, namely as a function of: relevance, effectiveness, efficiency, utility and sustainability.
4. A last characteristic of Monitoring and Evaluation is the standard approach, methods and choice of tools. A section will explain the methods and the toolbox in more detail, as this is a key factor for line ministries and EDF and STABEX projects and programmes.
Some chapters include Guidelines for a Results-Oriented Monitoring and Evaluation system in Sudan, including the recommendations and results of the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported programmes and projects in Sudan, 19-21 December 2010.
Why use the EC Manuals for monitoring and evaluation?
Most of the materials explained in this manual were prepared by the EC services and are standard procedures for the monitoring (internal and external) and evaluation (internal and outsourced) of the European Commission's External Assistance. EDF and STABEX funds are an important part of the financing of External Assistance. EC funds are under the direct management and accountability of the National Authorising Officer (NAO) in a Decentralised Implementation System (DIS) in Sudan, and all external monitoring and evaluation follow the methodologies for evaluation and monitoring of all EC external funding. The approach, methods and tools can easily be taught to and used by Sudanese institutions and organisations.
Most projects and programmes use Project Cycle Management, and an essential part of this system is the Logical Framework analysis and the Log frames. The Log frame is a key element in the design, implementation, monitoring and evaluation of programmes and projects, and is used by most bilateral and multilateral donors. A first important step in implementation monitoring is to check and review the existing Log frames and to reconstruct the logic of each intervention. This was also recognised by the participants of the workshop.
http://ec.europa.eu/europeaid/evaluation/methodology/examples/guide1_en.pdf
http://ec.europa.eu/europeaid/how/ensure-aideffectiveness/documents/
rom_handbook2009_en.pdf
http://www.undp.org/evaluation/handbook/Arabic/PME-Handbook-Arabic.pdf
Section 1: The Design of a Results-Oriented Monitoring
and Evaluation System
Designing a Results-Oriented Monitoring and Evaluation system and its reporting arrangements is, first of all, the introduction of a systematic, harmonised approach to the monitoring, evaluation and reporting of results-based managed projects and programmes. It requires a number of interrelated steps, which will be explained for the monitoring function and the evaluation requirements. These steps are valid for all existing M&E directorates or units in Sudan, operational and planned.
The monitoring function is a matter of designating responsibility for internal monitoring, not of organising additional units. The functional organisation for monitoring matters can differ between line ministries, project management units, EU delegations, and other relevant bilateral and multilateral organisations. In compliance with quality standards, internal monitoring must be suitably organised and must possess appropriate human and financial resources. Therefore, an Internal Monitoring Coordinator could be appointed, who reports within line ministries to their institutional partners such as the Ministry of International Cooperation, the NAO and the EU Delegation (Task Manager). A project Steering Committee, with the above-mentioned partners as members, is often in place for EDF and STABEX projects and programmes.
A monitoring exercise starts with a monitoring mandate for the internal monitoring coordinator and his team of monitors to prepare the annual monitoring work plan, implement the work plan, manage the process and assure the quality of all required reports. The following reports are mandatory for these projects and programmes: inception reports, monthly progress reports, quarterly reports, synthesis reports, ad hoc reports and final reports. Furthermore, information can be found in external ROM reports, mid-term reviews, and final and thematic evaluation reports.
Step 3: Have the appropriate Human Resources and budget to finance the internal and external
monitoring function.
The internal monitoring coordinator shall clearly identify human and financial resources, divide and allocate them, define missions and follow the procedures.
Step 4: Decide on what needs to be monitored, define selection criteria for projects and
programmes to be regularly monitored
Define selection criteria for internal monitoring, the timing during the project cycle, the selection of projects that are off-track, and sectoral monitoring. The final decision as to what needs to be monitored by the relevant institutional structures is made by the internal monitoring coordinator and the steering committees. It is important to coordinate with the NAO, the EDF Unit and the EU Delegation and to have a clear view of the yearly external ROM mission programme. A separate section is devoted to external ROM, the monitoring process used for all EC financed projects and programmes worldwide.
Step 5: Make an annual rolling work plan for monitoring missions and reporting
Make a rolling work plan over a one-year period, listing projects for monitoring and re-monitoring. Provide five days per project and two days for reporting.
Make sure that monitoring missions are fixed well in advance and agreed with relevant partners and
project management.
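As a rough illustration of the capacity arithmetic behind such a plan, the sketch below budgets monitor-days using the five days per project and two days of reporting suggested above; the number of projects and the available days per monitor are invented assumptions.

```python
# Capacity check for an annual monitoring work plan, using the five days
# per project and two days of reporting suggested above. The number of
# projects and the days available per monitor are invented assumptions.
DAYS_PER_MISSION = 5
DAYS_REPORTING = 2

def monitor_days_needed(new_projects: int, remonitored: int = 0) -> int:
    """Total monitor-days for first visits plus re-monitoring visits."""
    missions = new_projects + remonitored
    return missions * (DAYS_PER_MISSION + DAYS_REPORTING)

needed = monitor_days_needed(new_projects=12, remonitored=4)
available = 2 * 110  # e.g. two monitors with ~110 field days each per year
print(f"Monitor-days needed: {needed}, available: {available}")
print("Plan fits capacity" if needed <= available else "Plan exceeds capacity")
```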
Regular communication of results in interim and annual reports provides useful input for decision makers and management. Reports can be used by project and programme steering committees to define remedial and corrective actions, or be used for future programming. The reports are essential for informing the steering committee whether projects are being implemented according to the work plan, are facing difficulties, or are off-track. The internal monitoring sheet described in section 6 can be modified and used in the different line ministries. Where an MIS is available, all project and programme reports and the six-monthly monitoring reports can be regularly updated and shared with relevant partners. Lessons learned, proposed remedial actions and recommendations need operational follow-up and can be checked during the next monitoring mission. Project quality sheets and tracking sheets for contracts and programme estimates can be added to complete the monitoring.
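Where an MIS is available, the status reporting described above can be kept in a simple structured form. A minimal sketch, assuming hypothetical status categories that mirror the text (according to work plan, difficulties, off-track):

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Status categories mirror the text above. All names and data are illustrative.
STATUSES = ("on_track", "difficulties", "off_track")

@dataclass
class MonitoringRecord:
    project: str
    report_date: date
    status: str
    remedial_actions: List[str]

    def __post_init__(self):
        if self.status not in STATUSES:
            raise ValueError(f"Unknown status: {self.status}")

records = [
    MonitoringRecord("Rural water supply", date(2010, 12, 1), "difficulties",
                     ["Revise procurement schedule"]),
    MonitoringRecord("Extension training", date(2010, 12, 1), "on_track", []),
]

# Flagged items feed the steering committee agenda and are checked for
# follow-up at the next monitoring mission.
for r in records:
    if r.status != "on_track":
        print(r.project, "->", r.status, "| follow up:", r.remedial_actions)
```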
Checklists for the processing/approval of procurement/tendering documents, contracts, programme estimates and addenda, payments, de-commitments and closures are formats used for contract and implementation monitoring (see section 8); they should not be mistaken for Results-Oriented Monitoring.
The design of a comprehensive evaluation system has similarities but differs in scope, as was explained earlier. In this manual the evaluation process concerns all project and programme activities co-financed through and with EDF and STABEX funds.
The main goals of this type of evaluation are described below.
Evaluation types
Evaluation can be performed at different stages of the project or programme life cycle.
Ex ante evaluation takes place before an intervention starts, during the identification and formulation stage. The analytical process is often the result of a feasibility study or an impact assessment process.
Intermediary evaluation takes place in the middle of on-going interventions, especially to redirect them when the project is off-track.
Final evaluations are made before the end of the activities of a given project or programme.
Ex post evaluations take place after termination of the intervention, two to three years after implementation, so that the impacts have had time to materialise. Their durability and sustainability are taken into account.
To set up an evaluation function, a clear decision is made on designating responsibility. The structure and functional organisation of the evaluation function can differ from one line ministry or organisation to another. As for the monitoring function, human and financial resources need to be in place, and evaluation missions, responsibilities and procedures for all protagonists must be clearly defined.
The tasks of the evaluation function include:
• The overall coordination and follow-up of evaluation activities (from planning to reporting and use)
• Promoting quality and organisational learning through evaluation results
• Helping the other services to implement the evaluation policy
During the planning and programming process of evaluations, a number of questions need to be answered.
An evaluation starts in principle with an evaluation mandate. This document describes the context of the evaluation as well as:
• The motivation for the evaluation, its objectives and the timetable
• The way in which results shall be communicated
• The deadlines
• The quality criteria
The evaluation manager is the manager of the evaluation project. He/she is a member of the evaluation function and organises the evaluation process. This can be done internally with other in-house evaluation experts, or it can be outsourced. In the latter case the manager will prepare a bidding process for an evaluation by external consultants.
The evaluation manager will report to an evaluation steering committee. Members of this committee
are high level experts with knowledge of the specific sector and its activities. They are not part of the
evaluation activities but are responsible, as per the mandate, for following up on procedures and validating the procedures and reports.
The evaluation manager is responsible for all tasks and responsibilities before, during and after the evaluation process. He/she will, together with the Steering Committee, draft the Terms of Reference of the evaluation project. More precise information is given in a later section.
[Figure: the hierarchy of objectives and corresponding effects. Implementation objectives correspond to inputs; operational objectives to outputs (réalisations); specific objectives to results and effects; and global objectives to impacts.]
The project level, the ‘project’ being the basic unit of programme implementation
Step 7: Formulate evaluation questions related to what needs to be evaluated, and choose the approach.
Evaluation questions are derived from the evaluation criteria: relevance, effectiveness, efficiency,
sustainability and impact. The first step during an evaluation is to reconstruct the logic of the
intervention by linking the objectives to expected impacts, and by identifying relevant evaluation
questions. A later section will explain the drafting of evaluation questions and the Terms of
Reference.
The Terms of Reference explain to an evaluator what is expected from the evaluation and on which
information and other supports he/she can count. The evaluation criteria form the basis for any
evaluation.
Questions should be derived from the evaluation criteria and should be limited in number, targeted
and prioritised within the Terms of Reference.
The system of indicators is crucial to measuring progress and results over time. An analysis of the Objectively Verifiable Indicators from the Log frame is made, and other indicators can be requested from the evaluation team.
Step 10: Define methodology and tools for data gathering and analysis
The ToR will, especially when the evaluation project is outsourced, have a section explaining the
methodology and approach to be followed in data collection and analysis.
The next section gives broad guidelines on the data collection and analysis methods to be followed by the evaluators. In the case of a tender for an external evaluation, it is up to the contractor to fine-tune the suggested approach in discussion with the steering committee and the evaluation manager.
A classic evaluation is implemented in six stages: reconstruction of the intervention logic, basic data and information gathering, structural surveys, in-depth interviews, case studies, and analysis and assessment.
The person in charge of the evaluation will establish an evaluation process which should appear, at least in summary form, in the evaluation mandate (see above).
The evaluation manager and the Steering Committee will assess the quality of the draft final report, evaluating it against the agreed quality criteria.
The dissemination of results is organised and reports are sent to the different target groups. The evaluation reports are communicated to the responsible project or programme managers, decision makers, institutions, beneficiaries and stakeholders.
Source: Eastern Recovery and Development Programme, Monitoring and Evaluation Workshop, December 2010
Guidelines
(Based on the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and projects in Sudan, 19-21 December 2010, Findings and Recommendations)
What are the elements and parameters to look for when reviewing a Log frame?
Step 1. Check whether the Project Purpose is formulated as ONE objective which describes why the beneficiaries need the project. It is an objective (i.e. positive, thus not formulated with 'reduced' ...) which is achieved by the beneficiaries themselves by making use of the Results made available by the project and the prevailing Assumptions. The purpose speaks about the 'utility' level.
The purpose level reflects the RELEVANCE of the project. Although the matrix does not reveal the
problems, it should be a mirror image of the problem analysis. Examples are: Improved income,
enhanced performance, assured security, revived business, secured employment, integrated in society,
etc.
The purpose will have to be specified by an OVI (Objectively Verifiable Indicator): a number of parameters with target values (now and later) indicating the change the problem situation will undergo. The indicator should provide an explanation of what is actually meant by the objective. It should explain the change over time in quantity and quality, with a description or typology of the verb (what is meant by 'improved', 'secured'?), the subject (what is 'performance', 'employment'?) and the beneficiaries (for 'whom' and by 'whom'?).
Mostly the beneficiaries are people with special needs in society. That is mostly also the interest of the funding agencies, as public money should benefit society. But the beneficiaries could also be staff in organisations, institutions, departments and units. A distinction is made between final beneficiaries (end-users) and direct beneficiaries (the target group).
Quite often Log frames have several purposes formulated which, on analysis, turn out to be either results or even activities. In that case you need to reorganise the Log frame and place the right objectives at the right levels.
Step 2: Next, we examine the Overall Objectives. These are less important for the project itself but provide
information on the context of the project. They inform us WHY the project is IMPORTANT to
society. There can be several. Preferably each stakeholder will relate to one or more OOs as they want
to see how the project contributes to their wider objectives. Some donors like to see one or more
Millennium Development Goals listed among them. Just check whether these are reasonably
connected to the purpose. The Assumptions at Purpose level are relevant for the Purpose to contribute
to the Overall Objectives but these are far outside the scope of the project and thus not really very
important to the project design. They also determine the context in which the project is situated.
Step 3: Then we check the Results or Outcomes. These outcomes are the next most important element of the Logical Framework Matrix, because the project is responsible for making them available to the beneficiaries.
• Each Result must be quantified and qualified with OVIs (Objectively Verifiable Indicators). Only then do they become sufficiently specific for us to understand what they mean.
Step 4: Results plus Assumptions (at the same level in the matrix) should present a comprehensive package enabling the beneficiaries to make use of them and reach the Purpose.
Often we see Results formulated as an objective but actually being an Activity. These are also called Outputs, e.g. 'Training organised', 'Wells provided', 'Information disseminated', 'Rural banks established', implying a benefit but not making it explicit. We discover the difference between an Output and an Outcome by checking whether the objective can be done (Output) or only achieved (Outcome).
Step 5. As already mentioned, the Assumptions at Result level are important for positioning the services made available by the project alongside the other services required to benefit from them. Assumptions are also positive objectives, to be achieved and made available to the beneficiaries by sources other than the project.
Most Logical Framework Matrices lack Assumptions. People tend to think that the more Assumptions are mentioned, the riskier the project. However, the opposite is true. If you mention them you can monitor and anticipate them, whereas if you don't mention them they show up by surprise and can damage the project's success.
The Assumptions at the Activity level are most important because these affect the Results for which
the project is responsible. Again, these Assumptions are often ignored but they are crucial to assess
the potential effectiveness of the project. Monitors often need to think 'out of the box' to imagine the
situation of the beneficiary and discover important Assumptions.
The Pre-conditions are objectives that must be in place before the Activities can start. We usually see Pre-conditions on the side of the beneficiaries (e.g. 'beneficiaries are prepared to pay for services' or 'ownership assured') and on the side of the service deliverers or 'suppliers' (e.g. 'organisation able and qualified to implement the Activities'; 'contract signed'; 'funds available'; 'supportive policy').
A properly managed project preparation phase can be mentioned as a Pre-condition.
Section 2: Baseline Studies and the system of indicators
for results
A baseline study is made at the beginning of the intervention. It provides basic indicators of the actual state of the overall situation and of the need for progress and change through the intervention; it measures the actual state of the situation on site. Baseline studies for EDF/STABEX projects are conducted during the design phase. Some baseline information is available, but for some projects it is necessary to conduct or update a baseline prior to a new project intervention. Often this need will emerge during the inception phase of the project, when the whole project situation and the Log frame are updated and reviewed prior to full implementation.
Example in Sudan:
SIFSIA-N has contributed a food security module to the national integrated household survey under preparation by the Ministry of Finance & National Economy (Poverty Reduction Strategy Unit), the Central Bureau of Statistics and the Ministry of Social Development & Welfare. SIFSIA-N drew upon technical assistance from FAO headquarters (ESAF and ESS) to develop the food security module. The survey (funded primarily by the African Development Bank, ADB) is implemented by CBS in both northern and southern Sudan between March and April this year. It will provide very useful baseline information on poverty, since it will generate household budget and expenditure data and analysis in both rural and urban areas, including the transfer of remittances (an area little understood), for the first time on a nationwide basis.
A baseline study is organised to obtain a precise measurement of the chosen baseline indicators.
Baseline indicators reflect the state of the economic, social and environmental situation at a given time, at the beginning of the intervention, against which changes will be measured. This collection of data can then be compared with a study of the same characteristics carried out later, in order to see what has changed.
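A worked illustration of this before/after comparison: the sketch below computes the change in two baseline indicators; the indicator names and values are invented for the example.

```python
# Comparing follow-up measurements against baseline values, as described
# above. Indicator names and values are invented for the example.
baseline = {"households_food_secure_pct": 41.0, "avg_monthly_income": 350.0}
followup = {"households_food_secure_pct": 55.0, "avg_monthly_income": 420.0}

for indicator, before in baseline.items():
    after = followup[indicator]
    change_pct = 100.0 * (after - before) / before
    print(f"{indicator}: {before} -> {after} ({change_pct:+.1f}%)")
```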
There are two types of baseline indicators: context baseline indicators and impact related baseline
indicators.
Context indicators are used for an entire territory, population or population category. They do not
apply to the implementation of the programme and its effects. They always apply to the entire eligible
territory or target public, making no distinction between those who were affected by the programme
and those who were not.
In contrast, programme indicators concern only the part or category of the public or the part of a
territory which was actually affected. Their aim is to trace the direct and indirect effects of the
programme as far as possible.
In the context of monitoring and evaluation, a programme indicator can show that a specific
intervention is a success or that another is a failure. In contrast, a context indicator can show
that a specific intervention is still relevant, or that another no longer has a raison d’être.
Final specific objective: overall impact, i.e. the overall effect for the entire population concerned (direct and indirect beneficiaries).
“Effect” can be defined as any change caused by the implementation of the programme, whether
direct or indirect, immediate or long-term. The effects therefore cover results and impacts. In all
cases, surveys are useful for observing the apparent effects (gross effects), but not the real effects (net
effects) of a programme.
Account must be taken of windfall effects (a beneficiary may have made a definitive decision by the time he discovered that the programme would help him, yet he still benefits from the aid), substitution effects (in the case of interventions targeting individuals or groups of individuals) and displacement effects (in the case of interventions targeting geographical areas).
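The distinction between gross and net effects can be illustrated with a small worked calculation; all figures, including the shares attributed to windfall, substitution and displacement effects, are invented assumptions.

```python
# Net effect = gross (observed) effect minus the part that would have
# occurred anyway or merely moved elsewhere. All figures are assumptions.
gross_effect = 1000          # e.g. jobs observed among beneficiaries
windfall_share = 0.20        # would have acted even without the aid
substitution_share = 0.10    # gains made at the expense of non-beneficiaries
displacement_share = 0.05    # gains displaced from other areas

net_effect = gross_effect * (1 - windfall_share
                             - substitution_share
                             - displacement_share)
print(f"Gross effect: {gross_effect}, estimated net effect: {net_effect:.0f}")
```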
Baseline studies in different contexts and project situations
A first condition for starting a baseline study is to perform an in-depth stakeholder analysis for the sector, programme or project under review. The stakeholder analysis can be based on the following diagram.
The baseline study will then be based on the free and adequate participation of stakeholders and focus groups. Several approaches can be used, stand-alone or in combination: document review, field visits, quantitative and qualitative observations, interviews, focus group discussions and geo-data analysis. Primary data are generated from statistical analysis, if the statistics are significant.
Example: The International Labour Organisation (ILO) organised a worldwide survey within the organisation on knowledge sharing. The baseline study was based on a worldwide on-line questionnaire filled in by ILO staff. The purpose of such baseline studies is to find out what the ILO does well and where there are weaknesses, in order to better focus resources and efforts. The baseline study tool can be used in the future to measure progress.
Methodology
To conduct a baseline study, a participative methodology is used. The study will take three to four weeks of fieldwork, in addition to the analysis of documentation. During fieldwork, meetings are planned with the local project coordinator and the local partners responsible for implementing the project. The tools used are interviews, group discussions and observation of the operations of the project, included as part of visits to the different geographic locations of the project and to local governmental institutions and organisations. To obtain information, the common techniques for social studies are used: documenting, observing and interviewing, complemented by case studies and life histories. For the study of the social and economic realities of the beneficiaries and their families, however, it is necessary to use an appropriate sample of the population and interview a representative selection of beneficiaries. Approaches and tools are explained in the following chapter.
Information about indicators and sources of information is found in the Log frame matrix. Indicators are listed in the second column (Objectively Verifiable Indicators) and the third column of the Log frame shows the Means of Verification. The indicator explains what information will be collected; the means of verification identifies where the information will come from.
Primary data is collected by using surveys, meetings, focus group discussion, interviews or other
methods that involve direct contact with respondents.
Secondary data is existing data that has been or will be collected by others. Secondary data can be found in MTRs, evaluation and monitoring reports, data collected by organisations and government, and routine data collected by institutions participating in a project or programme (health centres, schools). This is good secondary data which could not be replicated, except at high cost, through new baseline studies.
Example: During emergencies, data about emergency food needs is very important. Emergency Food Needs Assessments (EFNA) can provide immediate criteria for baseline data.
The Available Information
Once the design and methodological issues are solved, they should be summarised in a study plan and
a budget.
Proposed outline
Summary
Background and purpose of study
Description of operational design and target beneficiaries
The objective
Data sources
Data collection
Units of study
Use of Secondary data
Primary data collection methods and techniques
Sampling description
Design
Questionnaire
Pre-test
Fieldwork
Field work team
Required training
Timetable of fieldwork
Quality control and supervision
Data processing and analysis
Data cleaning
Data entry and processing
Framework for analysis
Training in data management
Reporting
Outline and format of the study report
Presentation and dissemination of results
Annexes
Budget
Operational design
Guidelines
(Based on the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and projects in Sudan, 19-21 December 2010, Findings and Recommendations)
The first principle is to enhance the information system of the Public Administration with results information at the project, programme and policy level. This results information must move both horizontally and vertically in the organisation. This can pose political challenges. The demand for this information needs to be mapped out clearly, as well as the responsibility at each level.
The first challenge is that many organisations and agencies find it difficult to share information horizontally; information is already difficult to move vertically, due to strong political and organisational walls between one part of the system and another.
Given scarce resources and ambitious development objectives, development partners at all levels (multilateral, regional, country and governmental) need to leverage resources to achieve the desired goals. When resources for Monitoring and Evaluation are diminished, partners need to find solutions by combining resources, even during times of input constraints. Often programmes and projects have a budget line to finance monitoring, evaluation and auditing.
Section 3: Toolkit for monitoring and evaluation
There is a difference between various forms of monitoring; not all monitoring is results-based.
Activity or performance monitoring has an accounting function, keeping track of the activities completed. Operational accounting has specific tools and reporting formats.
Impact monitoring is aimed at measuring the ultimate effect of the activities in terms of changes in knowledge and skills (adoption rates). The impact of results and induced changes is measured and reported.
Financial monitoring keeps track of expenditures and assesses whether they are in line with the budget. It can also be associated with contract management and monitoring.
Results-Oriented Monitoring or ROM: The EC services coined the term ROM to denote their system of external Results-Oriented Monitoring of all EC financed projects and programmes (over EUR 1 million). A full explanation is given in section 6. It is important to stress the importance of EC ROM as a reporting tool, but also as a results-based methodology for internal monitoring.
• Monthly reports from line ministries: data to be used in monthly coordination meetings and Management/Steering Committee meetings
• Mid-term review (an on-going evaluation of project activities at the mid-stage of the project).
Some classical tools for data collection during monitoring and evaluation
Participatory Learning and Action (PLA) is a particular form of qualitative research used to gain an
in-depth understanding of a community or a situation. It is based on the participation of a range of
different people, including people from the community affected by the project or programme. The aim
is for people to analyse their own situation, rather than to have it analysed by outsiders, and for the
learning to be translated into action. This makes it a particularly useful tool for planning, monitoring,
review or evaluation of any kind of community development. It used to be called PRA (Participatory Rapid Appraisal or Participatory Rural Appraisal) and was initially used mainly for needs assessment in rural communities.
Triangulation: This is a method of cross-checking qualitative information. Information about the same project is collected in different ways and from at least three sources, to make sure it is reliable and to check whether it is biased.
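A minimal sketch of the triangulation rule, checking that each finding is supported by at least three independent sources; the findings and sources are invented for the example.

```python
# Checking that each finding is supported by at least three independent
# sources, per the triangulation rule above. All data are invented.
findings = {
    "Wells are functional year-round":
        {"field visit", "user interviews", "maintenance logs"},
    "Training improved record-keeping":
        {"trainer report"},
}

for finding, sources in findings.items():
    verdict = "triangulated" if len(sources) >= 3 else "needs more sources"
    print(f"{finding}: {len(sources)} source(s) -> {verdict}")
```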
Mixing tools and techniques: Using different tools and techniques gives greater depth to the
information collected.
Flexibility and informality: Plans and research methods are semi-structured and revised as the field
work proceeds.
In the community: Most activities are performed jointly with community members or by them on their own. This makes the tool particularly suitable for the monitoring and evaluation of rural development and community-related projects.
On the spot analysis: The expert team reviews and analyses its findings to decide how to continue.
Group interview is a technique whereby several people with homogeneous characteristics participate and provide qualitative information in a targeted discussion.
This technique was initially used in marketing circles to analyse the impact of publicity and marketing
strategies, and it is particularly constructive in investigating themes which are the subject of diverging
opinions which need to be bridged, or in untangling the threads of complex issues which are the
subject of numerous different interpretations.
This tool enables the collection of the perceptions of all those concerned by a project or programme
through the application of group participation techniques.
In a relatively short time-frame, this technique enables the collection of a large amount of in-depth
qualitative information concerning the opinions and values of those interviewed.
Grouping several people together encourages a general position to emerge, avoiding extreme
opinions; the group provides a kind of “social quality-control”.
The questionnaire survey technique was developed by opinion poll institutes between the wars and is often used today. It is based on standard questions asked of a sample of individuals who are representative of a population or, occasionally, of the entire population.
When applied to the field of evaluation, this tool serves mainly to collect information. The questions
should be associated with descriptions, standards or causal links.
Strengths and weaknesses: One of the strengths of this type of information collection is that large
numbers can be covered, making it a good tool for the implementation of quantitative analysis.
Nowadays there is both hardware and software which enables standardised and rapid processing of
responses.
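When the survey uses a representative sample rather than the whole population, the required sample size can be estimated with the standard formula for proportions, n = z^2 p(1-p)/e^2. The sketch below applies it under common default assumptions (95% confidence, p = 0.5).

```python
import math

def sample_size(margin_of_error: float, p: float = 0.5, z: float = 1.96) -> int:
    """Sample size for estimating a proportion:
    n = z^2 * p * (1 - p) / e^2 (infinite-population approximation)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# A +/-5% margin of error at 95% confidence needs about 385 respondents.
print(sample_size(0.05))
```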
Individual interviews are a favourite qualitative technique aiming to collect personal opinions and information on a specific project or programme, concerning everything from context to implementation, impact or results. Individual interviews can take various forms; the semi-directive interview is often used.
It can also be used in cases where a statistical study would be technically impossible, or would not be
representative.
This technique is simple and transparent, and is popular in finding a consensus in particularly
sensitive areas or areas of conflict.
The results are immediately clear to all, and can be used to highlight the more striking areas of
disagreement or those requiring additional information.
Tool 5: Logical framework analysis
SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) is a classic tool in strategic
analysis which has been developed since the 1950s, and is a support technique to decision-making
which focuses on strengths and weaknesses (internal perspective) and opportunities and threats
(external perspective).
In the initial stages of the analysis of a situation, SWOT analysis enables strengths and weaknesses of
an organisation to be identified, so that the determining and prevailing influential factors can be
highlighted. Relevant strategic lines can be developed from the project/environment (or
programme/environment) system of relations.
The undeniable strength of this tool is its relative simplicity of use, which enables us to break down a situation and establish an initial list of issues. Its weakness also lies in this simplicity, which may lead to analyses and conclusions that are too hasty and too subjective. There is, for example, no weighting of the items entered into the chart.
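The missing weighting noted above can be mitigated by scoring each item; a minimal sketch, with the categories from the SWOT chart and invented items and weights.

```python
# Scoring SWOT items to counter the 'no weighting' weakness noted above.
# Items and weights (1 = minor, 3 = major) are invented for illustration.
swot = {
    "strengths":     [("experienced field staff", 3), ("donor support", 2)],
    "weaknesses":    [("weak data systems", 3)],
    "opportunities": [("new national household survey", 2)],
    "threats":       [("funding gap next year", 3)],
}

for category, items in swot.items():
    total = sum(weight for _, weight in items)
    print(f"{category} (total weight {total}):")
    for item, weight in sorted(items, key=lambda pair: -pair[1]):
        print(f"  [{weight}] {item}")
```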
This tool can be used to gather many ideas quickly from a group of people by letting them freely
express their creativity and critical thoughts. It can often be used as a first step in a discussion that is then followed by other methods. In principle, brainstorming can be done individually or in a group.
It's a quick and enjoyable process. It stimulates involvement and cross-fertilisation of ideas. However,
most ideas are contributed from a few quick-thinking people. The method can work with small or
larger groups and can take as little as five minutes, depending on the subject, detail needed and
number of people. This method is commonly used in combination with other methods, for example, to
start a focus group session.
This technique is among the least standardised and offers the option of various approaches. In basic
terms, case studies are based on in-depth analysis of data collected on a specific case. The techniques
for collecting the data are both quantitative and qualitative.
Case studies provide considerable illumination in complex areas, and involve awareness of the
concrete application of programmes on the part of decision-makers who are often far-removed from
the reality of implementation.
The cost of this approach necessitates a restricted and relevant choice of cases to be studied. This
approach is less appropriate when it comes to measuring the extent of impact or inferring causality.
Further reading:
http://portals.wi.wur.nl/msp/
Section 4: Preparing the evaluation process and the
drafting of Terms of Reference
As we have seen in the other chapters, the most important task for an evaluation manager, in close collaboration with the Evaluation Steering Committee, is to draft the Terms of Reference of the evaluation process. This chapter will explain how appropriate Terms of Reference are drafted.
Evaluation process
Evaluation is a management tool that helps in the decision-making process. Evaluation proposes
objective judgments referring to explicit norms and criteria which help to improve the quality of the
EDF and STABEX projects and programmes in Sudan.
Evaluation is part of a broader iterative process. Managing this process well is necessary to obtain
good evaluation results.
The process must be conducted correctly if optimum use of the results is to be achieved: it is even
subject to an assessment, as is the quality of the final evaluation report.
A steering committee involved from the design stage through to the dissemination of the results must
therefore be the rule for EDF and STABEX evaluations.
Evaluation questions should be well chosen and well formulated. One should target the questions and narrow down the scope, in order to obtain questions that can be answered and whose answers are useful in the decision-making process.
The Mandate
Before this, it needs to be clear that a mandate has been given to the Monitoring and Evaluation Unit in the line ministries to evaluate a given project or programme.
Mandate example:
The evaluation process
An evaluation manager for a given project or programme is appointed to conduct the evaluation.
He/she sets up the reference group, writes the Terms of Reference and recruits the external evaluation
team, if needed.
The responsible person will decide who will participate in the three stages of the evaluation.
The evaluation exercise and hence also the Terms of Reference will comprise:
The context and the reason for the evaluation (obligations, legal context, contractual)
The target, who is concerned and who will use it
The available financial resources (budget, costs, time of staff)
The evaluation field
The time table
The key evaluation questions
Details on the sources of available information
Reports, quality assurance, validation and timing
Contractual, financial and administrative information during an external evaluation
Wider use of results and reports, dissemination and reporting
Requirements and issues to be included in the final drafting of the Terms of References
The evaluation manager will work closely with the Steering Committee to have clear answers on the
following issues:
The context of the evaluation: What is the legal and administrative context of the evaluation? Is it an obligation arising from a contractual or financing requirement? Or is it the socio-economic context, with more emphasis placed on impact and results? Most evaluations are recommended by the Court of Auditors' report or an MTR, or are simply required by the Financing Memoranda.
The scope of the evaluation
The evaluation may focus on the whole intervention or programme, or only on one aspect. Reference is made to all previous evaluation or assessment reports. The focus is also related to the reference group of beneficiaries and the timing.
Evaluation focuses on:
• Effectiveness: achievement of objectives
• Unexpected results/impact: positive, negative
• Relevance: of project to needs
• Sustainability: ability to sustain results after the project phases out
• Alternatives: other ways of addressing the problem
• Design: logical and consistent
• Causality: factors affecting performance
Key evaluation issues
Evaluation field
The evaluation field can be widened or deepened. A decision is made on the geographical area of the intervention (rural versus urban) and the reference period (e.g. a programme running from 2000 to 2005).
It should be remembered that the evaluation field can specify a more general evaluation, or a more subject-oriented and specific one (e.g. female beneficiaries from the Northern Province).
Information available
As discussed before, stock is taken of all primary and secondary information available. Action is taken to ensure that evaluators have access to all information, even the more sensitive items. An inventory of all available sources of information is made, and evaluators are authorised to access them.
Evaluation issues
The first issue is the choice between an internal and an external evaluation. When the evaluation is internal, it is performed by the internal evaluation experts. This has the advantage of a direct return: it can be used for training and organisational learning, and internal expertise and staff are mobilised. Internal evaluation is often used for ex ante evaluation. Outsourcing the evaluation to external evaluators can give the evaluation more objectivity and a stronger focus on accountability. External evaluation is more often used for ex post and final evaluations. Mobilising external evaluators can also help optimise scarce resources in the evaluation units of line ministries and Programme Coordination Units.
Deadlines
A precise time plan and work plan are made and need to be respected. All partners are informed about the upcoming evaluation project and asked to cooperate fully with the evaluation team.
Quality criteria
The evaluation quality standards are respected, and all services and reporting are assessed against a quality evaluation grid at the end of the project.
The evaluation manager, with the designated steering committee, will decide on the evaluation criteria and related evaluation questions before drafting the Terms of Reference.
Evaluation criteria
As far as a good logical framework is available and still valid, the evaluation manager may refine the issues to be studied into evaluation questions. The five evaluation criteria are relevance, efficiency, effectiveness, impact and sustainability. Some evaluation projects focus on only one or two criteria, or add further criteria such as utility, cost-effectiveness or ownership. Most evaluation projects start with the reconstruction of the intervention logic. The evaluator is expected to reconstruct the original intervention logic of the project or the programme. This is needed to gain insight into the validity of the apparent causal assumptions involved.
[Figure: example intervention logic for a DG ECHO food security intervention in Zimbabwe. Operational objectives (establish technical assistance in the field to coordinate the activities, evaluate needs and assess project proposals; distribute seed and ensure the monitoring of operations; observe and monitor the living conditions of the population) support specific objectives (prevent malnutrition and famine in the most vulnerable groups faced with the food security crisis in Zimbabwe; support emergency agricultural rehabilitation) which contribute to the overall objective (improve food security in rural communities); evaluation questions are attached to these causal links.]
Evaluation questions are derived from the evaluation criteria: relevance, effectiveness, efficiency, sustainability and utility. The first step of an evaluation is to reconstruct the logic of the intervention that is evaluated, by linking the objectives of the programme to expected impacts and by identifying evaluation questions.
The reconstruction of the intervention logic is not necessarily straightforward, since it requires defining the causalities between the concrete actions that are implemented and the expected results. The art of evaluation lies here: in the identification of the key themes (i.e. the causal links between certain intervention factors) and in asking the right questions. The evaluator then uses collection, analysis and synthesis techniques which isolate the explanations due to external factors. In short, the quality of decisions depends on the quality of the evaluation.
Specify the questions
The most difficult part is to ask the right questions and to formulate them well. Evaluation questions
should be specified on the basis of the evaluation criteria and the causalities found during the
reconstruction of the logic of the intervention.
Questions can be descriptive (What has happened?), causal (What is the relationship with the intervention?) or normative (Is the effect satisfactory?).
Choosing and precisely targeting the questions is the difficult part. When this exercise is done in a participative way, with members of the steering committee and staff from the evaluation unit, the initial list of key issues and related evaluation questions will be long. To establish a final list of evaluation questions, it helps to take a close look at the key themes to be evaluated and to identify external factors that could influence the outcome of the project or programme. External factors that cannot be influenced have no place among the priority questions of the evaluation process. This will help produce targeted, prioritised questions.
[Figure: stakeholders' interests, the political context and the decisions to be taken feed a list of themes and questions, which is narrowed down to the priority questions.]
Some samples of evaluation questions to be included in the ToR
Sustainability
What is the extent of ownership of the asset, in particular the transfer of the equipment to the relevant entities (in most cases the public utilities) at the end of the project?
What is the financial sustainability of the asset, in particular the cost-recovery system put in place and its efficiency? In this regard, the ownership of the entity in charge of operation and maintenance is crucial.
What is the sustainability of the installations in terms of built capacity: policies adopted, human capacity trained, new institutional structures created, private sector participation, etc.?
The technical part of the Terms of Reference should contain all the above-mentioned elements necessary to help the evaluators in their research and analysis, as well as the evaluation questions themselves. It shall therefore comprise:
- the evaluation questions (descriptive, causal and normative), which should be limited in
number, clearly formulated and well targeted.
It is very important to have well defined evaluation questions. The following issues can help in thinking about them:
- who are the stakeholders (of the evaluation and of the evaluated intervention);
- which decisions have to be taken;
- what is the political context;
- what is the available budget and the timetable;
- what is the probability that the question can be answered;
- what is the probability that answers to the questions are used in the decision making process;
- which type of evaluation (ex ante, intermediary, ex post);
- whether the evaluation is formative (serving management and internal learning) or summative
(aiming at accountability).
In summary:
- The Terms of Reference explain to an evaluator what is expected from the evaluation and on
which information and other supports he/she can count;
- Questions should be derived from the evaluation criteria and should be limited in number,
targeted, and prioritised within the Terms of Reference.
Outline for Terms of Reference for a Final review of EDF and STABEX
projects and programmes
Background
Context
Aims
Instruments of intervention
Funding
Actions launched to date
Previous evaluations, studies and reviews
The evaluation
Scope
Main evaluation questions
o Intervention logic
o Relevance and quality design
o Efficiency
o Effectiveness
o Impact
o Utility and sustainability
Location
Starting date
Period of execution
Work-plan and timetable
Budget
Requirements
Personnel
Reports
Inception report
Interim report
Draft final report
Technical annexes
Final report
- A Final Report upon receipt of comments from NAO, MoFT, SENIS PMU and ECD on the Draft
Final Report. The NAO, MoFT, SENIS PMU and ECD will have 20 days to provide additional
comments or approve the final report.
All reports will be delivered in five copies, written in English, and provided in editable electronic form as e-mail attachments; they must be usable with computer software compatible with that of the main clients and stakeholders. The final report, including all attachments, has to be provided on CD-ROM in editable form.
Guidelines
(Based on the Workshop Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported Programmes and projects in Sudan, 19-21 December 2010, Findings and Recommendations)
Organisational needs for a new Results-based Monitoring and evaluation system in Sudan
As was discussed in the workshop and reflected in the recommendations, the system is based on four
pillars:
Ownership
o Identify the mandates of the specific institutions and assess their structures, in order to build the relevant capacity and competencies at Federal and State levels
o Need to formulate a standard Log frame for institutions, with objectives and indicators
o Adopt conducive and agreed structures for M&E staff to share experiences from other countries
o Target the highest levels of decision-making to make them understand the importance of ownership, transparency and accountability, and the role of Results-Oriented M&E and Reporting in this, and not just in detecting mistakes.
Management
o Design a simple system that can be communicated easily to M&E Directorates and units, as agreed upon by development partners and the government.
o Work deliberately for harmonisation and coordination to have unified systems and standards.
o Agree on templates for M&E processes, to be distributed to improve coordination and synergies.
Maintenance
Appropriate budgets and dedicated human resources should be included in work plans and
overall budgets of programmes, but this requires political commitment at the highest decision-
making level.
Credibility
o Conducting M&E workshops and sharing experiences, as this particular workshop did
o Conducting training of state staff through the technical support of capacity-building
programmes
o Conducting seminars and workshops to increase awareness at all levels
Blueprint of institutional arrangements for a RB M&E system in Sudan (State level)
Blueprint of organisational arrangements for a RB M&E system in Sudan (Line ministry level)
Section 5: From Evaluation and Monitoring questions to
Indicators
System of Indicators
The use of indicators of progress, results and change is important for every systematic approach to
Results-Oriented Monitoring and Evaluation. In this chapter the use of appropriate indicators is
explained.
Management wants the monitoring and evaluation system to be designed in such a way that changes
can be observed and comparisons can be made. In this respect, indicators have to be determined.
Objectively Verifiable Indicators (OVIs) describe the project’s objectives in operationally measurable
terms by specifying Quantity, Quality, Time and Place (QQTP). As stated earlier, specifying OVIs
helps to check the feasibility of objectives and forms the basis of a project monitoring and evaluation
system. OVIs are formulated to answer the question “How would we know whether or not what has
been planned is actually happening or happened? How do we verify success?”
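To make the QQTP requirement concrete, here is a minimal sketch in Python (the field names and the example values are illustrative assumptions, not part of the EC methodology) that represents an OVI as a record and checks that all four dimensions are specified:

from dataclasses import dataclass

@dataclass
class ObjectivelyVerifiableIndicator:
    statement: str   # what is being measured
    quantity: str    # how much (the target value)
    quality: str     # to what standard
    time: str        # by when
    place: str       # where

    def is_fully_specified(self) -> bool:
        # An OVI is only usable when all QQTP dimensions are filled in.
        return all([self.quantity, self.quality, self.time, self.place])

ovi = ObjectivelyVerifiableIndicator(
    statement="Primary school enrolment of girls",
    quantity="enrolment rate raised from 40% to 60%",
    quality="in schools meeting the national curriculum standard",
    time="by the end of school year 2012/13",
    place="in the three target states",
)
assert ovi.is_fully_specified()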
When designing a monitoring or evaluation system, the indicators are determined in order to answer
the management’s questions (and certainly not the other way round). This means that management has
to look for variables that can actually measure the phenomena of interest to management and
decision-makers.
The information that is collected with the monitoring system can be divided into two main parts: the
information generated within the organisation - i.e. monitoring of action and of results, and the
information generated outside the organisation - i.e. the monitoring of reaction and context.
For the information that is gathered within the organisation, the phenomena of interest can often be
measured directly. The indicator takes the form of aggregated data. Such is the case with financial
figures, used materials, production levels etc. Measuring the target group reaction is altogether a
different matter. Direct measurement is often not possible because the target group is far too large.
Somehow an indirect indicator or an estimate has to be found, based on a sample survey, or by
measuring a related phenomenon. For example, if we want to measure the increase in income in a
certain area, the change in expenditure on certain items like housing, food and health may be used as
an indirect indicator.
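As an illustration of such an indirect indicator, the following sketch in Python (all survey figures are invented) estimates the change in income from the expenditure of a small sample of households:

# Sampled household expenditure, before and two years into the intervention.
baseline_expenditure = [120.0, 95.0, 150.0, 80.0, 110.0]
current_expenditure = [140.0, 100.0, 170.0, 95.0, 125.0]

def mean(values):
    return sum(values) / len(values)

# Relative change in mean expenditure, used as a proxy for income change.
change = (mean(current_expenditure) - mean(baseline_expenditure)) / mean(baseline_expenditure)
print(f"Estimated change in household spending (income proxy): {change:.1%}")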
Good indicators should be:
Sufficiently valid, i.e. have a causal relation with the management’s question (validity);
Measurable with an acceptable degree of accuracy (quantitative, objective);
Sensitive enough to changes in the phenomena of interest (sensitivity);
Simple (the data must be easy to collect) and as efficient as possible regarding the costs (cost-
effective).
Indirect indicators should be used with care. Management has to be aware of the limitations in scope
and validity of the indicators used in the system. Management is interested in getting answers to the
management’s questions, for which the indicators are a means. In the table below a classification of
some types of indicators is presented for three different types of development interventions, which
may provide an overview of the kind of information that is usually collected.
A classification of indicators according to intervention type

Monitoring of action:
- Supply/product delivery: resources used; no. of activities realised
- Service delivery: resources used; frequency of contact
- Infrastructure construction: resources used

Monitoring of result:
- Supply/product delivery: product quality; quantity distributed
- Service delivery: coverage of service network; respect for time scheme; quality of delivery
- Infrastructure construction: completion rate; timeliness; quality of the works

Monitoring of reaction (purpose level):
- Supply/product delivery (marketing): appreciation of product; utilisation of product; levels of production
- Service delivery (beneficiary contact): adoption rates; satisfaction level
- Infrastructure construction (use): rate of use; users’ satisfaction; maintenance; administration; contribution of users

Monitoring of context (overall objective level):
- Supply/product delivery: competitive position; market fluctuation; economic policy; inflation rate; labour market; political stability; etc.
- Service delivery: client environment; economic setting; institutional setting; climate; etc.
- Infrastructure construction: distribution of benefits; public-administration policy; institutional setting; economic setting; etc.
Once project management has clearly formulated monitoring questions and decided upon variables to
be used for measuring what management wants to know, it becomes possible to organise the flow of
information. The information flow runs from the collection of data up to the moment the document
has been written and sent to the persons who need it for decision-making. The flow is organised on
the basis of a list of information needs for monitoring and of what has to be done at each level of the
agreed LFM.
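To make this concrete, the sketch below (indicator names, collection frequencies and recipients are invented for the example) organises the monitoring information per level of the Log frame:

# Each Log frame level gets its own indicators, data collectors,
# collection frequency and recipients of the resulting reports.
information_flow = {
    "activities": {
        "indicators": ["resources used", "no. of activities realised"],
        "collected_by": "field staff", "frequency": "monthly",
        "reported_to": "project manager",
    },
    "results": {
        "indicators": ["quantity distributed", "quality of delivery"],
        "collected_by": "project M&E officer", "frequency": "quarterly",
        "reported_to": "steering committee",
    },
    "purpose": {
        "indicators": ["adoption rate", "user satisfaction (sample survey)"],
        "collected_by": "external survey team", "frequency": "yearly",
        "reported_to": "line ministry and donor",
    },
    "overall objective": {
        "indicators": ["inflation rate", "institutional setting (context)"],
        "collected_by": "national statistics office", "frequency": "yearly",
        "reported_to": "evaluators",
    },
}

for level, spec in information_flow.items():
    print(f"{level}: {', '.join(spec['indicators'])} -> {spec['reported_to']} ({spec['frequency']})")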
Indicators used to steer and control processes can be classified in several ways:
According to the comparability of the information: specific or generic indicators, key indicators;
According to the quantification method and the use of the information: monitoring and evaluation
indicators.
The system of indicators for EDF and STABEX projects and programmes
In general, indicators are associated with the different levels of a programme’s objectives.
[Diagram: Linking programming and evaluation — global objectives correspond to impacts, specific
objectives to results (effects), operational objectives to outputs, and inputs to implementation.
Source: Adapted from EC 1997]
For the programmes financed under EC Funds, a three-tiered structuring approach is normally
adopted:
The overall level of the programme with which the overall objective is associated. This level
comprises priorities which break down the overall objective into its main strategic
dimensions;
The measures level, which corresponds to the basic unit in programme management, with
each measure subject to a specific management tool;
The project level, which is the implementation unit of the programme.
Effectiveness: the measurement of indicators must not require too much energy or time, or too many
resources. The system must remain economically effective. In this field, “better” is the enemy of
“good”. The most refined indicators are often those for which data provision is the most difficult and
costly. The purpose of the indicator is not to perfectly describe a situation but rather to provide a
stakeholder/decision-maker with a relevant indication.
Traceability: it must be possible to measure the values of the retained indicators at regular intervals.
If the lapse of time between two measurements is too long, the evaluation will not be useful as a
decision-making tool, and will instead prove to be stilted and useless.
Sensitivity: it is essential for the programme’s influence on the objective to be reflected in the
indicator. The indicator must therefore prove to be sensitive to the intervention action. At regular
intervals, the efforts made will be reflected in positive or negative movements in indicator value.
[Diagram: qualities of a good indicator — traceability, sensitivity, effectiveness, clarity.]
In addition, a good indicator should be:
Relevant
Accepted (by the stakeholders, etc.)
Credible (for non-experts, easy to interpret)
Easy to monitor (low cost)
Robust against manipulation
The most difficult step is to link the evaluation question to the analysis of the chosen indicators of
progress, result and change. The last part of the evaluation process, after data analysis, is the
judgement that answers the different evaluation questions. That is why evaluation questions have to
be linked to indicators.
[Diagram: to evaluate, both monitoring and context indicators are necessary — monitoring indicators
cover inputs, outputs, results and specific impacts, while context indicators extend to overall impacts.
Example indicators: budget absorbed (efficiency); new businesses created.]
Consider the following evaluation question: To what extent has EC support improved the capacity of
the educational system to enrol pupils from disadvantaged groups without discrimination?
The judgement criterion (also called reasoned assessment criterion) specifies an aspect of the
evaluated intervention that will allow its merits or worth to be assessed in order to answer the
evaluation question. For instance:
Capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality.
The judgement criterion gives a clear indication of what is positive or negative, for example:
"enhancing the expected effects" is preferable to "taking potential effects into account".
The question is drafted in a non-technical way with wording that is easily understood by all, even if it
lacks precision.
The judgement criterion focuses the question on the most essential points for the judgement.
Yet the judgement criterion does not need to be totally precise. In the first example the term
"satisfactory quality" can be specified elsewhere (at the indicator stage).
It is often possible to define many judgement criteria for the same question, but this would complicate
the data collection and make the answer less clear.
In the example below, the question is treated with three judgement criteria (multicriteria approach):
"capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality"
"capacity of the primary school system to enrol pupils from the poorest urban areas with
satisfactory quality"
"Capacity of the primary school system to enrol girls ".
"capacity of the primary school system to enrol pupils from ethnic minority X with
satisfactory quality"
"primary school leavers from ethnic minority X pass their final year exam "
The first judgement criterion is faithful to the question, while the second is less so in so far as it
concerns the success in primary education, whereas the question concerns only the access to it. The
question may have been badly worded, in which case it may be amended if there is still time.
An indicator describes in detail the information required to answer the question according to the
judgement criterion chosen, for example:
Number of qualified and experienced teachers per 1000 children of primary-school age in areas
where ethnic minority X concentrates
In the examples below three indicators are applied to a judgement criterion ("capacity of the primary
school system to enrol pupils from ethnic minority X with satisfactory quality"):
"Number of qualified and experienced teachers per 1000 children of primary-school age in
areas where ethnic minority X concentrates"
"Number of pupils per teacher in areas where ethnic minority X concentrates"
"Level of quality of the premises (scale 1 to 3) assigned to primary education in areas where
ethnic minority X concentrates ".
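As an illustration, the first of these indicators could be computed as follows (the figures are invented for the example):

# Qualified and experienced teachers per 1000 children of primary-school
# age in the areas where ethnic minority X concentrates.
qualified_teachers = 420        # assumed survey figure for the target areas
children_primary_age = 156_000  # assumed census figure for the same areas

indicator = qualified_teachers / (children_primary_age / 1000)
print(f"Qualified teachers per 1000 children: {indicator:.1f}")  # 2.7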
Section 6: Results-Oriented Monitoring (ROM) for EC
external assistance (projects and programmes)
In this section the manual explains the EC ROM system, which has to be seen as an external,
outsourced monitoring system, not to be confused with internal Results-Oriented Monitoring. The
term ROM was coined by the EC services some 15 years ago; wherever the term ROM is used in this
section, it means the EC ROM system.
There is an intense discussion about the effectiveness of development cooperation. The growing
demand for development efficiency is largely based on the realisation that achieving good ‘products’
is not enough. Today the question is not ‘What have we done?’, but: ‘What have we achieved in terms
of results?’. Managing for results emerged as a ‘new’ approach which does more than attempt to focus
on ‘results’ and to align development cooperation mechanisms. Management for results also tries to
structure the somewhat confusing terms used by the international development community when
dealing with results.
The European Community presently commits billions of Euro per annum to external assistance
programmes. The primary objectives of the Community’s assistance programmes are to
reduce poverty, strengthen democracy, human rights and gender equality, support integration
into the global economy, maintain peace and stability and facilitate socially and
environmentally sustainable economic development.
Within the framework of these wider values and in line with the Millennium Development Goals, the
Community has identified six areas where it believes it is able to add genuine value and where its
interventions complement and reinforce the efforts of other bilateral and multilateral donors: the link
between trade and development; regional integration and co-operation; support for macro-economic
policies and equitable access to social services; transport; food security and rural development; and
institutional capacity building related particularly to good governance and the rule of law.
Evolution of EC ROM
The EC has been opting for a long time for a results-oriented external co-operation, notably including
results/progress monitoring, project performance enhancement and quality assurance of the
operations. ROM covers a well defined part of this strategic results management, with systematic
assessments during the project life cycle as well as ex post.
ROM supports the ambitious efforts of the Paris Declaration to improve aid practices and
effectiveness, designed to help developing countries achieve the Millennium Development Goals
(MDGs) by 2015. The European Consensus has made these commitments more concrete for the EC
and all of the EU Member States. ROM criteria and sub-criteria already address the key thematic
issues (human rights, gender equality, democracy, good governance, children’s rights, indigenous
people, conflict prevention, environmental sustainability and HIV/AIDS).
55
Simultaneously, the EC ROM system has been expanded in the length of the project cycle it covers:
external monitoring now takes place not only during the project life cycle but also ex post.
What is EC ROM?
ROM is the regular review (second opinion) of how a project is progressing in terms of resources use,
implementation and delivery of results in order to help the project management achieve final
objectives.
The major current monitoring systems focus on activities and outputs (e.g. the training of primary
school teachers); ROM additionally focuses on results, including outcomes (e.g. the number of
children taught by these teachers and the quality of the teaching provided) and impact (e.g. the
increased number of children in school, especially girls). This external monitoring also differs from
internal monitoring done by those directly involved in a project or programme: ROM clearly
separates the management and the monitoring function, and is carried out by independent experts
who can therefore take a more objective view of the project’s performance. For the past ten years,
ROM missions have been contracted on a regional basis every three years. More than 10,000
monitoring reports (on 6,000 projects since 2000) are in the database, and extensive qualitative and
sectoral analyses have been made. The diagram below shows the evolution of performance by sector.
[Bar chart: evolution of performance by sector, with shares of projects rated very good (a), good (b),
problems (c), serious deficiencies (d) and N/A.]
Conventional EC ROM of projects/programmes
Conventional ROM is based on the methodology as described in the Monitoring Handbook and
applies to all projects, programmes, regional programmes and thematic budget lines, whether
managed by the ECHQ or EUDs. As the methodology is well established, it will not be described at
length here.
The Log frame is the foundational tool for monitoring standard projects/programmes. However, the
use of the Log frame as an effective tool for EC ROM does depend on a number of factors, including:
The existence of a Log frame: Most of the projects/programmes are now designed with the
use of the Log frame approach, and this situation has dramatically improved in the past years.
The quality of the Log frame: How, and by whom, was the Log frame constructed? Were the
primary stakeholders involved? Do the risks and assumptions adequately address their interests?
Do the indicators fulfil the accepted standards for identifying Objectively Verifiable Indicators
(OVIs) in terms of quantity, quality and time factors?
The continuing relevance of the Log frame: Has the Log frame been updated? Does it
reflect the changes that may have occurred since implementation commenced? There is a need
for the EC to address the continuing hesitation to update the originally designed Log frame of
the Financing Agreement in subsequent working documents, such as Programme Estimates.
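These factors can be treated as a checklist. The sketch below (the structure and check names are assumptions, not an EC format) expresses them as automated warnings on a Log frame record:

from dataclasses import dataclass
from datetime import date

@dataclass
class LogFrame:
    has_ovis_with_qqt: bool      # indicators specify quantity, quality, time
    stakeholders_involved: bool  # primary stakeholders took part in the design
    risks_cover_interests: bool  # risks/assumptions address their interests
    last_updated: date           # when the matrix was last revised

    def quality_warnings(self, today: date) -> list:
        warnings = []
        if not self.has_ovis_with_qqt:
            warnings.append("Indicators do not meet the OVI standards (QQT).")
        if not self.stakeholders_involved:
            warnings.append("Primary stakeholders were not involved in the design.")
        if not self.risks_cover_interests:
            warnings.append("Risks/assumptions do not address stakeholder interests.")
        if (today - self.last_updated).days > 365:
            warnings.append("Log frame has not been updated for over a year.")
        return warnings

lf = LogFrame(True, True, False, date(2009, 3, 1))
for warning in lf.quality_warnings(date(2010, 12, 21)):
    print(warning)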
The on-line response sheet
Responsibility for reporting and follow up
EC X X X
DFID X X
US-DOS X X
USAID X X X X
WB X X X
SIDA X
BTC X X
GTZ X X X X
AECID X
LuxD X X
AFD X
NL-MFA X X X
Source: Particip ROM Coordination reports
• Nonexistence/low quality of indicators; monitoring less precise and objective (EC, DFID UK,
LD, NMFA).
Guidelines
(Based on the Workshop “Reviewing M&E Systems and Reporting Practices of EDF/STABEX supported
Programmes and Projects in Sudan”, 19-21 December 2010, Findings and Recommendations)
Difference between implementation monitoring and results monitoring?
The manual is concerned with results monitoring and evaluation and its reporting. The workshop
results showed that most line ministries are concerned with implementation monitoring and its
status reporting to the hierarchy of Project Steering Committees. Some study reports on the current
status of foreign aid disbursed and delivered to Sudan for Recovery and Development (2005-2009)
are regularly prepared to show the financial monitoring of programmes and projects.
Implementation monitoring of projects and programmes focuses on the delivery of inputs and
activities and on financial progress. Results monitoring, by contrast, requires the following elements
for a range of interventions and strategies (see the sketch after this list):
Baseline data to describe the problem or situation before the intervention (Section 3)
Indicators of outcomes (Section 4)
Data collection on outputs, and on how and whether they contribute toward the achievement of
outcomes
Perceptions of change among stakeholders (Section 3)
Systematic reporting with qualitative and quantitative information on the progress toward
outcomes
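The sketch below (all figures invented) shows how baseline, target and current values of an outcome indicator combine into a simple progress measure:

baseline = 40.0  # % enrolment before the intervention (baseline data)
target = 60.0    # % enrolment expected at the end of the project
current = 47.0   # % enrolment measured this year (data collection)

# Share of the distance from baseline to target covered so far.
progress = (current - baseline) / (target - baseline)
print(f"Progress toward the outcome target: {progress:.0%}")  # 35%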
The reports have been described for the ROM and the evaluation; they are standard EC formats that
can easily be adjusted to Sudan’s administrative requirements. For internal monitoring, the Internal
Monitoring Sheets as used by the EUD apply (see sample).
The following reports are used for implementation monitoring and reporting to the hierarchy and
Steering Committees. There is no standard format available across the Sudan administration, and the
participants of the workshop recommended standardising formats in order to harmonise M&E and
reporting practices.
All reports have been simplified, focusing on outputs and outcomes: more results-based than
activity-focused. The reports are less narrative and more quantitative in approach.
There are quarterly and annual reports addressed to the Steering Committee and donors. Later,
the quarterly reports became six-monthly reports.
Sometimes status reports or overall reports are requested by the Steering Committee.
Quarterly progress reports or six-monthly reports are prepared by the PSU. They give details
about project implementation and achievements, and also inform about project performance
and the level of satisfaction of project stakeholders.
Annual reports are prepared by the PSU for the Steering Committee and give information on
the financial monitoring of the disbursement of funds and their use; a review is undertaken
yearly to assess progress made with respect to the annual work plan.
A sample project fiche:
Status: Ongoing
Project Site:
Work Plan: Annual work plan and budget for MTI 2002; Annual work plan and budget for MoA 2002
Progress Reports: Minutes of the RED PAC meeting held on 26 June 2002
Key Documents:
Logical Framework: Log frame
RED Marketing – Consolidated Progress Report and Future Action
RED Marketing – Japan Market Report for Handicraft Items & Art work
RED Marketing – Research report for Bhutan Crafts in the UK
Media reports:
Section 7: Country Programme Evaluation
Bilateral and Multilateral Country Programmes
Country Programme Evaluation has the same approach as other types of evaluations. The systematic
approach, methods and tools, judgement criteria and evaluation questions are used in the same way
as described in the other chapters.
The difference lies in the scope and the use of rather strategic objectives. The systematic impact of
synergies between multilateral and bilateral cooperation is also important. Country programmes
designed and implemented by different development agencies in a country can have synergies but
also adverse effects.
The key comparative advantages of the multilateral institutions include their unique cross country
exposure, their close relationship with government institutions in partner countries and their role in
the international harmonisation process. Strengths of the bilateral cooperation include strong
relationships with civil society and private sector actors and flexibility to introduce innovative
approaches. Synergies between these two forms of cooperation contribute to a better achievement of
development objectives through mutual learning and exchange of experiences. Humanitarian aid
relies heavily on bilateral and multilateral activities and their synergies, and is therefore included.
A recent Country Programme Evaluation was performed by DFID for Sudan in March 2010. (see
http://www.dfid.gov.uk/Media-Room/Publications/Evaluation-studies/ )
[Diagram: relations between the government of country X, Swiss cooperation and multilateral
institutions — shareholding, provision of financial resources, etc.]
Recent international developments, including the Monterrey consensus and the Paris Declaration,
emphasise the partner government leadership role as well as harmonisation and collaboration among
development partners as a condition for increased aid effectiveness. Harmonisation and coordination
at country level are characterised by interaction between a multitude of actors, including other
bilateral donors, civil society and private sector actors, with the government institutions leading the
process. Within these multiple interactions, the relation between multilateral and bilateral cooperation
has unique characteristics that require particular attention.
Allocations by type and sector (amounts in M €):
- Rehabilitation and rural development: 52
- Health: 25
- Good governance; support to the National Authorising Officer (NAO); technical facility of
cooperation; support to non-state actors; support to the Economic Partnership Agreement
- Envelope ‘B’ (funds to cover emergencies and unforeseen needs): 24.1
- Total: 212.1
Source: Republic of Burundi – EU: Country strategy and indicative national programme, 2008-2013.
The starting point of the evaluation should be the analysis of the underlying logic of the EC's
development cooperation. A multi-level analysis will be needed, as EC’s objectives have evolved
over time, responding to the changing environment and needs, and also because the logic is set out in
a variety of documents, thus needs to be collated into a coherent framework. In its simplest form, a
logic model describes the theory and design of an intervention, how the intervention's activities and
outputs derive from objectives and influence stakeholders and/or beneficiaries leading to the
achievement of the intended outcomes in the short-, medium- and longer-term. In the logic model the
key links from the activity to the long-term objectives are set out, illustrating a "results chain", thus
identifying key relationships and enabling the identification of performance indicators along the
chain.
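As an illustration of such a results chain, the following sketch (the links and labels are invented, not an actual EC logic model) walks a simple chain from inputs to impact, making explicit the links to which performance indicators can be attached:

results_chain = [
    ("inputs", "EDF funds, technical assistance"),
    ("outputs", "rural roads rehabilitated"),
    ("outcomes", "farmers reach markets faster"),
    ("impact", "rural incomes rise, poverty falls"),
]

# Walking the chain pairwise makes each key link explicit.
for (lower, description), (higher, _) in zip(results_chain, results_chain[1:]):
    print(f"{lower} ({description}) -> contributes to -> {higher}")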
The following diagram represents the connections between the hierarchy of objectives and the chain
of results.
[Diagram: Intervention logic, 9th EDF 2003-2007, faithful effects diagram]
The objective of the evaluation is to assess the performance of the EC’s past and current assistance to
Burundi, to assess the achievements of the EC’s development cooperation. The evaluation will
identify conclusions and key lessons, which can be drawn from past operations. The evaluation will
provide the EC’s policy makers and task managers with recommendations useful for the
implementation of the ongoing CSP and the Annual Programmes, as well as for future programming.
The evaluation examines the relevance and coherence between programming and implementation of
the EC National Indicative Programmes (NIP) for 2001-2007 and 2008-2013. The evaluation includes
a consistency check of the NIP against Burundi’s Poverty Reduction Strategy.
Evaluation questions (No. | Topic | Question | Approach):

2. Combating poverty — To what extent and in which way has the EC contribution helped Burundi to
make progress towards poverty reduction? Approach: According to the CSP 2003-2007, between 1990
and 2001 the poverty rate in Burundi increased from 40% to 69%. One of the overarching aims of the
EC’s development cooperation is to fight against poverty. Funds were allocated to a broad range of
sectors, and various instruments were used to support Burundian people in need.

3. Coordination with donors — To what extent does the EC coordinate with other donors to ensure
better delivery of services? Approach: The EC is one of the largest donors in Burundi, while several
member states provide bilateral support to the country. Coordination and harmonisation of the aid can
enhance efficiency and promote synergies among donor actions.

4. Relevance of EC support to post-crisis rehabilitation — Was it relevant that the EC supported
post-crisis rehabilitation? Approach: Burundi was seriously hit by the civil war, and there is still much
to be done in the field of rehabilitation. Programmes related to rural development, one of the focal
sectors, targeted the improvement of the social and physical conditions of the rural areas.

7. General Budget Support — Is GBS a more effective and efficient aid instrument than SBS
programme support? Approach: General Budget Support has an increasing share in the 10th EDF
compared to the previous EDF cycles. Main conclusions and lessons about the use, advantages and
disadvantages of GBS can be drawn.

9. Healthcare — How and in what way has the EC assistance contributed to tackling the health
challenges of the country? Approach: Burundi struggles to improve its health system, while the
population is growing rapidly and state resources are very limited. Health became a focal sector in the
10th EDF. Various health sub-sectors can be analysed under this question.

10. Macro-economic support — What are the main outcomes of the EC assistance at a macroeconomic
level? Approach: It can be examined whether EC aid has any effect on debt management, inflation,
balance of payments, etc.
From questions to judgement criteria and indicators (Topic | Question | Judgement criterion | Indicators):

Coordination with donors — To what extent does the EC coordinate with other donors to ensure better
delivery of services? Judgement criterion: the EC has added value in policy dialogue in areas that have
a strong rationale in terms of poverty reduction. Indicators: measure of how much the specific
coordination instances were connected to the paths towards poverty reduction.

Coherence of development strategy — To what extent did the design of the EC aid strategy take due
account of the Burundian strategic priorities? Judgement criterion: the objectives of the CSP 2003-2007
and 2008-2013 are coherent with Burundian strategic priorities. Indicators: degree of alignment of CSP
objectives with the needs and priorities of Burundi as stated.
A sample ToR for EC country programme evaluations (EC format for reporting) is available at
ec.europa.eu/europeaid/evaluation/methodology/
Glossary of Monitoring and Evaluation Terms
Accountability
Obligation for a manager of resources to demonstrate that work has been conducted in compliance
with the established plans, budgets, rules and standards and to report fairly and accurately on
performance results. It includes responsibility for the justification of expenditures, decisions or results
of the discharge of authority and official duties, including duties delegated to a subordinate unit or
individual. The effective discharge of accountability is predicated on clearly defined responsibilities,
performance expectations, limits of authority, and clarity on how the exercise of responsibility and
authority will be monitored and assessed. One of the main functions of monitoring and evaluation is
to contribute to strengthening accountability by providing objective information on the veracity of a
manager’s reporting.
Activity
Appraisal
An overall assessment of the relevance, feasibility and potential sustainability of a project or other
operational exercise. It is an assessment of the overall soundness of the project and a justification for
its implementation. Criteria commonly include relevance and sustainability. An appraisal may also
relate to the examination of opinions as part of the process for selecting which project to fund. The
purpose of appraisal is to enable decision-makers to decide whether the activity is in accordance with
mandates and represents an appropriate use of resources.
Assumption
Hypothesis about risks, influences, external factors or conditions that could affect the progress or
success of a project or a programme. Assumptions highlight external factors, which are important for
the success of project or programme, but are largely or completely beyond the control of management.
Audit
An exercise to determine if there is an adequate and effective system of internal controls for providing
reasonable assurance with respect to:
Integrity of financial and operational information; compliance with regulations, rules, policies
and procedures in all operations; and safeguarding of assets;
The economic and efficient use of resources in operations and identifying opportunities for
improvement in a dynamic and changing environment; and
Baseline
Data that describe the situation to be addressed by a project, programme or subprogramme and that
serve as the starting point for measuring performance. A baseline study would be the analysis
describing the situation prior to the commencement of the project or programme or the situation
following initial commencement of the project or programme to serve as a basis of comparison and
progress for future analyses. It is used to determine the accomplishments/results and serves as an
important reference for evaluation.
Benchmark
Reference point or standard against which performance or achievement can be assessed. A benchmark
often refers to an intermediate target to measure progress within a given period as well as to the
performance of other comparable organisational entities.
Beneficiary
The individual, group or organisation, whether targeted or not, that benefits, directly or indirectly,
from the implementation of a project, programme or output.
Best practice
Planning, organisation, managerial and/or operational practices that have proven successful in
particular circumstances and which can have both specific and/or universal applicability. Best practices
are used to demonstrate what works most effectively and to accumulate and apply knowledge about
how and why they work in different situations and contexts.
Bias
Anything that produces systematic error in an evaluation finding. Bias may result in over- or under-
estimating the object of evaluation or assessment.
Case study
The examination of the characteristics of a single case (such as an individual, an event, a programme
or some other discrete entity). A sample of multiple cases can also be examined to look for
commonalities and to identify patterns. Case studies are often used to gather qualitative information in
support of findings obtained through quantitative methods.
Causal relationship
Conclusions
Content analysis
Cost-benefit analysis
A specialised analysis which converts all costs and benefits to common monetary terms and then
assesses the ratio of results to inputs against other alternatives or against some established criteria of
cost-benefit performance. It often involves the comparison of investment and operating costs with the
direct and indirect benefits generated by the investment in a project or programme.
Cost-effectiveness
Comparison of the relative costs of achieving a given result or output by different means. It focuses
on the relation between the costs (inputs) and results produced by a project or programme. A
project/programme is more cost effective when it achieves its results at the lowest possible cost
compared with alternative projects with the same intended results.
Criteria
The standards used to determine whether or not a project or programme meets expectations.
Data collection method
The mode of collection to be used when gathering information and data on a given indicator of
achievement or evaluation. Collection methods include the review of records, surveys, interviews, or
content analysis.
Data source
The origin of the data or information collected. Data sources may include informal and official
records, individuals, documents, etc.
Description of results
Succinct statement based on the data collected on the performance measures at the indicator of
achievement level. It interprets and articulates such data in results-oriented language.
Effect
Intended or unintended change caused directly or indirectly by the delivery of an output, project or
programme.
Effectiveness
The extent to which a project or programme attains its objectives, expected accomplishments and
delivers planned outputs.
Efficiency
A measure of how well inputs (funds, expertise, time, etc.) are converted into outputs.
Evaluation
A process that seeks to determine as systematically and objectively as possible the relevance,
effectiveness and impact of an ongoing or completed project, programme or policy in the light of its
objectives and accomplishments. It encompasses their design, implementation and results, with a
view to providing information that is credible and useful, enabling the incorporation of lessons
learned into both executive and legislative decision-making processes. Evaluation is often undertaken
selectively to answer specific questions to guide decision-makers and/or programme managers, and to
provide information on whether underlying theories and assumptions used in programme development
were valid, what worked and what did not work and why.
Evaluation scope
A framework that establishes the focus of an evaluation in terms of questions to address, the issues to
be covered, and defines what will be analysed and what will not be analysed. The scope defines the
parameters of the evaluation and is presented in the “Terms of Reference”.
Evaluation team
Group of specialists responsible for the planning and conduct of an evaluation. An evaluation team
produces the evaluation report.
Evaluator
An individual involved in all stages of the evaluation process, from defining the Terms of Reference
and collecting and analysing data to developing findings and making recommendations. The evaluator
may also be involved in taking corrective action or making improvements.
Evidence
Ex post evaluation
An assessment of the relevance, effectiveness and impact of a project or programme that is carried out
some time after its completion. It may be undertaken directly after or long after completion. The
intention is to identify the factors of success or failure, to assess the sustainability of results and
impacts, and to draw conclusions that may inform other projects and programmes.
External evaluation
Evaluation conducted by entities and/or individuals free from the control or influence of those
responsible for the design and implementation of the project or programme.
Focus group
Formative evaluation
Goal
Impact
The overall effect of accomplishing specific results. In some situations it comprises changes, whether
planned or unplanned, positive or negative, direct or indirect, primary and secondary, that a project or
programme helped to bring about. In others, it could also connote the maintenance of a current
condition, assuming that that condition is favourable. Impact is the longer-term or ultimate effect
attributable to a project or programme, in contrast with an expected accomplishment and output,
which are geared to the biennial timeframe.
Indicator
A measure, preferably numerical, of a variable that provides a reasonably simple and reliable basis for
assessing achievement, change or performance. A unit of information measured over time that can
help show changes in a specific condition.
Indicator of achievement
Used to measure the extent to which expected accomplishments have been achieved. Indicators
correspond to the expected accomplishment for which they are used to measure performance. One
expected accomplishment can have multiple indicators.
Indirect effect
Internal evaluation
Evaluation that is managed and/or conducted by entities within the programmes being evaluated.
There are two types of internal evaluation, namely:
(2) Discretionary Internal Evaluation (Self-evaluation)
Input
Personnel, finance, equipment, knowledge, information and other resources necessary for producing
the planned outputs and achieving expected accomplishments.
Lesson learned
Generalisation derived from evaluation experiences with projects, programmes or policies that is
applicable to a generic situation rather than to a specific circumstance and has the potential to improve
future actions. A lesson learned summarises knowledge at a point in time, while learning is an
ongoing process.
Logical framework
Management tool (also known as a Log frame) used to identify strategic elements of a project or
programme (objective, expected accomplishments, indicators of achievement, outputs and inputs) and
their causal relationships, as well as the assumptions and external factors that may influence success
and failure. It facilitates planning, implementation, monitoring and evaluation of a project or
programme.
Methodology
A set of analytical methods and techniques appropriate for evaluation of the particular activity. It
could also be aimed at collecting the best possible evidence needed to answer the evaluation issues
and analytic questions.
Monitoring
A periodic assessment by programme managers of the progress in achieving the expected
accomplishments and delivering the final outputs, in comparison with the commitments set out in the
programme budget.
Monitoring and evaluation together provide the knowledge required for effective project and
programme management and for reporting and accountability responsibilities.
Objective
Description of an overall desired achievement involving a process of change and aimed at meeting
certain needs of identified end-users within a given period of time. A good objective meets the criteria
of being impact oriented, measurable, time limited, specific and practical. The objective is set at the
next higher level than the expected accomplishments.
Outcome
Output
Participatory evaluation
A broad term for the involvement of various stakeholders in evaluation. It involves the collective
examination and assessment of a project or subprogramme by the stakeholders (programme managers
and staff included) and solicits views of end-users and beneficiaries. Participatory evaluations involve
reflective, action-oriented assessments of performance and accomplishment which yield lessons
learned and instructive practices.
Performance
The degree to which a project or programme delivers results in accordance with stated objectives, in
a timely and effective manner, as assessed by specific criteria and standards.
Performance assessment
Performance measurement
A system for the collection and interpretation of, and reporting on, performance data, for the purpose
of objectively measuring how well projects or programmes contribute to the achievement of expected
accomplishments and objectives and deliver outputs.
Performance monitoring
A continuous process of collecting and analysing data to compare how well a project, programme or
policy is being implemented against expected results.
Project
Planned activity or a set of planned, interrelated activities designed to achieve certain specific
objectives within a given budget, organisational structure and specified time period.
Project cycle
A tool for understanding the tasks and management functions to be performed in the course of a
project or programme’s lifetime. This commonly includes the stages of identification, preparation,
appraisal, implementation/supervision, monitoring, evaluation, completion and lesson learning.
Project evaluation
Evaluation of an individual project designed to achieve specific objectives within specified resources,
in an adopted time span and following an established plan of action, often within the framework of a
broader programme. The basis of evaluation should be built into the project document.
Project document
A formal document covering a project, which sets out, inter alia, the needs, results, outputs, activities,
work plan, budget, pertinent background, supporting data and any special arrangements applicable to
the execution of the project in question. Once a project document is approved by signature, the project
represents a commitment of resources.
Qualitative data
Information that is not easily captured in numerical form (although qualitative data can be quantified).
Qualitative data typically consist of words and normally describe people's opinions, knowledge,
attitudes or behaviours.
Quantitative data
Information measured or measurable by, or concerned with, quantity and expressed in numerical
form. Quantitative data typically consists of numbers.
Recommendation
Proposal for action to be taken to enhance the design, allocation of resources, effectiveness, quality, or
efficiency of a project or a programme. Recommendations should be substantiated by evaluation
findings, linked to conclusions and include the parties responsible for implementing the recommended
actions.
Relevance
The extent to which an activity, expected accomplishment or strategy is pertinent or significant for
achieving the related objective and the extent to which the objective is significant to the problem
addressed.
Results-based management
A management strategy by which the manager ensures that processes, outputs and services
contribute to the achievement of clearly stated expected accomplishments and objectives. It is focused
on achieving results and improving performance, integrating lessons learned into management
decisions and monitoring of and reporting on performance.
Self-monitoring
Ongoing assessment by the head of a department or office of the progress in achieving the expected
accomplishments and delivery of outputs.
Stakeholder
Agencies, organisations, groups or individuals who have a direct or indirect role and interest in the
objectives and implementation of a project or programme and its evaluation. In participatory
evaluation, stakeholders assume an increased role in the evaluation process as question-makers,
evaluation planners, data gatherers and problem solvers.
Summative evaluation
A study conducted by independent evaluators at the end of a project or programme to measure the
extent to which anticipated results were achieved; ascertain the effectiveness and relevance of approaches
and strategies; indicate early signs of impact; and recommend what interventions to promote or
abandon. Summative or Terminal evaluation is intended to provide information about the merit and
worth of the project or programme.
Sustainability
The extent to which the impact of the project or programme will last after its termination; the
probability of continued long-term benefits.
Target
A specified objective that indicates the number, timing and location of what is to be achieved.
Target group
The main beneficiaries of a project or programme that are expected to gain from the results of that
project or programme. They are closely related to its impact and relevance.
Terms of Reference
Written document presenting the purpose and scope of the evaluation or inspection, the methods to be
used, issues to be addressed, the resources, schedule, and reporting requirements.
Triangulation
Validation
The process of cross-checking to ensure that the data obtained from one monitoring and evaluation
method are confirmed by the data obtained from a different method.
Work plan
A detailed document stating outputs to be delivered and activities to be carried out in a given time
period, how the activities will be carried out, and what progress towards expected accomplishments
will be achieved. It contains timeframes and responsibilities and is used as a monitoring and
accountability tool to ensure the effective implementation of the programme. The work plan is
designed according to the logical framework.
Bibliography:
Slides and other training materials in this manual have been taken from training manuals on
Monitoring and Evaluation prepared for the EC services by Demos International, Particip
GmbH and PricewaterhouseCoopers Advisory Belgium.
“Guidelines for systems of monitoring and evaluation for the Human Resources Initiative
EQUAL in the period 2000-2006”, DG Employment and Social Affairs, July 2000
Sites of interest within the Commission