
Software Project Management

Lecture Seven
Contents

• Project Estimation
• Estimating by Analogy
• Albrecht Function Point Analysis
• COCOMO Cost Estimation Model
Project Estimation

• A project is estimated to define the budget and to refine the product so that it can be realized within that budget.
• This is usually the responsibility of the project manager.
• A successful project is one in which the system is delivered 'on time and within budget and with the required quality', which implies that targets are set and the project leader tries to meet them.
• Realistic estimates are therefore crucial to the project leader.
• Estimating the effort required to implement software is notoriously difficult.
• Some of the difficulties of estimating are inherent in the very nature of software, especially its complexity and invisibility.
• In addition, the intensely human activities that make up system development cannot be treated in a purely mechanistic way.
Other difficulties include:

• Novel applications of the software: when the system to be created is similar to one constructed previously, but for a different customer or in a different location, the estimate can be based on that previous experience; with a genuinely novel application there is no comparable experience to draw on.
• Changing technologies: different programmers may develop projects in different programming environments, e.g. COBOL or Oracle.
• Lack of homogeneity of project experience: effective estimating should be based on information about how past projects have performed, yet many organizations do not have such past data available to their staff.
• Problems Related to Size Estimation
– Nature of software
– Novel application of software
– Fast changing technology
– Lack of homogeneity of project experience
– Subjective nature of estimation
– Political implications within the organization: different groups in an organization have different objectives.
Problems with over- and under-estimates
• Over-estimates might cause the project to take longer than it otherwise would.
– Parkinson's Law: 'Work expands to fill the time available', which implies that, given an easy target, staff will work less hard.
– Brooks's Law: the effort required to implement a project goes up disproportionately with the number of staff assigned to it.
• With an under-estimate, the project might not be completed on time or within cost. The danger with under-estimates is the effect on quality.
• There are various causes of poor and inaccurate estimation, listed below.
– Imprecise and drifting requirements.
– New software projects are nearly always different from the last.
– Software practitioners don't collect enough information about past projects.
– Estimates are forced to match the resources available.
– Good effort estimates are needed before a project starts, but at that point we have only partial information about the activities required and the resources needed.
– After the project is completed, exact information about the activities and effort required is available, but by then it is no longer of interest.
• From time to time, you need to revise your estimates based on the current status of the project.
• The basis of software estimating
– The need for historical data: nearly all estimating methods need information about how projects have been implemented in the past.
– Measure of work: the time taken to write a program may vary according to the competence or experience of the programmer. Implementation times might also vary because of environmental factors such as the software tools available.
– Complexity: two programs with the same KLOC will not necessarily take the same time to write, even if produced by the same developer in the same environment. Because of this, SLOC-based estimates have to be modified to take complexity into account.
• Whenever you are estimating a project, remember these estimation techniques:
– Expert judgement
• Ask knowledgeable experts.
– Estimation by analogy
• Use the data of a similar, completed project.
– Parkinson
• Identifies the staff effort available to do the project and uses that as the 'estimate'.
– Pricing to win
• Use a price that is low enough to win the contract.
– Top-down
• An overall estimate is determined and then broken down into each component task.
– Bottom-up
• The estimates for each component task are aggregated to form the overall estimate.
– Algorithmic models
• Estimation is based on the characteristics of the product and the development environment.
Estimating by Analogy

• The use of analogy is also called case-based reasoning.


• Analogy Costing Method
– This involves extrapolating actual data from previously completed projects of a similar nature in order to make estimates for the proposed project.
– Estimating by analogy can be done either at the system level or the component level.
• Expert Judgment Method
– Expert judgment is a method of estimation, which is used to complement other
estimation methods.
– It involves using the experience and understanding of an expert on a proposed
project.
– It is often used with the analogy method to aid in the identification of differences
between past projects and the proposed project and to estimate the effects of these
differences.
• Algorithmic (Parametric) Method
– Algorithmic methods involve the use of equations based on historical data applied to
measures such as size (LOC) and functionality (FP) to yield software estimates.
– These methods are repeatable and customizable; however, they can be inconsistent in the absence of organization-specific historical data.
Algorithmic Cost Estimation Models
• There are a number of algorithmic models for estimating effort (usually in person-months).
• These are based on regression analysis of data collected from past software projects. The models have the general form:
E = A + B * (ev)^C
• Where E is the effort in person-months, A, B and C are empirically derived constants, and ev is the estimation variable (either LOC or FP).
• A perceived problem with these models is that they are only suited to certain classes of software.
• Some of the more common of these models are the basic COCOMO 81 modes, for which A = 0 (a small worked sketch follows this list):
– Organic: E = 2.4 * (KLOC)^1.05
– Semi-detached: E = 3.0 * (KLOC)^1.12
– Embedded: E = 3.6 * (KLOC)^1.20
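
To make the general form concrete, here is a minimal Python sketch (not part of the original lecture). It evaluates E = A + B * (ev)^C, assuming the published basic COCOMO 81 coefficients as example values for A, B and C; the names COEFFICIENTS and estimate_effort are chosen here purely for illustration.

# Minimal sketch of an algorithmic effort model of the form E = A + B * (ev)^C.
# The coefficient table uses the published basic COCOMO 81 modes as example
# values (A = 0 in that model); an organization would normally calibrate its own.
COEFFICIENTS = {
    # mode: (A, B, C)
    "organic": (0.0, 2.4, 1.05),
    "semi-detached": (0.0, 3.0, 1.12),
    "embedded": (0.0, 3.6, 1.20),
}

def estimate_effort(kloc, mode="organic"):
    """Return the estimated effort in person-months for a size given in KLOC."""
    a, b, c = COEFFICIENTS[mode]
    return a + b * (kloc ** c)

# Example: a 32 KLOC organic-mode project.
print(round(estimate_effort(32, "organic"), 1))   # about 91.3 person-months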
• Estimation by Analogy might be a good approach where you have
information about some previous projects but not enough to
draw generalized conclusions about what variables might make
good size parameters.
• The estimator seeks out projects that have been completed
(source cases) and that have similar characteristics to the new
project (the target case).
• The effort that has been recorded for the matching source case
can then be used as a base estimate for the target.
• The estimator should then try to identify any differences between
the target and the source and make adjustments to the base
estimate for the new project
• A problem here is how you actually identify the similarities and differences between the different systems.
• One software application that has been developed to do this is
ANGEL.
• This identifies the source case that is nearest the target by measuring
the Euclidean distance between cases.
• The source case that is at the shortest Euclidean distance from the
target is deemed to be the closest match.
• The Euclidean distance is calculated as:
distance = square root of ((target_parameter_1 - source_parameter_1)^2 + ... + (target_parameter_n - source_parameter_n)^2)
Example 1
• Say that the cases are being matched on the basis of two
parameters, the number of inputs to and the number of outputs
from the system to be built. The new project is known to require 7
inputs and 15 outputs. One of the past cases, Project A, has 8 inputs
and 17 outputs.
• The Euclidean distance between the source and the target is therefore the square root of ((7 - 8)^2 + (15 - 17)^2), that is, about 2.24.
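
The distance calculation in Example 1 can be reproduced with the short Python sketch below (this is only an illustration of the idea behind ANGEL, not the ANGEL tool itself); the figures are those from the example, and the function names are illustrative.

import math

def euclidean_distance(target, source):
    """Distance between two cases described by the same ordered tuple of parameters."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(target, source)))

def closest_source_case(target, source_cases):
    """Return (name, distance) of the completed project nearest to the target case."""
    return min(
        ((name, euclidean_distance(target, params)) for name, params in source_cases.items()),
        key=lambda pair: pair[1],
    )

# Parameters are (number of inputs, number of outputs), as in Example 1.
target = (7, 15)
source_cases = {"Project A": (8, 17)}
print(closest_source_case(target, source_cases))   # ('Project A', 2.236...), i.e. about 2.24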
Albrecht Function Point Analysis
• Albrecht was investigating programming productivity and needed
some way to quantify the functional size of programs independently
of the programming languages in which they had been coded.
• He developed the idea of function points (FPs).
• The basis of function point analysis is that computer-based
information systems comprise five major components, or external
user types in Albrecht’s terminology, that are of benefit to the users:
– External input types are input transactions that update internal computer
files.
– External output types are transactions where data is output to the user, for example printed reports.
– Logical internal file types are the standing files used by the system. It refers to
a group of data that is usually accessed together. It might be made up of one
or more record types.
– External interface file types allow for output and input that might pass to and
from other computer applications.
– External inquiry types are transactions initiated by the user that provide
information but do not update the internal files.
• The analyst has to identify each instance of each external user type in the proposed system. Each component is then classified as having high, average or low complexity.
• The counts of each external user type in each complexity band are multiplied by specified weights to give FP scores, which are then summed to obtain an overall FP count indicating the information processing size.
• One problem with FPs as originally defined by Albrecht was that the question of whether an external user type was of high, low or average complexity was rather subjective.
• The International FP User Group (IFPUG) has now
promulgated rules on how this is to be judged.
• For example, in the case of logical internal files and external interface files, the complexity boundaries are based on the numbers of record types and data types they involve, as illustrated in Example 2 below.
Example 2
• A logical internal file might contain data about purchase
orders. These purchase orders might be organized into two
separate record types:
• The main PURCHASEORDER details, namely purchase order
number, supplier reference and purchase order date and then
• Details for each PURCHASEORDERITEM specified in the order,
namely the product code, the unit price and number ordered.
• The number of record types for that file would therefore be 2
and the number of data types would be 6.
• According to Table 2 on the previous slide, this file type would be rated as 'low'.
• This would mean that, according to Table 1 on the previous slide, the FP count for this file would be seven.
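
The counting procedure can be sketched in a few lines of Python. The weight table below uses the standard Albrecht/IFPUG weights for unadjusted function points (under which a 'low' logical internal file scores 7, matching Example 2); the names are illustrative, and a real count would follow the full IFPUG rules.

# Standard Albrecht/IFPUG weights for unadjusted function points,
# indexed by external user type and complexity rating.
FP_WEIGHTS = {
    "external_input":          {"low": 3, "average": 4, "high": 6},
    "external_output":         {"low": 4, "average": 5, "high": 7},
    "external_inquiry":        {"low": 3, "average": 4, "high": 6},
    "logical_internal_file":   {"low": 7, "average": 10, "high": 15},
    "external_interface_file": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp_count(components):
    """components: list of (external_user_type, complexity) pairs identified by the analyst."""
    return sum(FP_WEIGHTS[user_type][complexity] for user_type, complexity in components)

# The purchase-order file of Example 2 is a single 'low' logical internal file.
print(unadjusted_fp_count([("logical_internal_file", "low")]))   # 7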
COCOMO Cost Estimation Model
• The COCOMO cost estimation model is used by thousands
of software project managers, and is based on a study of
hundreds of software projects.
• Unlike other cost estimation models, COCOMO is an open
model
• COCOMO estimates are more objective and repeatable than
estimates made by methods relying on proprietary models
• COCOMO can be calibrated to reflect your software
development environment, and to produce more accurate
estimates
• Costar is a faithful implementation of the COCOMO model
that is easy to use on small projects, and yet powerful
enough to plan and control large projects.
• Source Lines of Code
• The most fundamental calculation in the COCOMO model
is the use of the Effort Equation to estimate the number
of Person-Months required to develop a project.
• The COCOMO calculations are based on your estimates of
a project’s size in Source Lines of Code (SLOC). SLOC is
defined such that:
– Only source lines that are DELIVERED as part of the product are included — test drivers and other support software are excluded
– SOURCE lines are created by the project staff — code created
by applications generators is excluded
– One SLOC is one logical line of code
– Declarations are counted as SLOC
– Comments are not counted as SLOC
• The Scale Drivers
• In the COCOMO II model, some of the most
important factors contributing to a project’s
duration and cost are the Scale Drivers.
• You set the 5 Scale Drivers to describe your
project; these Scale Drivers determine the
exponent used in the Effort Equation.
• The 5 Scale Drivers are:
– Precedentedness
– Development Flexibility
– Architecture / Risk Resolution
– Team Cohesion
– Process Maturity
• Cost Drivers
• COCOMO II has 17 cost drivers – you assess your
project, development environment, and team to set
each cost driver.
• The cost drivers are multiplicative factors that
determine the effort required to complete your
software project.
• For example, if your project will develop software
that controls an airplane’s flight, you would set the
Required Software Reliability (RELY) cost driver to
Very High.
• That rating corresponds to more effort than a
typical software project.
COCOMO II Effort Equation
• The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM) based primarily on your estimate of the software project's size (measured in thousands of SLOC, KSLOC):
• Effort = 2.94 * EAF * (KSLOC)^E
Where
– EAF is the Effort Adjustment Factor derived from the Cost Drivers
– E is an exponent derived from the five Scale Drivers
Example,
• A project with all Nominal Cost Drivers and Scale Drivers would have an
EAF of 1.00 and exponent, E, of 1.0997.
• Assuming that the project is projected to consist of 8,000 source lines of
code,
• COCOMO II estimates that 28.9 Person-Months of effort is required to
complete it:
• Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months
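
As a quick check, the nominal-case figure above can be reproduced with the following small Python sketch; the constants 2.94 and 1.0997 are simply the values quoted in the example, and the function name is illustrative.

def cocomo2_effort(ksloc, eaf=1.0, exponent=1.0997):
    """COCOMO II effort equation: Effort (person-months) = 2.94 * EAF * KSLOC^E."""
    return 2.94 * eaf * (ksloc ** exponent)

# 8,000 SLOC (8 KSLOC) with all Nominal cost drivers and scale drivers.
print(round(cocomo2_effort(8), 1))   # 28.9 person-months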
Effort Adjustment Factor

• The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers corresponding to each of the cost drivers for your project.
• For example, if your project is rated Very High for
Complexity (effort multiplier of 1.34), and Low for
Language & Tools Experience (effort multiplier of 1.09),
and all of the other cost drivers are rated to be Nominal
(effort multiplier of 1.00), the EAF is the product of 1.34
and 1.09.
• Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
• Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months
COCOMO II Schedule Equation
• The COCOMO II schedule equation predicts the number of months
required to complete your software project.
• The duration of a project is based on the effort predicted by the
effort equation:
• Duration = 3.67 * (Effort)^SE
Where
– Effort is the effort from the COCOMO II effort equation
– SE is the schedule equation exponent derived from the five Scale Drivers
• Continuing the example, and substituting the exponent of 0.3179
that is calculated from the scale drivers, yields an estimate of just
over a year, and an average staffing of between 3 and 4 people:
• Duration = 3.67 * (42.3)^0.3179 = 12.1 months
• Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people
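
Putting the effort, EAF and schedule equations together, the whole worked example can be reproduced with the Python sketch below; it repeats the cocomo2_effort definition from the earlier sketch so that it is self-contained, and the exponents 1.0997 and 0.3179 are the values quoted in the example.

def cocomo2_effort(ksloc, eaf=1.0, exponent=1.0997):
    """COCOMO II effort equation: Effort (person-months) = 2.94 * EAF * KSLOC^E."""
    return 2.94 * eaf * (ksloc ** exponent)

def cocomo2_duration(effort_pm, schedule_exponent=0.3179):
    """COCOMO II schedule equation: Duration (months) = 3.67 * Effort^SE."""
    return 3.67 * (effort_pm ** schedule_exponent)

# Effort multipliers from the example: Complexity rated Very High (1.34),
# Language & Tools Experience rated Low (1.09), everything else Nominal (1.00).
eaf = 1.34 * 1.09                       # EAF = 1.46
effort = cocomo2_effort(8, eaf)         # about 42.3 person-months
duration = cocomo2_duration(effort)     # about 12.1 months
print(round(effort, 1), round(duration, 1), round(effort / duration, 1))   # 42.3 12.1 3.5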
