
Course Title:

BIT401: Software Project Management

Unit 5: Software Effort Estimation


Unit 5: Software Effort Estimation (5 Hrs.)
 Introduction
 Where are Estimates Done
 Problems with Over- and Under-Estimates
 The Basis for Software Estimating
 Software Effort Estimation Techniques
 Bottom-up Estimating
 The Top-down Approach and Parametric Models
 Expert Judgment
 Estimating by Analogy
 Function Points Mark II
 COSMIC Full Function Points
 COCOMO II: A Parametric Productivity Model
 Cost Estimation
 Staffing Pattern
 Effect of Schedule Compression
 Capers Jones Estimating Rules of Thumb
Introduction

 A successful project is one delivered ‘on time, within budget and with the
required quality’. Normally these targets are set to be reasonable, and the
project manager then tries to meet them.
 Even if a team is highly productive, there are limits to what can be achieved
within certain constraints, such as deadlines based on incorrect initial estimates.
 Realistic estimates are therefore crucial.
 Some of the difficulties of estimating arise from the complexity and invisibility
of software. Also, the intensely human activities (deeply human aspects) which
make up system development cannot be treated in a purely mechanistic way.

Other difficulties include:
Subjective nature of estimating - For example, some research shows that people
tend to underestimate the difficulty of small tasks and over-estimate that of large
ones.
Political implications - Different groups within an organization have different
objectives. The Information Systems Development managers may want to generate
work and will press estimators to reduce cost estimates to encourage higher
management to approve projects. To avoid these ‘political’ influences, one suggestion
is that estimates be produced by a specialist estimating group, independent of the
users and the project team. Not all agree with this, as developers will be more
committed to targets they themselves have set.
Changing technology - Where technologies change rapidly, it is difficult to use the
experience of previous projects on new ones.
Lack of homogeneity of project experience - Even where technologies have not
changed, knowledge about typical task durations may not be easily transferred from
one project to another because of other differences between projects.
Where are Estimates Done?

Estimates are carried out at various stages of a software project for a variety
of reasons.
1. Strategic planning - Project portfolio
management involves estimating the
costs and benefits of new applications in
order to allocate priorities. Such
estimates may also influence the scale of
development staff recruitment.
2. Feasibility study - This confirms
that the benefits of the potential system
will justify the costs.
Software estimation takes place in Steps 3 and 5 in particular.

3. System specification – System development methodologies distinguish users’
requirements from the design which shows how those requirements are to be
fulfilled. The effort needed to implement different design proposals will need
to be estimated. Estimates at the design stage will also confirm that the
feasibility study is still valid.
4. Evaluation of suppliers’ proposals - Consider putting development out to
tender. Potential contractors would scrutinize the system specification and
produce estimates as the basis of their bids. The project’s own estimates could
be used to question a proposal which seems too low, ensuring that the proposer
has properly understood the requirements. The cost of bids could also be
compared to in-house development.
5. Project planning - As the planning and implementation of the project
becomes more detailed, more estimates of smaller work components will be
made. These will confirm earlier broad-brush estimates, and will support more
detailed planning, especially staff allocations.
As the project proceeds, so the accuracy of the estimates should improve as
knowledge about the project increases.
At the beginning of the project, the user requirement is of paramount
importance and premature consideration of the possible physical
implementation is discouraged. However, in order to produce an estimate,
there will need to be speculation about the eventual shape of the application.

 To set estimating into the context of the
Step Wise framework, re-estimating
could take place at almost any step, but
specific provision is made for the
production of a relatively high-level
estimate at Step 3: Analyse project
characteristics, and for each individual
activity in Step-5.
 As Steps 5-8 are repeated at
progressively lower levels, so estimates
will be done at a finer degree of detail.
 As we will see later in this chapter,
different methods of estimating are
needed at these different planning steps.
Problems with Over- and Under-Estimates:
 Motivation and Morale - Setting realistic and achievable targets is important
for maintaining high levels of motivation and morale within an organization.
Unrealistic expectations and repeated failures can have a detrimental impact on
employee engagement and satisfaction.
 Research has found that motivation and morale are enhanced where targets are
achievable. If, over time, staff become aware that the targets set are unattainable and
that projects routinely miss targets, motivation is reduced. People like to think of
themselves as winners and there is a general tendency to put success down to our
own efforts and blame failure on the organization.
A project leader will need to be aware that an over-estimate may cause the project to take
longer than it would otherwise. This can be explained by the application of two ‘laws’.
Parkinson’s Law - “Work expands to fill the time available” - that is, given an easy target staff will
work less hard.
Brooks’ Law – “Putting more people on a late job makes it later” - If there is an over estimate of the
effort required, this could lead to more staff being allocated than needed and managerial overheads being
increased. As the project team grows in size, so will the effort that has to go into management,
coordination and communication.

Some have suggested that while the under-estimated project might not be completed on
time or to cost, it might still be implemented in a shorter time than a project with a more
generous estimate.

Under-estimating tasks can compromise quality as inexperienced staff may rush to meet
deadlines, resulting in substandard work.
This is a consequence of Weinberg’s zeroth law of reliability: a system that does not
have to be reliable can meet any other objective. The impact of substandard work
often surfaces during testing, leading to situations that are difficult to control and to
project delays caused by extensive rework.

An estimate is not really a prediction; it is a management goal. Boehm proposes that if the
actual cost is within 20% of the estimated cost, a skilled manager can influence the
outcome to align with the estimate, turning it into a self-fulfilling prophecy.

Basis for Software Estimating


(Methods and factors considered when estimating software projects)

1. Historical data
2. Parameters to be estimated
3. Measure of work

The Basis for Software Estimating:
Software estimating is the process of predicting the effort, time, and resources
required to complete a software development project. Accurate estimation is
crucial for project planning, budgeting, and resource allocation. There are several
methods and factors considered when estimating software projects.
Some key aspects:
1. Historical data - Historical data provides a valuable foundation for predicting
the resources, time, and costs required for future software development
projects.
2. Parameters to be estimated - The project manager needs to estimate two
project parameters for carrying out project planning: effort (in person-months)
and duration (in months).
3. Measure of work - Measure of work involved in completing a project is also
called the size of the project i.e. Source Lines of Code (SLOC), Function
Points (FP), Use Case Points (UCP), Story Points (for Agile projects)
1. The need for historical data - Estimating methods often rely on past project data,
but caution is required due to potential differences in factors like programming
languages and staff experience. In the absence of past project information, external
datasets like the International Software Benchmarking Standards Group (ISBSG) can be
accessed, providing data from 4800 projects for analysis. Historical data provides a
valuable foundation for predicting the resources, time, and costs required for future
software development projects.
2. Parameters to be estimated - In project management, the estimation of effort and
duration is crucial for effective project planning. Effort is measured in person-months
(pm), representing the work an individual can typically contribute in a month. This unit
accounts for productivity losses due to holidays, breaks, etc. Duration, typically measured
in months, is a key parameter. Person-month is preferred over person-days or person-
years, aligning with the usual project assignment duration.

3. Measure of work - The measure of work involved in completing a project is also
called the size of the project. Work itself can be characterized by cost in
accomplishing the project and the time over which it is to be completed.
Direct calculation of cost or time is difficult at the early stages of planning.
The project size is a measure of the problem complexity in terms of the effort
and time required to develop the product. Two metrics are at present
popularly being used to measure size. These are Source Lines of Code
(SLOC) and Function Point (FP).
The Basis for Software Estimating – Disadvantages of SLOC
The SLOC measure suffers from various types of disadvantages, which are to a great extent
corrected in the FP measure. However, the SLOC measure is intuitively simpler, so it is still
being widely used.

Limitations of SLOC:
No precise definition - Researchers have not been consistent on points such as whether
comment lines or data declarations should be included.
Difficult to estimate at start of a project - can be accurately computed only after the
development of the software is complete. The SLOC count can only be guessed at the
beginning of a project, often leading to grossly inaccurate estimations.
Only a code measure - effort is required for carrying out all the life cycle activities,
not just coding, yet SLOC measures only the code.
Programmer-dependent - problem size can vary widely with the coding style of individual
programmers. This aspect alone renders any LOC-based size and effort estimations inaccurate.
Does not consider code complexity - Two software components with the same SLOC will not
necessarily take the same time to write, even if done by the same programmer in the same
environment. One component might be more complex. Attempts have been made to find
objective measures of complexity.

Software Effort Estimation Techniques


1. Bottom-up Estimating
2. The Top-down Approach and Parametric Models
3. Expert Judgment
4. Estimating by Analogy
5. Function Points Mark II
6. COSMIC Full Function Points
7. COCOMO II : A Parametric Productivity Model
Software effort estimation is the process of predicting the amount of effort, time, and
resources required to develop a software system. Accurate effort estimation is crucial for
project planning, resource allocation, and budgeting.

Various techniques are used for software effort estimation, and they can be broadly
categorized into two types: Expert Judgment Techniques and Algorithmic Techniques.
1. Expert Judgment Techniques:
a. Delphi Technique: Involves a group of experts who anonymously provide their
estimates on a particular task. A facilitator collects, summarizes, and redistributes the
estimates until a consensus is reached.
b. Expert Opinion: Involves seeking opinions and judgments from domain experts,
project managers, and experienced team members. This method is subjective and relies
on the expertise of individuals.
c. Analogy-Based Estimation: Estimates are derived by comparing the current project
with similar past projects. Requires a historical database of completed projects for
reference.

Barry Boehm, in his classic work on software effort models, identified the main ways
of deriving estimates of software development effort as:
 Algorithmic models - which use ‘effort drivers’ representing characteristics of the
target system and the implementation environment to predict effort;
 Expert judgement - based on the advice of knowledgeable staff;
 Analogy - where a similar, completed, project is identified and its actual effort is
used as the basis of the estimate;
 Parkinson - where the staff effort available to do a project becomes the ‘estimate’;
 Price to win - where the ‘estimate’ is a figure that seems sufficiently low to win a
contract;
 Top-down - where an overall estimate for the whole project is broken down into the
effort required for component tasks;
 Bottom-up - where component tasks are identified and sized and these individual
estimates are aggregated.

Note: The ‘Parkinson’ method and ‘price to win’ are not true effort or price prediction methods but rather
tools for defining project scope and determining a competitive pricing strategy. While they may not be
effective for prediction, they offer value as management techniques, as acknowledged by Boehm.
Bottom-up Estimating - Bottom-up estimating is an approach to software effort
estimation where the project is broken down into smaller, more manageable
components, and estimates are made for each individual component. These estimates
are then aggregated to arrive at an overall estimate for the entire project.
With a large project, the process of breaking it down into tasks is iterative: each task
is decomposed into its component subtasks and these in turn could be further
analysed. It is suggested that this is repeated until you get tasks an individual could do
in a week or two.
It is really a separate process of producing a Work Breakdown Structure (WBS).
Break down the project into smaller, more manageable tasks or work packages.
Create a hierarchical structure known as the Work Breakdown Structure (WBS),
which organizes the tasks in a logical order.
The bottom-up approach in project planning is most effective during later stages
when detailed information is available. Using it earlier requires making assumptions
about the final system and project methods, which may lead to inaccuracies.

Bottom-up Estimating - Step-by-step guide to bottom-up estimating for software effort
estimation (A procedural code-oriented approach):
(a) Envisage the number and type of software modules in the final system - Most
information systems, for example, are built from a small set of system operations, e.g.
Insert, Amend, Update, Display, Delete, Print. The same principle should equally apply to
embedded systems, but with a different set of primitive functions.
(b) Estimate the SLOC of each identified module - The approach involves creating a
program structure diagram to outline identified procedures and estimating the number of
instructions needed for each. This estimation is guided by visualizing the implementation
of each procedure. Existing programs with similar functionality may be referenced to aid
in gauging the likely number of instructions in the program.
(c) Estimate the work content, taking into account complexity and technical difficulty -
In software estimation, Source Lines of Code (SLOC) are multiplied by a subjective
factor to account for complexity and technical difficulty. This factor is determined based
on the estimator's subjective judgment, considering factors like meeting specific
performance targets, which can significantly impact programming effort.
(d) Calculate the work-days effort - Historical data can be used to provide ratios to convert
weighted SLOC to effort. i.e. effort = (system size) X (productivity rate)
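As a sketch, the four steps above can be combined in a few lines of code; the module names, SLOC figures, complexity weights and productivity rate below are invented for illustration:

```python
# Bottom-up estimating sketch: all figures are illustrative assumptions.
modules = {
    # module name: (estimated SLOC, subjective complexity weight)
    "insert_order":  (400, 1.0),
    "amend_order":   (350, 1.2),   # assumed slightly trickier logic
    "print_invoice": (250, 0.8),
}

PRODUCTIVITY = 0.04  # work-days per SLOC, assumed from historical data

def workdays_effort(modules, productivity):
    """Weight each module's SLOC by its complexity factor, sum, then
    convert weighted SLOC to effort via the productivity ratio."""
    weighted_sloc = sum(sloc * weight for sloc, weight in modules.values())
    return weighted_sloc * productivity

print(round(workdays_effort(modules, PRODUCTIVITY), 1))  # 40.8 work-days
```

The aggregation step is just the sum over modules; a real WBS would of course decompose further before numbers this small become credible.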
The Top-down Approach and Parametric Models -
The top-down approach is normally associated with parametric (or algorithmic) models.
Project effort relates mainly to variables associated with characteristics of the final
system. A parametric model will normally have one or more formulae in the form:
effort = (system size) x (productivity rate)
For example, system size might be in the form ‘thousands of lines of code’ (KLOC) and
have the specific value of 3 KLOC while the productivity rate was 40 days per KLOC.
A model to forecast software development effort therefore has two key components. The
first is a method of assessing the amount of the work needed. The second assesses the rate
of work at which the task can be done.
For example, a planner may estimate that the first software module to be constructed is 2
KLOC. One developer, with their expertise, could work at a rate of 40 days per KLOC
and complete the work in 2 x 40 days, i.e. 80 days, while another developer, who is
less experienced, would need 55 days per KLOC and take 2 x 55, i.e. 110 days to complete
the task. In this case KLOC is a size driver indicating the amount of work to be done,
while developer experience is a productivity driver influencing the productivity or work
rate.
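A minimal sketch of this calculation, with the figures taken from the example:

```python
# Parametric estimate: effort = (system size) x (productivity rate).
size_kloc = 2                            # size driver: a 2 KLOC module
rates = {"experienced": 40, "novice": 55}  # productivity driver: days/KLOC

efforts = {who: size_kloc * rate for who, rate in rates.items()}
print(efforts)  # {'experienced': 80, 'novice': 110}
```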

The Top-down Approach and Parametric Models -
If we have figures for the effort expended on past projects (in work-days for instance) and
also the system sizes in KLOC, we should be able to work out a productivity rate as

productivity = effort/size

A more sophisticated way of doing this would be by using the statistical technique least
squares regression to derive an equation in the form:

effort = constant1 + (size x constant2)

Some parametric models, such as that implied by function points, are focused on system
or task size, while others, such as COCOMO, are more concerned with productivity
factors.
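A least-squares fit of this form can be sketched without any statistics library; the past-project figures below are invented for illustration:

```python
# Least-squares fit of effort = constant1 + (size x constant2) to past
# project data. The sizes/efforts figures are invented for illustration.
sizes = [1, 2, 3, 4]             # KLOC of past projects
efforts = [50, 90, 130, 170]     # recorded work-days

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(efforts) / n

# Standard least-squares slope (constant2) and intercept (constant1).
constant2 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, efforts))
             / sum((x - mean_x) ** 2 for x in sizes))
constant1 = mean_y - constant2 * mean_x

predicted = constant1 + constant2 * 6   # forecast for a new 6 KLOC project
print(constant1, constant2, predicted)
```

With this (deliberately clean) data the fit is effort = 10 + 40 x size, so a 6 KLOC project is forecast at 250 work-days; real historical data would scatter around the line.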
The Top-down Approach and Parametric Models -

The first step is to calculate the total effort required for a project. Once the
overall effort is determined, the challenge is to distribute or allocate portions
of that effort to different activities within the project.

Top-Down Approach: Involves starting with an overview of the project and
then breaking it down into smaller tasks. The estimates are made at a higher
level and then distributed to the lower-level tasks.
Bottom-Up Approach: Involves estimating the individual tasks or
components first and then aggregating them to get an overall estimate. This
approach is more detailed and specific.

Expert Judgement:

Expert judgment involves seeking input from individuals with knowledge of
either the application or development environment to estimate the effort
required for a task; it is commonly used for modifying existing software.
This method relies on assessing the affected code, drawing estimates based on
familiarity, and often combines informal analogies and bottom-up estimating.
While some perceive expert judgment as guesswork, research indicates that
experts use a blend of past project analogies and detailed analysis. In certain
situations, the Delphi technique may be employed for collaborative decision-
making involving multiple experts.
Estimating by Analogy:
This is also called case-based reasoning. In case-based reasoning, the estimator uses
completed projects with similar characteristics (source cases) to estimate the
effort for a new project (target case). The effort of the matching source case
serves as the base estimate, adjusted for differences between the target and
source projects. This method is useful when there's limited data for
generalized conclusions about project drivers or productivity rates.
A problem is identifying the similarities and differences between applications
where you have a large number of past projects to analyse. One attempt to
automate this selection process is the ANGEL software tool. This identifies the
source case that is nearest the target by measuring the Euclidean distance
between cases. The Euclidean distance is calculated as:
distance = √((target_parameter1 – source_parameter1)² + . . . + (target_parametern – source_parametern)²)

Estimating by Analogy:
EXAMPLE:
Say that the cases are being matched on the basis of two parameters, the
number of inputs to and the number of outputs from the application to be built.
The new project is known to require 7 inputs and 15 outputs. One of the past
cases, project A, has 8 inputs and 17 outputs.

The Euclidean distance between the source and the target is therefore the
square root of ((7 – 8)² + (15 – 17)²) = √5, that is 2.24.

The above explanation is simply to give an idea of how the Euclidean distance
may be calculated. The ANGEL package uses rather more sophisticated
algorithms based on this principle.
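The matching idea can be sketched in a few lines. Project A's figures come from the example above; ‘project B’ and its figures are invented so the search has something to choose between:

```python
import math

# Analogy-based selection sketch: find the past project (source case)
# nearest the new project (target case) by Euclidean distance, as the
# ANGEL tool does in a more sophisticated form.
target = (7, 15)               # (inputs, outputs) of the new project
sources = {
    "project A": (8, 17),      # from the example
    "project B": (5, 10),      # invented second case
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

nearest = min(sources, key=lambda name: euclidean(target, sources[name]))
print(nearest, round(euclidean(target, sources[nearest]), 2))
# project A is nearest: √((7-8)² + (15-17)²) = √5 ≈ 2.24
```

The effort recorded for the nearest source case would then serve as the base estimate, adjusted for the remaining differences.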
Albrecht Function Point Analysis -
Function Point Analysis is a method used for estimating the size and complexity of
software based on the functionality it delivers to the user. It was developed by Allan
Albrecht in the late 1970s. The main idea is to quantify the software in terms of
function points, which are a measure of the functionality provided by the software.
Function points are calculated based on five major components (or ‘external user
types’ in Albrecht’s terminology) such as inputs, outputs, inquiries, internal files, and
external interfaces, that are of benefit to the users.
1. External input types are input transactions which update internal computer files.
These are elementary processes where data crosses the boundary from outside to
inside the system. Examples: User inputs, data uploads, or data feeds from
external sources.
2. External output types are transactions where data is output to the user. These
are elementary processes that send data from inside to outside the boundary of
the system. Examples: Reports, screen displays, or data sent to external systems.

Albrecht Function Point Analysis –

3. External inquiry types are transactions initiated by the user which provide information
but do not update the internal files. The user inputs some information that directs the
system to the details required. Examples: Online queries, information retrieval requests.

4. Logical internal file types are the standing files used by the system. These are files
maintained within the boundary of the system that store data used by the application.
Examples: Databases (relational tables or entity types), data stores, or data repositories.
5. External interface file types allow for output and input that may pass to and from other
computer applications. Examples of this would be the transmission of accounting data
from an order processing system to the main ledger system or the production of a file of
direct debit details on a magnetic or electronic medium to be passed to the Bankers
Automated Clearing System (BACS). Files shared between applications would also be
counted here.
Albrecht Function Point Analysis -
The analyst identifies each instance of each external user type in the application. Each
component is then classified as having either high, average or low complexity. The counts
of each external user type in each complexity band are multiplied by specified weights
(see Table) to get FP scores which are summed to obtain an overall FP count which
indicates the information processing size.
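Since the weights table is not reproduced here, the sketch below uses the standard Albrecht/IFPUG weights for the (low, average, high) complexity bands; the component counts are invented for illustration:

```python
# Unadjusted FP count: counts per complexity band are multiplied by the
# standard Albrecht/IFPUG weights (low, average, high) and summed.
WEIGHTS = {
    "external input":          (3, 4, 6),
    "external output":         (4, 5, 7),
    "external inquiry":        (3, 4, 6),
    "logical internal file":   (7, 10, 15),
    "external interface file": (5, 7, 10),
}

# Counts for a hypothetical application: (low, average, high) instances.
counts = {
    "external input":          (5, 3, 1),
    "external output":         (4, 2, 0),
    "external inquiry":        (2, 2, 0),
    "logical internal file":   (1, 2, 0),
    "external interface file": (0, 1, 0),
}

ufp = sum(c * w
          for t in WEIGHTS
          for c, w in zip(counts[t], WEIGHTS[t]))
print(ufp)  # overall unadjusted FP count: 107
```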

Albrecht Function Point Analysis -
Originally, in Albrecht’s definition of Function Points (FPs), the determination of
whether an external user type was of high, low, or average complexity was intuitive.
However, the International FP User Group (IFPUG) has now established rules for
assessing this complexity. For instance, when dealing with logical internal files and
external interface files, specific boundaries outlined in Table 5.3 are used to determine
the complexity level. Similar tables are also available for assessing the complexity of
external inputs and outputs.

The sum of the weighted counts of the five major components (or ‘external user types’) is called the
Unadjusted Function Point (UFP) count.
Albrecht Function Point Analysis -
Function point analysis recognizes that the effort required to implement a computer-based
information system relates not just to the number and complexity of the features provided
but also to the operational environment of the system.
Fourteen factors have been identified which can influence the degree of difficulty
associated with implementing a system. The list that Albrecht produced related
particularly to the concerns of information system developers in the late 1970s and early
1980s. Some technology which was then new and relatively threatening is now well
established.
The technical complexity adjustment (TCA) calculation has had many problems.

After obtaining the Unadjusted Function Point count, it serves as a basis for further
adjustments to account for various complexity factors. The adjustments involve assigning
weights to different aspects of the software based on factors such as data
communications, distributed data processing, performance, and more.

Function Points Mark II -
Function Points (FP) is a software metric used in software engineering for the purpose of
function point analysis (FPA). The Function Points Mark II (FP Mark II) is an extension
of the original Function Points method, which was introduced by Allan J. Albrecht. FP
Mark II is an enhancement to the Function Points method and aims to provide a more
refined and accurate way of estimating the effort required for software development.

The Mark II method, sponsored by the CCTA (now OGC), is a UK government project
standard. It was intended as an improvement on the Albrecht method (now governed by
IFPUG), hence the label ‘Mark II’. Despite its refinements to the Albrecht method, FPA
Mark II remains a minority method, primarily employed in the United Kingdom.
Function Points Mark II -
As with Albrecht, the information processing size is initially measured in unadjusted
function points (UFPs) to which a Technical Complexity Adjustment(TCA) can then be
applied. The assumption is that an information system comprises transactions which have
the basic structure shown in Figure 5.2.

For each transaction the UFPs are calculated:


Wi x (number of input data element types) +
We x (number of entity types referenced) +
Wo x (number of output data element types)

Wi, We, and Wo are weightings derived by asking developers the proportions of effort
spent in previous projects developing the code dealing respectively with inputs, accessing
and modifying stored data and processing outputs.
The proportions of effort are then normalized into ratios, or weightings, which add up to
2.5. This process for calculating weightings is time consuming and most FP counters use
industry averages which are currently 0.58 for Wi, 1.66 for We and 0.26 for Wo.
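A sketch of the per-transaction calculation, using the industry-average weightings quoted above; the transaction’s counts are invented for illustration:

```python
# Mark II UFPs for one transaction:
#   UFP = Wi*(input data element types) + We*(entity types referenced)
#       + Wo*(output data element types)
W_I, W_E, W_O = 0.58, 1.66, 0.26   # industry-average weightings

def mark2_ufp(inputs, entities, outputs):
    return W_I * inputs + W_E * entities + W_O * outputs

# e.g. a transaction with 10 input data element types, 3 entity types
# referenced and 6 output data element types (hypothetical figures):
print(round(mark2_ufp(10, 3, 6), 2))  # 12.34 UFPs
```

The whole system's size is then the sum of the UFPs of all its transactions.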

COSMIC Full Function Points -
Function points as defined by the International FP User Group (IFPUG) are suitable for
information systems, but they are not helpful when it comes to sizing real-time or embedded
applications. This has resulted in the development of another version of function points – the
COSMIC FFP (Common Software Measurement International Consortium) method.
Existing Function Points methods are effective in assessing the work content of information
systems where the size of the internal procedures mirrors the number of external features. With
a real-time, or embedded system, its features will be hidden because the software’s user will
probably not be a human but a hardware device or another software component.
COSMIC addresses software sizing by breaking down the system architecture into a layered
structure. Each software component within this hierarchy can both receive requests from
higher layers and request services from lower layers. Peer-to-peer communication may also
occur among components at the same level. This approach helps identify the boundaries of the
software component being assessed, determining where it receives inputs and transmits
outputs. The inputs and outputs are then organized into data groups, each grouping together
data items that pertain to the same object of interest.
COSMIC Full Function Points -
Data groups can be moved about in four ways:
 entries (E), which are effected by sub-processes that move the data group into
the software component in question from a ‘user’ outside its boundary – this
could be from another layer or another separate software component in the same
layer via peer-to-peer communication;
 exits (X), which are effected by sub-processes that move the data group from the
software component to a ‘user’ outside its boundary;
 reads (R), which are data movements that move data groups from persistent
storage (such as a database) into the software component;
 writes (W), which are data movements that transfer data groups from the
software component into persistent storage.
The overall FFP count is derived by simply adding up the counts for each of the four types of
data movement. The resulting units are Cfsu (COSMIC functional size units). The method
does not take account of any processing of the data groups once they have been moved into
the software component.
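The counting rule can be sketched directly; the two functional processes and their movement counts below are invented for illustration:

```python
# COSMIC FFP sizing sketch: the functional size in Cfsu is simply the
# total number of data movements of the four kinds (E, X, R, W).
processes = {
    # process: counts of Entries, eXits, Reads, Writes (assumed figures)
    "record temperature": {"E": 1, "X": 0, "R": 1, "W": 1},
    "report average":     {"E": 1, "X": 1, "R": 2, "W": 0},
}

size_cfsu = sum(sum(moves.values()) for moves in processes.values())
print(size_cfsu)  # 3 + 4 = 7 Cfsu
```

Note that, as the text says, any processing of the data groups between movements contributes nothing to the count.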

COCOMO II: A Parametric Productivity Model
COCOMO II, or Constructive Cost Model II, is a software development effort
estimation model that builds upon its predecessor, COCOMO (COnstructive COst
MOdel). COCOMO II is a parametric model developed by Barry Boehm, and it
provides a structured and systematic approach for estimating the effort, cost, and
schedule for a software project.
The original COCOMO model could be used with applications other than information systems. The basic model was built around the equation

effort = c × (size)^k

where effort was measured in pm (person-months), size was measured in kdsi (thousands of delivered source code instructions), and c and k were constants.
The first step was to derive an estimate of the system size in terms of kdsi. The
constants, c and k (see Table 5.4), depended on whether the system could be
classified, in Boehm’s terms, as ‘organic’, ‘semi-detached’ or ‘embedded’.
These related to the technical nature of the system and the development environment.
 Organic mode: This would typically be the case when relatively small teams
developed software in a highly familiar in-house environment and when the
system being developed was small and the interface requirements were flexible.
 Embedded mode: This meant that the product being developed had to operate
within very tight constraints and changes to the system were very costly.
 Semi-detached mode: This combined elements of the organic and the embedded
modes or had characteristics that came between the two.
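The basic model can be sketched as follows. The (c, k) pairs used here are the widely quoted COCOMO 81 constants (organic 2.4/1.05, semi-detached 3.0/1.12, embedded 3.6/1.20); Table 5.4 is assumed to contain comparable figures.

```python
# Basic COCOMO sketch: effort = c * (size)^k, with size in kdsi.
# The (c, k) constants are the commonly cited COCOMO 81 values.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(size_kdsi, mode):
    """Estimated effort in person-months for the given development mode."""
    c, k = MODES[mode]
    return c * size_kdsi ** k

# An embedded system of the same size needs noticeably more effort than an
# organic one, reflecting its tighter constraints.
for mode in MODES:
    print(mode, round(basic_cocomo_effort(32, mode), 1))
```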
As we noted earlier, estimates are required at different stages in the system life cycle
and COCOMO II has been designed to accommodate this by having models for three
different stages.
 Application composition - Here the external features of the system that the users will experience are designed. Prototyping will typically be employed to do this. With small applications that can be built using high-productivity application-building tools, development can stop at this point.
 Early design - Here the fundamental software structures are designed. With
larger, more demanding systems, where, for example, there will be large volumes
of transactions and performance is important, careful attention will need to be
paid to the architecture to be adopted.
 Post architecture - Here the software structures undergo final construction,
modification and tuning to create a system that will perform as required.
At the early design stage, FPs are recommended as the way of gauging a basic system
size. An FP count may be converted to an LOC equivalent by multiplying the FPs by
a factor for the programming language that is to be used.

The following model can then be used to calculate an estimate of person-months:

pm = A × (size)^sf × em1 × em2 × . . . × emn

where pm is the effort in person-months, A is a constant (set in 2000 at 2.94), size is measured in kdsi (which may have been derived from an FP count as explained above), sf is the exponent scale factor and em1 . . . emn are effort multipliers.

The scale factor is derived thus:

sf = B + 0.01 × ∑ (exponent driver ratings)

where B is a constant currently set at 0.91. The effect of the exponent ('. . . to the power of . . .') scale factor is to increase the effort predicted for larger projects, that is, to take account of diseconomies of scale which make larger projects less productive.
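The two formulas can be combined in a short sketch. A = 2.94 and B = 0.91 are the values stated in the text; the driver ratings and effort multipliers below are purely illustrative.

```python
# COCOMO II early-design sketch: pm = A * size^sf * em1 * ... * emn,
# with sf = B + 0.01 * sum(exponent driver ratings).
import math

A, B = 2.94, 0.91

def cocomo2_effort(size_kdsi, exponent_drivers, effort_multipliers):
    """Estimated effort in person-months."""
    sf = B + 0.01 * sum(exponent_drivers)
    return A * size_kdsi ** sf * math.prod(effort_multipliers)

# Hypothetical project: 20 kdsi, five scale-driver ratings, two multipliers.
pm = cocomo2_effort(20, [3.72, 3.04, 4.24, 3.29, 4.68], [1.1, 0.9])
print(round(pm, 1))
```

Note how the scale factor sits in the exponent: doubling the size more than doubles the predicted effort, which is the diseconomy of scale the text describes.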
Cost Estimation:

Cost Estimation = Software Development cost (Effort) + Overhead Costs

 Project cost can be obtained by multiplying the estimated effort (in person-months, from the effort estimate) by the manpower cost per month. This computation rests on the assumption that the entire project cost is incurred on account of the manpower cost alone.
 However, in addition to manpower cost, a project would incur several other types
of costs which we shall refer to as the overhead costs. The overhead costs would
include the costs of hardware and software required for the project and the
company overheads for administration, office space, etc.
 Depending on the expected values of the overhead costs, the project manager has
to suitably scale up the cost estimated by using the COCOMO formula.
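The cost relation above is simple enough to express directly; all figures in this sketch are illustrative.

```python
# Cost-estimation sketch following the text: manpower cost (effort times
# the per-person-month rate) plus the overhead costs.

def project_cost(effort_pm, cost_per_pm, overhead):
    """Total cost = software development (manpower) cost + overhead costs."""
    return effort_pm * cost_per_pm + overhead

cost = project_cost(effort_pm=50, cost_per_pm=8000, overhead=60000)
print(cost)  # 460000
```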
Staffing Pattern:
 After the effort required to complete a software project has been estimated, the
staffing requirement for the project can be determined.
 Putnam was the first to study the problem of what should be a proper staffing
pattern for software projects.
 He extended the classical work of Norden who had earlier investigated the
staffing pattern of general research and development (R&D) type of projects.
 In order to appreciate the staffing pattern desirable for software projects, we must
understand both Norden’s and Putnam’s results.
Norden’s work - Norden found the staffing patterns of R&D projects to be very different from those of manufacturing or sales work. In a sales outlet, the number of sales staff does not usually vary with time. For example, in a supermarket the number of sales personnel depends on the number of sales counters alone, and therefore remains fixed for years.
However, the staffing pattern of R&D type of projects changes dynamically over time for
efficient manpower utilization. At the start of an R&D project, the activities of the project
are planned and initial investigations are made. During this time, the manpower
requirements are low. As the project progresses, the manpower requirement increases
until it reaches a peak. Thereafter the manpower requirement gradually diminishes.
Norden concluded that the staffing pattern for any R&D project can be approximated by
the Rayleigh distribution curve.

FIGURE: Rayleigh–Norden Curve


Putnam’s work - Putnam studied the problem of staffing of software projects and found
that the staffing pattern for software development projects has characteristics very similar
to R&D projects.
Putnam adapted the Rayleigh–Norden curve to relate the number of delivered lines of
code to the effort and the time required to develop the product. Only a small number of
developers are needed at the beginning of a project to carry out the planning and
specification tasks. As the project progresses and more detailed work is performed, the
number of developers increases and reaches a peak at the time of product delivery, which has been shown to occur at time TD in the figure. After product delivery, the number of project staff falls steadily during product maintenance.
Putnam suggested that starting from a small number of
developers, there should be a staff build-up and after a
peak size has been achieved, staff reduction is required.
However, the staff build-up should not be carried out in
large installments. Experience shows that a very rapid
build-up of project staff any time during the project
development correlates with schedule slippage.
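The Rayleigh–Norden curve can be sketched numerically. The formula below, m(t) = (K / td²) · t · exp(−t² / (2 td²)), is the standard textbook form of the curve, where K is the total project effort and td the time of peak staffing (product delivery); the K and td values used are illustrative.

```python
# Sketch of the Rayleigh-Norden staffing curve underlying Putnam's work.
import math

def rayleigh_staff(t, total_effort_k, td):
    """Staffing level at time t, peaking at t = td."""
    return (total_effort_k / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

K, td = 100.0, 12.0  # 100 person-months of total effort, delivery at month 12
# Staffing builds up, peaks at td, then tails off during maintenance.
print([round(rayleigh_staff(t, K, td), 2) for t in (3, 12, 24)])
```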
Effect of Schedule Compression:
It is quite common for a project manager to encounter client requests to deliver products faster, that is, to compress the delivery schedule. It is therefore important to understand the impact of schedule compression on project cost. Putnam studied the effect of schedule compression on the development effort and expressed it in the form of the following equation:

pm_new = pm_org × (td_org / td_new)^4

where pm_new is the new effort, pm_org is the originally estimated effort, td_org is the originally estimated time for project completion and td_new is the compressed schedule.
From this expression, it can easily be observed that when the schedule of a project is
compressed, the required effort increases in proportion to the fourth power of the degree of
compression. It means that a relatively small compression in a delivery schedule can result in
substantial penalty on human effort. For example, if the estimated development time using
COCOMO formula is one year, then in order to develop the product in six months, the total
effort required (and hence the project cost) increases 16 times.
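The fourth-power relation described in the text can be checked directly; the effort figure used here is illustrative.

```python
# Putnam's schedule-compression relation: effort grows with the fourth
# power of the degree of compression (td_org / td_new).

def compressed_effort(pm_org, td_org, td_new):
    """New effort after compressing the schedule from td_org to td_new."""
    return pm_org * (td_org / td_new) ** 4

# The text's example: halving a one-year schedule multiplies effort by 16.
print(compressed_effort(pm_org=100, td_org=12, td_new=6))  # 1600.0
```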
Boehm arrived at the result that there is a limit beyond which a software project cannot
reduce its schedule by buying any more personnel or equipment. This limit occurs
roughly at 75% of the nominal time estimate for small and medium sized projects.
Thus, if a project manager accepts a customer demand to compress the development schedule of a typical (small or medium-sized) project by more than 25%, he is very unlikely to succeed. The main reason is that every project has only a limited number of activities that can be carried out in parallel, and the sequential activities cannot be speeded up by hiring any number of additional developers.
Capers Jones Estimating Rules of Thumb:
 Rule-1: SLOC-Function Point Equivalence - One function point = 125 SLOC for C programs.
 Rule-2: Project Duration Estimation - Function points raised to the power 0.4 predicts the approximate development time in calendar months.
 Rule-3: Rate of Requirements Creep - User requirements creep in at an average rate of 2% per month from the design through coding phases.
 Rule-4: Defect Removal Efficiency - Each software review, inspection, or test step will find and remove 30% of the bugs that are present.
 Rule-5: Project Manpower Estimation - The size of the software (in function points) divided by 150 predicts the approximate number of personnel required for developing the application.
 Rule-6: Software Development Effort Estimation - The approximate number of staff-months of effort required to develop a software product is given by the software development time multiplied by the number of personnel required.
 Rule-7: Personnel Required for Regular Maintenance Activities - Function points divided by 500 predicts the approximate number of personnel.
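The quantitative rules above collapse into one-line functions; the constants (125, 0.4, 150, 500) are the values quoted in the text, and the 1500-FP application is a hypothetical example.

```python
# Capers Jones rules of thumb as simple functions.

def sloc_from_fp(fp):          # Rule 1: one FP ~ 125 SLOC of C
    return fp * 125

def duration_months(fp):       # Rule 2: FP ** 0.4 calendar months
    return fp ** 0.4

def staff_size(fp):            # Rule 5: FP / 150 developers
    return fp / 150

def effort_staff_months(fp):   # Rule 6: development time x staff
    return duration_months(fp) * staff_size(fp)

def maintenance_staff(fp):     # Rule 7: FP / 500 maintainers
    return fp / 500

fp = 1500  # a hypothetical 1500-FP application
print(sloc_from_fp(fp), round(duration_months(fp), 1),
      round(staff_size(fp)), round(effort_staff_months(fp), 1))
```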