Spiral Model of The Software Process

Spiral model

The spiral model is shown in Figure 7. Here, the software process is represented as a spiral, rather
than a sequence of activities with some backtracking from one activity to another. Each loop in the spiral
represents a phase of the software process. Thus, the innermost loop might be concerned with system
feasibility, the next loop with requirements definition, the next loop with system design, and so on. The
spiral model combines change avoidance with change tolerance. It assumes that changes are a result of
project risks and includes explicit risk management activities to reduce these risks.

Figure 7: Spiral model of the software process

Each loop in the spiral is split into four sectors:


 Objective setting: Specific objectives for that phase of the project are defined. Constraints on the
process and the product are identified and a detailed management plan is drawn up. Project risks are
identified. Alternative strategies, depending on these risks, may be planned.
 Risk assessment and reduction: For each of the identified project risks, a detailed analysis is carried
out. Steps are taken to reduce the risk.
 Development and validation: After risk evaluation, a development model for the system is chosen.
 Planning: The project is reviewed and a decision made whether to continue with a further loop of the
spiral. If it is decided to continue, plans are drawn up for the next phase of the project.

The main difference between the spiral model and other software process models is its explicit
recognition of risk.
WHAT IS OBJECT ORIENTATION?
□ Definition: OO means that we organize software as a collection of discrete objects
(that incorporate both data structure and behavior).
□ There are four aspects (characteristics) required by an OO approach:
 Identity.
 Classification.
 Inheritance.
 Polymorphism.
□ Identity:
 Identity means that data is quantized into discrete, distinguishable entities
called objects.
 E.g. of objects: a personal computer, a bicycle, the queen in chess, etc.

Objects can be concrete (such as a file in a file system) or conceptual (such
as scheduling policy in a multiprocessing OS). Each object has its own inherent
identity (i.e., two objects are distinct even if all their attribute values are identical).
 In programming languages, an object is referenced by a unique handle.
□ Classification:
 Classification means that objects with the same data structure (attribute) and
behavior (operations) are grouped into a class.
 E.g. paragraph, monitor, chess piece.
 Each object is said to be an instance of its class.
 Fig below shows objects and classes: Each class describes a possibly infinite
set of individual objects.
□ Inheritance:
 It is the sharing of attributes and operations (features) among classes based
on a hierarchical relationship. A super class has general information that sub classes
refine and elaborate.
 E.g. Scrolling window and fixed window are sub classes of window.
□ Polymorphism:
 Polymorphism means that the same operation may behave differently for
different classes.
 E.g., the move operation behaves differently for a pawn than for the queen in
a chess game.
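The four characteristics above can be seen together in a short sketch. This is purely illustrative (the classes, board coordinates, and simplified move rules are assumptions, not from the text): Pawn and Queen inherit the shared data structure of a Piece superclass, and the same moves operation behaves differently for each subclass.

```python
class Piece:
    """Superclass: shared data structure (position) and a common interface."""
    def __init__(self, row, col):
        self.row, self.col = row, col

    def moves(self):
        raise NotImplementedError  # each subclass supplies its own behavior


class Pawn(Piece):
    def moves(self):
        # Simplified rule: a pawn advances one square forward.
        return [(self.row + 1, self.col)]


class Queen(Piece):
    def moves(self):
        # Simplified rule: one square in any of the eight directions.
        deltas = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
        return [(self.row + dr, self.col + dc) for dr, dc in deltas]


# Polymorphism: the same `moves` call behaves differently per class.
for piece in (Pawn(1, 0), Queen(3, 3)):
    print(type(piece).__name__, piece.moves())
```

Identity also shows up here: two Pawn objects with identical attribute values are still distinct objects, each referenced by its own handle.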
WHAT IS OO DEVELOPMENT?
□ Development refers to the software life cycle: Analysis, Design and
Implementation. The essence of OO Development is the identification and
organization of application concepts, rather than their final representation in a
programming language. It’s a conceptual process independent of programming
languages. OO development is fundamentally a way of thinking and not a
programming technique.
OO methodology
□ Here we present a process for OO development and a graphical notation for
representing OO concepts. The process consists of building a model of an
application and then adding details to it during design.
□ The methodology has the following stages
 System conception: Software development begins with business analysts or
users conceiving an application and formulating tentative requirements.
 Analysis: The analyst scrutinizes and rigorously restates the requirements
from the system conception by constructing models. The analysis model is a concise,
precise abstraction of what the desired system must do, not how it will be done.
 The analysis model has two parts-
 □ Domain Model- a description of the real-world objects reflected within the
system.
 □ Application Model- a description of parts of the application system itself
that are visible to the user.
 E.g., in the case of a stock broker application-
 Domain objects may include- stock, bond, trade & commission.
 Application objects might control the execution of trades and present the
results.
 System Design: The development teams devise a high-level strategy- The
System Architecture- for solving the application problem. The system designer
should decide what performance characteristics to optimize, choose a strategy of
attacking the problem, and make tentative resource allocations.
 Class Design: The class designer adds details to the analysis model in
accordance with the system design strategy. His focus is the data structures and
algorithms needed to implement each class.
 Implementation: Implementers translate the classes and relationships
developed during class design into a particular programming language, database or
hardware. During implementation, it is important to follow good software
engineering practice.
Three models
□ We use three kinds of models to describe a system from different view points.
1. Class Model—for the objects in the system & their relationships.
It describes the static structure of the objects in the system and their
relationships.
Class model contains class diagrams- a graph whose nodes are classes and arcs
are relationships among the classes.
2. State model—for the life history of objects.
It describes the aspects of an object that change over time. It specifies and
implements control with state diagrams-a graph whose nodes are states and whose
arcs are transitions between states caused by events.
3. Interaction Model—for the interaction among objects.
It describes how the objects in the system co-operate to achieve broader results.
This model starts with use cases that are then elaborated with sequence and activity
diagrams.
Use case – focuses on the functionality of a system – i.e., what a system does for
users.
Sequence diagrams – show the objects that interact and the time sequence of their
interactions.
Activity diagrams – elaborate important processing steps.

OO THEMES
Several themes pervade OO technology. A few are:
1. Abstraction
 Abstraction lets you focus on essential aspects of an application while
ignoring details, i.e., focusing on what an object is and does before deciding how to
implement it.
 It’s the most important skill required for OO development.
2. Encapsulation (information hiding)
 It separates the external aspects of an object (that are accessible to other
objects) from the internal implementation details (that are hidden from other objects)
 Encapsulation prevents portions of a program from becoming so
interdependent that a small change has massive ripple effects.
3. Combining data and behavior
 Callers of an operation need not consider how many implementations exist.
 In an OO system, the data structure hierarchy matches the operation
inheritance hierarchy (fig).

4. Sharing
 OO techniques provide sharing at different levels.
 Inheritance of both data structure and behavior lets sub classes share
common code.
 OO development not only lets you share information within an application,
but also offers the prospect of reusing designs and code on future projects.
5. Emphasis on the essence of an object
 OO development places a greater emphasis on data structure and a lesser
emphasis on procedure structure than functional-decomposition methodologies.
6. Synergy
 Identity, classification, polymorphism and inheritance characterize OO
languages.
THE THREE MODELS
1. Class Model: represents the static, structural, “data” aspects of a system.
 It describes the structure of objects in a system- their identity, their
relationships to other objects, their attributes, and their operations.
 Goal in constructing class model is to capture those concepts from the real
world that are important to an application.
 Class diagrams express the class model.
2. State Model: represents the temporal, behavioral, “control” aspects of a
system.
 State model describes those aspects of objects concerned with time and the
sequencing of operations – events that mark changes, states that define the context
for events, and the organization of events and states.
 State diagrams express the state model.
 Each state diagram shows the state and event sequences permitted in a
system for one class of objects.
 State diagrams refer to the other models.
 Actions and events in a state diagram become operations on objects in the
class model. References between state diagrams become interactions in the
interaction model.
3. Interaction model – represents the collaboration of individual objects, the
“interaction” aspects of a system.
 Interaction model describes interactions between objects – how individual
objects collaborate to achieve the behavior of the system as a whole.
 The state and interaction models describe different aspects of behavior, and
you need both to describe behavior fully.
 Use cases, sequence diagrams and activity diagrams document the interaction
model.

GENERALIZATION AND INHERITANCE


□ Generalization is the relationship between a class (the superclass) and one or
more variations of the class (the subclasses). Generalization organizes classes by
their similarities and differences, structuring the description of objects.
□ The superclass holds common attributes, operations and associations; the
subclasses add specific attributes, operations and associations. Each subclass is said
to inherit the features of its superclass.
□ There can be multiple levels of generalization.
□ Fig(a) and Fig(b) (given in the following page) shows examples of generalization.
□ Fig(a) – Example of generalization for equipment.
Each object inherits features from one class at each level of generalization.
□ UML convention used:
A large hollow arrowhead denotes generalization. The arrowhead points to the
superclass.
□ Fig(b) – inheritance for graphic figures.
The word written next to the generalization line in the diagram (i.e., dimensionality) is
a generalization set name. A generalization set name is an enumerated attribute that
indicates which aspect of an object is being abstracted by a particular generalization.
It is optional.
□ Use of generalization: Generalization has three purposes –
1. To support polymorphism: You can call an operation at the superclass
level, and the OO language compiler automatically resolves the call to the method
that matches the calling object’s class.
2. To structure the description of objects: i.e., to frame a taxonomy,
organizing objects on the basis of their similarities and differences.
3. To enable reuse of code: Reuse is more productive than repeatedly writing
code from scratch.
□ Note: The terms generalization, specialization and inheritance all refer to aspects
of the same idea.
Overriding features
□ A subclass may override a superclass feature by defining a feature with the same
name. The overriding feature (the subclass feature) refines and replaces the overridden
feature (the superclass feature).
□ Why override a feature?
 To specify behavior that depends on the subclass.
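The window example from earlier can illustrate overriding; a minimal sketch in which the method name is an assumption for illustration: the subclass defines a feature with the same name, which replaces the inherited one.

```python
class Window:
    def redraw(self):
        return "redraw the entire fixed contents"


class ScrollingWindow(Window):
    # Same name as the superclass feature: this definition overrides it.
    def redraw(self):
        return "redraw only the region visible at the current scroll offset"


# The call site is identical; the subclass method replaces the inherited one.
for w in (Window(), ScrollingWindow()):
    print(type(w).__name__, "->", w.redraw())
```

This is also how generalization supports polymorphism: the call is written against the superclass, and the matching subclass method is resolved automatically.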
Event-driven modeling
Event-driven modeling shows how a system responds to external and internal events. It is based on
the assumption that a system has a finite number of states and that events (stimuli) may cause a transition
from one state to another. The UML supports event-based modeling using state diagrams. State diagrams
show system states and events that cause transitions from one state to another. They do not show the flow of
data within the system but may include additional information on the computations carried out in each state.
In UML state diagrams, rounded rectangles represent system states. They may include a brief
description (following ‘do’) of the actions taken in that state. The labeled arrows represent stimuli that force
a transition from one state to another. You can indicate start and end states using filled circles, as in activity
diagrams.
From Figure 15, you can see that the system starts in a waiting state and responds initially to either
the full-power or the half-power button. Users can change their mind after selecting one of these and press
the other button. The time is set and, if the door is closed, the Start button is enabled. Pushing this button
starts the oven operation and cooking takes place for the specified time. This is the end of the cooking cycle
and the system returns to the waiting state.

Figure 15: State diagram of a microwave oven
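Since Figure 15 itself is not reproduced, the described behavior can be sketched as a transition table. The state and event names below are assumptions inferred from the text, not taken from the figure:

```python
# (state, event) -> next state; a finite-state model of the microwave oven.
TRANSITIONS = {
    ("waiting", "full_power"): "full_power_set",
    ("waiting", "half_power"): "half_power_set",
    ("full_power_set", "half_power"): "half_power_set",  # user changes mind
    ("half_power_set", "full_power"): "full_power_set",
    ("full_power_set", "set_time"): "time_set",
    ("half_power_set", "set_time"): "time_set",
    ("time_set", "door_closed"): "enabled",   # Start is enabled once door shut
    ("enabled", "start"): "operating",
    ("operating", "timeout"): "waiting",      # cooking cycle complete
}


def run(events, state="waiting"):
    for event in events:
        state = TRANSITIONS[(state, event)]  # undefined pairs raise KeyError
    return state


print(run(["full_power", "set_time", "door_closed", "start", "timeout"]))
# -> waiting (the system returns to the waiting state after cooking)
```

The table makes the finite-state assumption explicit: any (state, event) pair not listed is simply not a permitted transition.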

Data-driven modeling
Data-driven models show the sequence of actions involved in processing input data and generating
an associated output. They are particularly useful during the analysis of requirements as they can be used to
show end-to-end processing in a system. That is, they show the entire sequence of actions that take place
from an input being processed to the corresponding output, which is the system’s response.
In the 1970s, structured methods such as DeMarco’s Structured Analysis (DeMarco, 1978)
introduced data-flow diagrams (DFDs) as a way of illustrating the processing steps in a system. The UML
does not support data-flow diagrams as they were originally proposed and used for modeling data
processing. The reason for this is that DFDs focus on system functions and do not recognize system objects.
However, because data-driven systems are so common in business, UML 2.0 introduced activity diagrams,
which are similar to data-flow diagrams. For example, Figure 13 shows the chain of processing involved in
the insulin pump software. In this diagram, you can see the processing steps (represented as activities) and
the data flowing between these steps (represented as objects).
Model-driven architecture
Model-driven architecture is a model-focused approach to software design and implementation that
uses a sub-set of UML models to describe a system. Here, models at different levels of abstraction are
created.

The MDA method recommends that three types of abstract system model should be produced:
 A computation independent model (CIM) that models the important domain abstractions used in
the system. CIMs are sometimes called domain models. You may develop several different CIMs,
reflecting different views of the system.
 A platform independent model (PIM) that models the operation of the system without reference to
its implementation.
 Platform specific models (PSM) which are transformations of the platform independent model with
a separate PSM for each application platform.

Figure 17: MDA transformations


Executable UML
The fundamental notion behind model-driven engineering is that completely automated
transformation of models to code should be possible. To achieve this, you have to be able to construct
graphical models whose semantics are well defined. You also need a way of adding information to graphical
models about the ways in which the operations defined in the model are implemented. This is possible using
a subset of UML 2, called Executable UML or xUML.
To create an executable sub-set of UML, the number of model types has therefore been dramatically
reduced to three key model types:
 Domain models identify the principal concerns in the system. These are defined using UML class
diagrams that include objects, attributes, and associations.
 Class models, in which classes are defined, along with their attributes and operations.
 State models, in which a state diagram is associated with each class and is used to describe the
lifecycle of the class.
The dynamic behavior of the system may be specified declaratively using the object constraint language
(OCL) or may be expressed using UML’s action language. The action language is like a very high-level
programming language where you can refer to objects and their attributes and specify actions to be carried
out.

2.8 The Rational Unified Process


RUP is a good example of a hybrid process model. It brings together elements from all of the generic
process models, illustrates good practice in specification and design and supports prototyping and
incremental delivery.
The RUP recognizes that conventional process models present a single view of the process. In
contrast, the RUP is normally described from three perspectives:
 A dynamic perspective, which shows the phases of the model over time.
 A static perspective, which shows the process activities that are enacted.
 A practice perspective, which suggests good practices to be used during the process.

Dynamic Perspective
Figure 19 shows the phases in the RUP. These are:
 Inception: The goal of the inception phase is to establish a business case for the system. You should
identify all external entities (people and systems) that will interact with the system and define these
interactions. You then use this information to assess the contribution that the system makes to the
business. If this contribution is minor, then the project may be cancelled after this phase.

Figure 19: Phases in the Rational Unified Process

 Elaboration: The goals of the elaboration phase are to develop an understanding of the problem
domain, establish an architectural framework for the system, develop the project plan, and identify
key project risks.
 Construction: The construction phase involves system design, programming, and testing. Parts of the
system are developed in parallel and integrated during this phase.
 Transition: The final phase of the RUP is concerned with moving the system from the development
community to the user community and making it work in a real environment.
Object-oriented design using the UML
An object-oriented system is made up of interacting objects that maintain their own local state and
provide operations on that state. The representation of the state is private and cannot be accessed directly
from outside the object. Object-oriented design processes involve designing object classes and the
relationships between these classes. These classes define the objects in the system and their interactions.
When the design is realized as an executing program, the objects are created dynamically from these class
definitions.
Object-oriented systems are easier to change than systems developed using functional approaches.
Objects include both data and operations to manipulate that data. They may therefore be understood and
modified as stand-alone entities. Changing the implementation of an object or adding services should not
affect other system objects.
Because objects are associated with things, there is often a clear mapping between real-world entities (such
as hardware components) and their controlling objects in the system. This improves the understandability,
and hence the maintainability, of the design.
To develop a system design from concept to detailed, object-oriented design, there are several things that
you need to do:
 Understand and define the context and the external interactions with the system.
 Design the system architecture.

Implementation issues
The three different issues that need to be considered during software implementation are:
 Reuse: Most modern software is constructed by reusing existing components or systems. When you
are developing software, you should make as much use as possible of existing code.
 Configuration management: During the development process, many different versions of each
software component are created. If you don’t keep track of these versions in a configuration
management system, you are liable to include the wrong versions of these components in your
system.
 Host-target development: Production software does not usually execute on the same computer as
the software development environment. Rather, you develop it on one computer (the host system)
and execute it on a separate computer (the target system). The host and target systems are sometimes
of the same type but, often, they are completely different.

Reuse
From the 1960s to the 1990s, most new software was developed from scratch, by writing all code in
a high-level programming language. The only significant reuse of software was the reuse of functions and
objects in programming language libraries. Since then, an approach to development based around the
reuse of existing software has emerged and is now generally used for business systems, scientific software, and,
increasingly, in embedded systems engineering.
Software reuse is possible at a number of different levels:
 The abstraction level: At this level, you don’t reuse software directly but rather use knowledge of
successful abstractions in the design of your software.
 The object level: At this level, you directly reuse objects from a library rather than writing the code
yourself. To implement this type of reuse, you have to find appropriate libraries and discover if the
objects and methods offer the functionality that you need.
 The component level: Components are collections of objects and object classes that operate together
to provide related functions and services. You often have to adapt and extend the component by
adding some code of your own.
 The system level: At this level, you reuse entire application systems. This usually involves some
kind of configuration of these systems. This may be done by adding and modifying code or by using
the system’s own configuration interface.
Configuration management
 Version management: where support is provided to keep track of the different versions of software
components. Version management systems include facilities to coordinate development by several
programmers. They stop one developer overwriting code that has been submitted to the system by
someone else.

 System integration: where support is provided to help developers define what versions of
components are used to create each version of a system. This description is then used to build a
system automatically by compiling and linking the required components.
 Problem tracking: where support is provided to allow users to report bugs and other problems, and
to allow all developers to see who is working on these problems and when they are fixed.

Host-target development
Most software development is based on a host-target model. Software is developed on one computer
(the host), but runs on a separate machine (the target). More generally, we can talk about a development
platform and an execution platform. A platform is more than just hardware. It includes the installed
operating system plus other supporting software such as a database management system or, for development
platforms, an interactive development environment.
However, for distributed systems, you need to decide on the specific platforms where the components
will be deployed. Issues that you have to consider in making this decision are:
 The hardware and software requirements of a component: If a component is designed for a specific
hardware architecture, or relies on some other software system, it must obviously be deployed on a
platform that provides the required hardware and software support.
 The availability requirements of the system: High-availability systems may require components to
be deployed on more than one platform. This means that, in the event of platform failure, an
alternative implementation of the component is available.
 Component communications: If there is a high level of communications traffic between
components, it usually makes sense to deploy them on the same platform or on platforms that are
physically close to one another. This reduces communications latency, the delay between the time a
message is sent by one component and received by another.
Test-driven development
Test-driven development (TDD) is an approach to program development in which you interleave
testing and code development. Essentially, you develop the code incrementally, along with a test for that
increment. You don’t move on to the next increment until the code that you have developed passes its test.

The fundamental TDD process is shown in the figure below. The steps in the process are as follows:

Test-driven development

 Start by identifying the increment of functionality that is required. This should normally be small and
implementable in a few lines of code.
 Write a test for this functionality and implement this as an automated test. This means that the test
can be executed and will report whether it has passed or failed.
 Then run the test, along with all other tests that have been implemented. Initially, you have not
implemented the functionality so the new test will fail. This is deliberate as it shows that the test adds
something to the test set.
 Then implement the functionality and re-run the test. This may involve refactoring existing code to
improve it and adding new code to what’s already there.
 Once all tests run successfully, you move on to implementing the next chunk of functionality.
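One such increment can be sketched as follows. The function and its tests are illustrative assumptions, not from the text: the test class is written first as an automated test, fails against a missing implementation, and then is_leap_year is implemented until the suite passes.

```python
import unittest


def is_leap_year(year):
    """The small increment of functionality under development."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class TestIsLeapYear(unittest.TestCase):
    """Written before the implementation; it fails until the code is done."""
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_rule(self):
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2000))


# Run the suite; exit=False lets the script continue after reporting results.
unittest.main(argv=["tdd-sketch"], exit=False)
```

Re-running the whole suite after each increment is what gives TDD its regression-testing benefit: every earlier test still guards against new changes.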

Benefits of test-driven development are:


 Code coverage: In principle, every code segment that you write should have at least one associated
test. Therefore, you can be confident that all of the code in the system has actually been executed.
Code is tested as it is written, so defects are discovered early in the development process.
 Regression testing: A test suite is developed incrementally as a program is developed. You can
always run regression tests to check that changes to the program have not introduced new bugs.
 Simplified debugging: When a test fails, it should be obvious where the problem lies. The newly
written code needs to be checked and modified.
 System documentation: The tests themselves act as a form of documentation that describes what the
code should be doing. Reading the tests can make it easier to understand the code.

User testing
User or customer testing is a stage in the testing process in which users or customers provide input
and advice on system testing.
In practice, there are three different types of user testing:
1. Alpha testing, where users of the software work with the development team to test the software at the
developer’s site.
2. Beta testing, where a release of the software is made available to users to allow them to experiment and
to raise problems that they discover with the system developers.
3. Acceptance testing, where customers test a system to decide whether or not it is ready to be accepted
from the system developers and deployed in the customer environment.

There are six stages in the acceptance testing process, as shown in the figure below.
Software Evolution
Software development does not stop when a system is delivered but continues throughout the
lifetime of the system. After a system has been deployed, it inevitably has to change if it is to remain useful.
Business changes and changes to user expectations generate new requirements for the existing software.
Parts of the software may have to be modified to correct errors that are found in operation, to adapt it for
changes to its hardware and software platform, and to improve its performance or other non-functional
characteristics.
Software evolution is important because organizations have invested large amounts of money in their
software and are now completely dependent on these systems. Software evolution may be triggered by
changing business requirements, by reports of software defects, or by changes to other systems in a software
system’s environment.
You should, therefore, think of software engineering as a spiral process with requirements, design,
implementation, and testing going on throughout the lifetime of the system, as shown in the figure below. You
start by creating release 1 of the system. Once delivered, changes are proposed and the development of
release 2 starts almost immediately. In fact, the need for evolution may become obvious even before the
system is deployed so that later releases of the software may be under development before the current
version has been released.

A spiral model of development and evolution

Software reengineering
Reengineering may involve redocumenting the system, refactoring the system architecture,
translating programs to a modern programming language, and modifying and updating the structure and
values of the system’s data. The functionality of the software is not changed and, normally, you should try to
avoid making major changes to the system architecture.
There are two important benefits from reengineering rather than replacement:
 Reduced risk: There is a high risk in redeveloping business-critical software. Errors may be made in
the system specification or there may be development problems. Delays in introducing the new
software may mean that business is lost and extra costs are incurred.
 Reduced cost: The cost of reengineering may be significantly less than the cost of developing new
software. Ulrich (1990) quotes an example of a commercial system for which the reimplementation
costs were estimated at $50 million. The system was successfully reengineered for $12 million. I
suspect that, with modern software technology, the relative cost of reimplementation is probably
less than this but will still considerably exceed the costs of reengineering.
The figure below shows a general model of the reengineering process. The input to the process is a legacy
program and the output is an improved and restructured version of the same program. The activities in this
reengineering process are as follows:

The reengineering process


 Source code translationUsing a translation tool, the program is converted from an old programming
language to a more modern version of the same language or to a different language.
 Reverse engineeringThe program is analyzed and information extracted from it. This helps to
document its organization and functionality. Again, this process is usually completely automated.
 Program structure improvementThe control structure of the program is analyzed and modified to
make it easier to read and understand. This can be partially automated but some manual intervention
is usually required.
 Program modularizationRelated parts of the program are grouped together and, where appropriate,
redundancy is removed. In some cases, this stage may involve architectural refactoring (e.g., a
system that uses several different data stores may be refactored to use a single repository). This is a
manual process.
 Data reengineering: The data processed by the program is changed to reflect program changes. This
may mean redefining database schemas and converting existing databases to the new structure. You
should usually also clean up the data. This involves finding and correcting mistakes, removing
duplicate records, etc. Data that is no longer required in the future can often simply be removed.
Tools are available to support data reengineering.
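The activities above form an ordered pipeline, and each bullet also states how automatable the activity is. The sketch below is our own illustration of that ordering (it is not a real reengineering tool):

```python
# Ordered reengineering activities and the degree of automation stated in
# the bullets above. Purely illustrative.
REENGINEERING_STAGES = [
    ("source code translation", "automated"),
    ("reverse engineering", "automated"),
    ("program structure improvement", "partially automated"),
    ("program modularization", "manual"),
    ("data reengineering", "tool-supported"),
]

def describe_process():
    """Return the activities in the order they are applied to a legacy program."""
    return [f"{i}. {name} ({automation})"
            for i, (name, automation) in enumerate(REENGINEERING_STAGES, start=1)]

for line in describe_process():
    print(line)
```

In practice, a particular reengineering project applies these stages with whatever mix of tool support and manual work its legacy code allows.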
3.11 Legacy system management


Most organizations have a portfolio of legacy systems that they use, with a limited budget for
maintaining and upgrading these systems. They have to decide how to get the best return on their
investment. This involves making a realistic assessment of their legacy systems and then deciding on the
most appropriate strategy for evolving these systems. There are four strategic options:
 Scrap the system completely: This option should be chosen when the system is not making an
effective contribution to business processes. This commonly occurs when business processes have
changed since the system was installed and are no longer reliant on the legacy system.
 Leave the system unchanged and continue with regular maintenance: This option should be
chosen when the system is still required but is fairly stable and the system users make relatively few
change requests.
 Reengineer the system to improve its maintainability: This option should be chosen when the
system quality has been degraded by change and where new changes to the system are still being
proposed. This process may include developing new interface components so that the original system
can work with other, newer systems.
 Replace all or part of the system with a new system: This option should be chosen when factors,
such as new hardware, mean that the old system cannot continue in operation or where off-the-shelf
systems would allow the new system to be developed at a reasonable cost. In many cases, an
evolutionary replacement strategy can be adopted in which major system components are replaced
over time, with other components reused where possible.
Assessed in terms of system quality and business value, there are four clusters of systems:
 Low quality, low business value: Keeping these systems in operation will be expensive and the rate
of return to the business will be fairly small. These systems should be scrapped.
 Low quality, high business value: These systems are making an important business contribution so
they cannot be scrapped. However, their low quality means that it is expensive to maintain them.
These systems should be reengineered to improve their quality. They may be replaced, if a suitable
off-the-shelf system is available.
 High quality, low business value: These are systems that don’t contribute much to the business but
which may not be very expensive to maintain. It is not worth replacing these systems so normal
system maintenance may be continued if expensive changes are not required and the system
hardware remains in use. If expensive changes become necessary, the software should be scrapped.
 High quality, high business value: These systems have to be kept in operation. However, their high
quality means that you don’t have to invest in transformation or system replacement. Normal system
maintenance should be continued.
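The four clusters map directly to the strategies the text recommends. The sketch below is our own summary of that mapping, not part of the text itself:

```python
# Map the quality/business-value clusters described above to the
# recommended legacy system evolution strategy.

def legacy_strategy(high_quality: bool, high_business_value: bool) -> str:
    """Return the recommended strategy for a legacy system cluster."""
    if not high_quality and not high_business_value:
        return "scrap"                      # expensive to keep, little return
    if not high_quality and high_business_value:
        return "reengineer or replace"      # important, but costly to maintain
    if high_quality and not high_business_value:
        return "maintain while changes stay cheap"
    return "normal maintenance"             # high quality, high value

print(legacy_strategy(False, True))  # reengineer or replace
```

In a real portfolio assessment, quality and business value would of course be judged on a scale rather than as simple booleans.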
Project Planning
Project planning is one of the most important jobs of a software project manager.
As a manager, you have to break down the work into parts and assign these to project
team members, anticipate problems that might arise, and prepare tentative solutions to
those problems. The project plan, which is created at the start of a project, is used to
communicate how the work will be done to the project team and customers, and to help
assess progress on the project.
Project planning takes place at three stages in a project life cycle:
 At the proposal stage, when you are bidding for a contract to develop or provide a
software system. You need a plan at this stage to help you decide if you have the
resources to complete the work and to work out the price that you should quote to
a customer.
 During the project startup phase, when you have to plan who will work on the
project, how the project will be broken down into increments, how resources will
be allocated across your company, etc. Here, you have more information than at
the proposal stage, and can therefore refine the initial effort estimates that you
have prepared.
 Periodically throughout the project, when you modify your plan in light of
experience gained and information from monitoring the progress of the work. You
learn more about the system being implemented and capabilities of your
development team. This information allows you to make more accurate estimates
of how long the work will take. Furthermore, the software requirements are likely
to change and this usually means that the work breakdown has to be altered and
the schedule extended.
4.3 Software pricing
In principle, the price of a software product to a customer is simply the cost of
development plus profit for the developer. In practice, however, the relationship between
the project cost and the price quoted to the customer is not usually so simple.

Factors affecting software pricing

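The "in principle" relationship above can be written as a one-line calculation. The sketch below is a minimal illustration; the effort, cost rate, and margin figures are invented, and a real quote would be adjusted by the pricing factors the chapter discusses:

```python
# Minimal sketch of "price = development cost + profit". All figures are
# invented for illustration; real prices reflect market and contractual
# factors, not just cost.

def software_price(effort_person_months: float,
                   cost_per_person_month: float,
                   profit_margin: float = 0.20) -> float:
    """Quote price as cost plus a profit margin (a fraction of cost)."""
    cost = effort_person_months * cost_per_person_month
    return cost * (1.0 + profit_margin)

# e.g., 24 person-months at $8,000 per person-month with a 25% margin:
print(software_price(24, 8_000, 0.25))  # 240000.0
```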
4.4 Plan-driven development

Plan-driven or plan-based development is an approach to software engineering
where the development process is planned in detail. A project plan is created that records
the work to be done, who will do it, the development schedule, and the work products.
Managers use the plan to support project decision making and as a way of measuring
progress. Plan-driven development is based on engineering project management
techniques and can be thought of as the ‘traditional’ way of managing large software
development projects. This contrasts with agile development, where many decisions
affecting the development are delayed and made later, as required, during the
development process.
Project plans
In a plan-driven development project, a project plan sets out the resources
available to the project, the work breakdown, and a schedule for carrying out the work.
The plan should identify risks to the project and the software under development, and the
approach that is taken to risk management. Although the specific details of project plans
vary depending on the type of project and organization, plans normally include the
following sections:
 Introduction: This briefly describes the objectives of the project and sets out the
constraints (e.g., budget, time, etc.) that affect the management of the project.
 Project organization: This describes the way in which the development team is
organized, the people involved, and their roles in the team.
 Risk analysis: This describes possible project risks, the likelihood of these risks
arising, and the risk reduction strategies that are proposed.
 Hardware and software resource requirements: This specifies the hardware and
support software required to carry out the development. If hardware has to be
bought, estimates of the prices and the delivery schedule may be included.
 Work breakdown: This sets out the breakdown of the project into activities and
identifies the milestones and deliverables associated with each activity. Milestones
are key stages in the project where progress can be assessed; deliverables are work
products that are delivered to the customer.
The planning process
Project planning is an iterative process that starts when you create an initial
project plan during the project startup phase. The figure below is a UML activity
diagram showing a typical workflow for a project planning process.

The project planning process

Plan changes are inevitable. As more information about the system and the
project team becomes available during the project, you should regularly revise the plan to
reflect requirements, schedule, and risk changes. Changing business goals also leads to
changes in project plans. As business goals change, this could affect all projects, which
may then have to be replanned.
At the beginning of a planning process, you should assess the constraints affecting
the project. These constraints are the required delivery date, staff available, overall
budget, available tools, and so on. In conjunction with this, you should also identify the
project milestones and deliverables. Milestones are points in the schedule against which
you can assess progress, for example, the handover of the system for testing. Deliverables
are work products that are delivered to the customer (e.g., a requirements document for
the system).
The outcome of a review may be a decision to cancel a project. This may be a
result of technical or managerial failings but, more often, is a consequence of external
changes that affect the project. The development time for a large software project is often
several years. During that time, the business objectives and priorities inevitably change.
These changes may mean that the software is no longer required or that the original
project requirements are inappropriate. Management may then decide to stop software
development or to make major changes to the project to reflect the changes in the
organizational objectives.
Project scheduling
Project scheduling is the process of deciding how the work in a project will be
organized as separate tasks, and when and how these tasks will be executed. You estimate
the calendar time needed to complete each task, the effort required, and who will work on
the tasks that have been identified. You also have to estimate the resources needed to
complete each task, such as the disk space required on a server, the time required on
specialized hardware, such as a simulator, and what the travel budget will be. In terms of
the planning stages that I discussed in the introduction of this chapter, an initial project
schedule is usually created during the project startup phase. This schedule is then refined
and modified during development planning.
Scheduling in plan-driven projects (as shown in the figure below) involves breaking
down the total work involved in a project into separate tasks and estimating the time
required to complete each task. Tasks should normally last at least a week, and no longer
than 2 months. Finer subdivision means that a disproportionate amount of time must be
spent on replanning and updating the project plan. The maximum amount of time for any
task should be around 8 to 10 weeks. If it takes longer than this, the task should be
subdivided for project planning and scheduling.

The project scheduling process

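The rule of thumb above (tasks of at least a week, subdivided once they exceed roughly 8 to 10 weeks) can be expressed as a simple check. This is our own sketch, not a standard scheduling tool:

```python
# Classify a planned task duration against the scheduling rules of thumb:
# under a week is too fine-grained; beyond ~10 weeks it should be split.

def check_task_duration(weeks: float, max_weeks: float = 10.0) -> str:
    """Flag task durations that fall outside the recommended range."""
    if weeks < 1.0:
        return "too short: merging avoids disproportionate replanning effort"
    if weeks > max_weeks:
        return "too long: subdivide for project planning and scheduling"
    return "ok"

print(check_task_duration(12))  # too long: subdivide for project planning and scheduling
```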
4.7 The COCOMO II model
Several similar models have been proposed to help estimate the effort, schedule,
and costs of a software project. The model that I discuss here is the COCOMO II model.
This is an empirical model that was derived by collecting data from a large number of
software projects. These data were analyzed to discover the formulae that were the best fit
to the observations. These formulae linked the size of the system and product, project and
team factors to the effort to develop the system. COCOMO II is a well-documented and
nonproprietary estimation model.
The submodels that are part of the COCOMO II model (shown in the figure below) are:

COCOMO estimation models

 An application-composition model: This models the effort required to develop
systems that are created from reusable components, scripting, or database
programming. Software size estimates are based on application points, and a
simple size/productivity formula is used to estimate the effort required.
 An early design model: This model is used during early stages of the system
design after the requirements have been established. The estimate is based on the
standard estimation formula that I discussed in the introduction, with a simplified
set of seven multipliers. Estimates are based on function points, which are then
converted to number of lines of source code.
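These submodels share the general COCOMO II form, effort (person-months) = A × Size^B × M, where Size is in thousands of lines of code (KLOC), B reflects scale factors, and M is the product of effort multipliers (seven in the early design model). The sketch below uses the published A = 2.94 calibration constant but an assumed nominal exponent, so treat the numbers as indicative only:

```python
# General COCOMO II effort formula: PM = A * size^B * M.
# A = 2.94 is the COCOMO II.2000 calibration constant; the default
# exponent here is an assumed nominal value (the real B is computed
# from the model's scale factors).

def cocomo_effort(size_kloc: float, exponent_b: float = 1.10,
                  multipliers: tuple = ()) -> float:
    """Estimate development effort in person-months."""
    m = 1.0
    for em in multipliers:      # product of the effort multipliers
        m *= em
    return 2.94 * (size_kloc ** exponent_b) * m

# e.g., a nominal 50 KLOC system (all multipliers = 1.0):
print(round(cocomo_effort(50)))  # about 217 person-months
```

Note how the exponent makes the estimate diseconomic: doubling the size more than doubles the estimated effort.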