Chapter 2 Notes
Further, this approach usually turns out to be a recipe for project failure when used to develop non-
trivial programs requiring team effort. In contrast to the build and fix style, the software engineering
approaches emphasize software development through a well-defined and ordered set of activities. These
activities are graphically modelled (represented) as well as textually described, and are variously called a
software life cycle model, software development life cycle (SDLC) model, and software development process
model. Several life cycle models have so far been proposed. However, in this chapter we confine our
attention to only a few important and commonly used ones.
In this chapter, we first discuss a few basic concepts associated with life cycle models. Subsequently,
we discuss the important activities that have been prescribed to be carried out in the classical waterfall model.
This is intended to provide an insight into the activities that are carried out as part of every life cycle model. In
fact, the classical waterfall model can be considered as a basic model and all other life cycle models as
extensions of this model to cater to specific project situations. After discussing the waterfall model, we discuss
a few derivatives of this model. Subsequently we discuss the spiral model that generalises various life cycle
models. Finally, we discuss a few recently proposed life cycle models that are categorized under the umbrella
term agile model. Of late, agile models are finding increasing acceptance among developers and researchers.
The genesis of the agile model can be traced to the radical changes to the types of project that are
being undertaken at present, rather than to any radical innovations to the life cycle models themselves. The
projects have changed from large, multi-year product development projects to small service projects.
In this section, we present a few basic concepts concerning the life cycle models.
It is well known that all living organisms undergo a life cycle. For example, when a seed is planted, it
germinates, grows into a full tree, and finally dies. Based on this concept of a biological life cycle, the term
software life cycle has been defined to imply the different stages (or phases) over which a software evolves
from an initial customer request for it, to a fully developed software, and finally to a stage where it is no longer
useful to any user, and then it is discarded. As we have already pointed out, the life cycle of every software
starts with a request for it by one or more customers. At this stage, the customers are usually not clear about
all the features that would be needed, nor can they completely describe the identified features in concrete
terms; they can only vaguely describe what is needed. This stage where the customer feels a need for the
software and forms rough ideas about the required features is known as the inception stage. Starting with the
inception stage, a software evolves through a series of identifiable stages (also called phases) on account of the
development activities carried out by the developers, until it is fully developed and is released to the
customers.
Once installed and made available for use, the users start to use the software. This signals the start of
the operation (also called maintenance) phase. As the users use the software, they not only request fixes for
any failures that they might encounter, but also continually suggest several improvements and modifications
to the software. Thus, the maintenance phase usually involves continually making changes to the software to
accommodate the bug-fix and change requests from the users. The operation phase is usually the longest of
all phases and constitutes the useful life of a software. Finally, the software is retired when the users no
longer find it useful, due to reasons such as a changed business scenario, availability of new software with
improved features and working, changed computing platforms, etc. This forms the essence of
the life cycle of every software. Based on this description, we can define the software life cycle as follows:
The life cycle of a software represents the series of identifiable stages through which it evolves
during its life time.
With this knowledge of a software life cycle, we discuss the concept of a software life cycle model and
explore why it is necessary to follow a life cycle model in professional software development environments.
In any systematic software development scenario, certain well-defined activities need to be performed
by the development team and possibly by the customers as well, for the software to evolve from one stage in
its life cycle to the next. For example, for a software to evolve from the requirements specification stage to the
design stage, the developers need to elicit requirements from the customers, analyse those requirements, and
formally document the requirements in the form of an SRS document.
A software development life cycle (SDLC) model (also called software life cycle model and software
development process model) describes the different activities that need to be carried out for the software to
evolve in its life cycle. Throughout our discussion, we shall use the terms software development life cycle
(SDLC) and software development process interchangeably. However, some authors distinguish an SDLC
from a software development process. In their usage, a software development process describes the life cycle
activities more precisely and elaborately, as compared to an SDLC. Also, a development process may not only
describe the various activities that are carried out over the life cycle, but also prescribe a specific
methodology for carrying out the activities and recommend the specific documents and other artifacts that should be produced at
the end of each phase. In this sense, the term SDLC can be a more generic term, as compared to the
development process and several development processes may fit the same SDLC.
An SDLC is represented graphically by drawing various stages of the life cycle and showing the
transitions among the phases. This graphical model is usually accompanied by a textual description of various
activities that need to be carried out during a phase before that phase can be complete. In simple words, we can
define an SDLC as follows:
An SDLC graphically depicts the different phases through which a software evolves. It is usually
accompanied by a textual description of the different activities that need to be carried out during each
phase.
Process versus methodology
Though the terms process and methodology are at times used interchangeably, there is a subtle
difference between the two. First, the term process has a broader scope and addresses either all the activities
taking place during software development, or certain coarse-grained activities such as design (e.g., the design
process), testing (the test process), etc. Further, a software process not only identifies the specific activities that
need to be carried out, but may also prescribe a certain methodology for carrying out each activity. For example,
a design process may recommend that in the design stage, the high-level design activity be carried out using
Hatley and Pirbhai’s structured analysis and design methodology. A methodology, on the other hand,
prescribes a set of steps for carrying out a specific life cycle activity. It may also include the rationale and
philosophical assumptions behind the set of steps through which the activity is accomplished.
A software development process has a much broader scope as compared to a software development
methodology. A process usually describes all the activities starting from the inception of a software to its
maintenance and retirement stages, or at least a chunk of activities in the life cycle. It also recommends
specific methodologies for carrying out each activity. A methodology, in contrast, describes the steps to
carry out only a single or at best a few individual activities.
The primary advantage of using a development process is that it encourages development of software
in a systematic and disciplined manner. Adhering to a process is especially important to the development of
professional software needing team effort. When software is developed by a team rather than by an individual
programmer, use of a life cycle model becomes indispensable
for successful completion of the project.
Software development organizations have realized that adherence to a suitable life cycle model
helps to produce good quality software and that helps minimize the chances of time and cost overruns.
Suppose a single programmer is developing a small program. For example, a student may be
developing code for a classroom assignment. The student might succeed even when he does not strictly follow
a specific development process and adopts a build and fix style of development. However, it is a different ball
game when a professional software is being developed by a team of programmers. Let us now understand the
difficulties that may arise if a team does not use any development process, and the team members are given
complete freedom to develop their assigned part of the software as per their own discretion. Several types of
problems may arise. We illustrate one of the problems using an example. Suppose a software development
problem has been divided into several parts and these parts are assigned to the team members. From then on,
suppose the team members are allowed the freedom to develop the parts assigned to them in whatever way
they like. It is possible that one member might start writing the code for his part while making assumptions
about the input results required from the other parts, another might decide to prepare the test documents first,
and some other developer might start to carry out the design for the part assigned to him. In this case, severe
problems can arise in interfacing the different parts and in managing the overall development. Therefore, ad
hoc development turns out to be a sure way to have a failed project. Believe it or not, this is exactly what has
caused many project failures in the past!
When a software is developed by a team, it is necessary to have a precise understanding among the
team members as to when to do what. In the absence of such an understanding, each member might at any
time do whatever activity he feels like doing. This would be an open invitation to developmental chaos and
project failure. The use of a suitable life cycle model is crucial to the successful completion of a team-based
development project. But do we need an SDLC model for developing a small program? In this context, we
need to distinguish between programming-in-the-small and programming-in-the-large.
It is not enough for an organisation to just have a well-defined development process; the
development process also needs to be properly documented. To understand the reason for this, let us consider a
development organisation that does not document its development process. In this case, its developers develop
only an informal understanding of the development process. An informal understanding of the development
process among the team members can create several problems during development. We have identified a few
important problems that may crop up when a development process is not adequately documented. Those
problems are as follows:
A documented process model ensures that every activity in the life cycle is accurately defined. Also,
wherever necessary the methodologies for carrying out the respective activities are described. Without
documentation, the activities and their ordering tend to be loosely defined, leading to confusion and
misinterpretation by different teams in the organisation. For example, code reviews may informally and
inadequately be carried out since there is no documented methodology as to how the code review
should be done. Another difficulty is that for loosely defined activities, the developers tend to use their
subjective judgments. As an example, unless it is explicitly prescribed, the team members would
subjectively decide as to whether the test cases should be designed just after the requirements phase,
after the design phase, or after the coding phase. Also, they would debate whether the test cases should
be documented at all, and the rigour with which they should be documented.
An undocumented process gives a clear indication to the members of the development teams about the
lack of seriousness on the part of the management of the organisation about following the process.
Therefore, an undocumented process serves as a hint to the developers to loosely follow the process.
The symptoms of an undocumented process are easily visible: designs are shabbily done, reviews are
not carried out rigorously, etc.
A project team might often have to tailor a standard process model for use in a specific project. It is
easier to tailor a documented process model when it is required to modify certain activities or phases of
the life cycle. For example, consider a project situation that requires the testing activities to be
outsourced to another organisation. In this case, a documented process model would help to identify
where exactly the required tailoring should occur.
A documented process model, as we discuss later, is a mandatory requirement of the modern quality
assurance standards such as ISO 9000 and SEI CMM. This means that unless a software organisation
has a documented process, it would not qualify for accreditation with any of the quality standards. In
the absence of a quality certification for the organisation, the customers would be suspicious of its
capability of developing quality software and the organisation might find it difficult to win tenders for
software development.
A documented development process forms a common understanding of the activities to be carried
out among the software developers and helps them to develop software in a systematic and disciplined
manner. A documented development process model, besides preventing the misinterpretations that might
occur when the development process is not adequately documented, also helps to identify inconsistencies,
redundancies, and omissions in the development process.
Nowadays, good software development organisations normally document their development process in
the form of a booklet. They expect newly recruited developers to first master their software development
process during a short induction training.
A good SDLC, besides clearly identifying the different phases in the life cycle, should unambiguously
define the entry and exit criteria for each phase. The phase entry (or exit) criteria are usually expressed as a
set of conditions that need to be satisfied for the phase to start (or to complete). As an example, the phase exit
criteria for the software requirements specification phase can be that the software requirements specification
(SRS) document is ready, has been reviewed internally, and has been reviewed and approved by the
customer. Only after these criteria are satisfied can the next phase start.
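The notion of phase exit criteria can be made concrete by treating each criterion as a predicate that must hold before the phase is declared complete. The following Python sketch illustrates the idea; the criterion names and the checks themselves are hypothetical, introduced only for illustration:

```python
# Sketch: phase exit criteria modelled as predicates (all names hypothetical).
# A phase may be closed only when every one of its exit criteria holds.

def exit_criteria_met(criteria):
    """Return True only when every exit criterion is satisfied."""
    return all(check() for check in criteria.values())

# Hypothetical exit criteria for the requirements specification phase.
srs_phase_criteria = {
    "SRS document ready":        lambda: True,
    "internal review done":      lambda: True,
    "customer approval granted": lambda: False,  # still pending
}

print(exit_criteria_met(srs_phase_criteria))  # False: approval still pending
```

Making the criteria explicit in this way removes the subjectivity of deciding when a phase is over: the phase is complete exactly when every check passes, not when a developer feels it is.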
If the entry and exit criteria for the various phases are not well-defined, that would leave enough
scope for ambiguity in starting and ending the phases, and cause a lot of confusion among the developers.
Sometimes they might prematurely stop the activities in a phase, and at other times they might continue
working on a phase long after it should have been over. The decision regarding whether a phase is complete
or not becomes subjective, and it becomes difficult for the project manager to accurately tell how much the
development has progressed. The developers might close the activities of a phase much before they are
complete, giving a false impression of rapid progress, making it very difficult for the project manager to
determine the exact status of development and track the progress of the project. This leads to a problem
commonly identified as the 99 per cent complete syndrome. This syndrome appears when the software
project manager has no definite way of assessing the progress of a project; the optimistic team members feel
that their work is 99 per cent complete even when it is far from completion, making all projections made by
the project manager about the project completion time highly inaccurate.
The waterfall model and its derivatives were extremely popular in the 1970s and are still heavily
used across many development projects. The waterfall model is possibly the most obvious and intuitive way in
which software can be developed through team effort. We can think of the waterfall model as a generic model
that has been extended in many ways for catering to certain specific software development situations to realise
all other software life cycle models. For this reason, after discussing the classical and iterative waterfall
models, we discuss its various extensions.
The classical waterfall model is intuitively the most obvious way to develop software. It is simple but
idealistic. In fact, it is hard to put this model into use in any non-trivial software development project. One
might wonder if this model is hard to use in practical development projects, then why study it at all? The
reason is that all other life cycle models can be thought of as being extensions of the classical waterfall model.
Therefore, it makes sense to first understand the classical waterfall model, to be able to develop a proper
understanding of other life cycle models. Besides, we shall see later in this text that this model, though not
used as-is for software development, is implicitly used while documenting software.
The classical waterfall model divides the life cycle into a set of phases as shown in Figure 2.1. It can be
easily observed from this figure that the diagrammatic representation of the classical waterfall model
resembles a multi-level waterfall. This resemblance justifies the name of the model.
The different phases of the classical waterfall model have been shown in Figure 2.1. As shown in
Figure 2.1, the different phases are—feasibility study, requirements analysis and specification, design, coding
and unit testing, integration and system testing, and maintenance. The phases starting from the feasibility study
to the integration and system testing phase are known as the development phases. A software is developed
during the development phases, and at the completion of the development phases, the software is delivered to
the customer. After the delivery of software, customers start to use the software signaling the commencement
of the operation phase. As the customers start to use the software, changes to it become necessary on account
of bug fixes and feature extensions, causing maintenance works to be undertaken. Therefore, the last phase is
also known as the maintenance phase of the life cycle. It needs to be kept in mind that some textbooks use a
different number of phases and different names for them.
An activity that spans all phases of software development is project management. Since it spans the
entire project duration, no specific phase is named after it. Project management, nevertheless, is an important
activity in the life cycle and deals with managing the software development and maintenance activities.
In the waterfall model, different life cycle phases typically require relatively different amounts of
effort from the development team. The relative amounts of effort spent on different phases for a
typical software has been shown in Figure 2.2. Observe from Figure 2.2 that among all the life cycle phases,
the maintenance phase normally requires the maximum effort. On the average, about 60 per cent of the total
effort put in by the development team in the entire life cycle is spent on the maintenance activities alone.
However, among the development phases, the integration and system testing phase requires the
maximum effort in a typical development project. In the following subsection, we briefly describe the
activities that are carried out in the different phases of the classical waterfall model.
Feasibility study
The focus of the feasibility study stage is to determine whether it would be financially and technically
feasible to develop the software. The feasibility study involves carrying out several activities such as collection
of basic information relating to the software such as the different data items that would be input to the system,
the processing required to be carried out on these data, the output data required to be produced by the system,
as well as various constraints on the development. These collected data are analysed to arrive at the
following:
Development of an overall understanding of the problem:
It is necessary to first develop an overall understanding of what the customer requires to be developed.
For this, only the important requirements of the customer need to be understood and the details of various
requirements such as the screen layouts required in the graphical user interface (GUI), specific formulas or
algorithms required for producing the required results, and the database schema to be used are ignored.
Formulation of the various possible strategies for solving the problem:
In this activity, various possible high-level solution schemes to the problem are determined. For
example, a solution in a client-server framework and a standalone application framework may be explored.
Evaluation of the different solution strategies:
The different identified solution schemes are analysed to evaluate their benefits and shortcomings.
Such evaluation often requires making approximate estimates of the resources required, cost of development,
and development time required. The different solutions are compared based on the estimations that have been
worked out. Once the best solution is identified, all activities in the later phases are carried out as per this
solution. At this stage, it may also be determined that none of the solutions is feasible due to high cost,
resource constraints, or some technical reasons. This scenario would, of course, require the project to be
abandoned. We can summarise the outcome of the feasibility study phase by noting that, besides deciding
whether to take up a project or not, very high-level decisions regarding the solution strategy are also made at
this stage. Therefore, the feasibility study is a very crucial stage in software development. The following is a case
study of the feasibility study undertaken by an organisation. It is intended to give a feel of the activities and
issues involved in the feasibility study phase of a typical software project.
According to this scheme, each mine site would deduct SPF installments from each miner every month and
deposit the same to the central special provident fund commissioner (CSPFC). The CSPFC will maintain all details
regarding the SPF installments collected from the miners.
GMC Ltd. requested a reputed software vendor Adventure Software Inc. to undertake the task of developing
the software for automating the maintenance of SPF records of all employees. GMC Ltd. has realised that besides
saving manpower on bookkeeping work, the software would help in speedy settlement of claim cases. GMC Ltd.
indicated that it can at best afford Rs. 1 million for this software to be developed and installed.
Adventure Software Inc. deputed their project manager to carry out the feasibility study. The project manager
held discussions with the top managers of GMC Ltd. to get an overview of the project. He also discussed the
project details with the field PF officers at various mine sites. The project manager identified two broad
approaches to solve the problem. One is to have a central database which would be accessed and updated via a
satellite connection to various mine sites. The other approach is to have local databases at each mine site and to
update the central database periodically through a dial-up connection. This periodic update can be done on a daily or
hourly basis depending on the delay acceptable to GMC Ltd. in invoking various functions of the software. He found
that the second approach is very affordable and more fault tolerant as the local mine sites can operate even when the
communication link temporarily fails. In this approach, when a link fails, only the update of the central database gets
delayed. Whereas in the first approach, all SPF work gets stalled at a mine site for the entire duration of link failure.
The project manager quickly analysed the overall database functionalities required, the user interface issues, and the
software handling communication with the mine sites. From this analysis, he estimated the approximate cost to
develop the software. He found that a solution involving maintaining local databases at the mine sites and periodically
updating a central database is financially and technically feasible. The project manager discussed this solution with
the president of GMC Ltd., who indicated that the proposed solution would be acceptable to them.
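The kind of comparison the project manager carried out can be sketched as a simple screening of candidate solution strategies against the customer's budget. The strategy labels and all cost estimates below are hypothetical illustrations, not figures from the case study:

```python
# Sketch: feasibility screening of candidate solution strategies.
# All estimated costs are hypothetical illustrative figures.

budget = 1_000_000  # the customer's stated budget (Rs.)

candidates = {
    "central DB over satellite links":      {"estimated_cost": 1_500_000},
    "local DBs with periodic dial-up sync": {"estimated_cost": 800_000},
}

# Keep only strategies the customer can afford, then pick the cheapest.
feasible = {name: c for name, c in candidates.items()
            if c["estimated_cost"] <= budget}
best = min(feasible, key=lambda name: feasible[name]["estimated_cost"])
print(best)  # local DBs with periodic dial-up sync
```

A real feasibility study would, of course, weigh fault tolerance, acceptable update delays, and technical risk alongside cost, but the essence is the same: estimate, compare, and either select a strategy or conclude that none is feasible.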
Requirements analysis and specification
The aim of the requirements analysis and specification phase is to understand the exact requirements of
the customer and to document them properly. This phase consists of two distinct activities, namely
requirements gathering and analysis, and requirements specification. In the following subsections, we give an
overview of these two activities:
The goal of the requirements gathering activity is to collect all relevant information regarding the
software to be developed from the customer with a view to clearly understand the requirements. For this, first
requirements are gathered from the customer and then the gathered requirements are analysed. The goal of the
requirements analysis activity is to weed out the incompleteness and inconsistencies in these gathered
requirements. Note that an inconsistent requirement is one in which some part of the requirement contradicts
some other part. On the other hand, an incomplete requirement is one in which some parts of the actual
requirements have been omitted.
Requirements specification:
After the requirement gathering and analysis activities are complete, the identified requirements are
documented. This is called a software requirements specification (SRS) document. The SRS document is
written using end-user terminology. This makes the SRS document understandable to the customer. Therefore,
understandability of the SRS document is an important issue. The SRS document normally serves as a contract
between the development team and the customer. Any future dispute between the customer and the developers
can be settled by examining the SRS document. The SRS document is therefore an important document which
must be thoroughly understood by the development team and reviewed jointly with the customer. The SRS
document not only forms the basis for carrying out all the development activities, but several documents such
as users’ manuals, system test plan, etc. are prepared directly based on it. In Chapter 4, we examine the
requirements analysis activity and various issues involved in developing a good SRS document in more detail.
Design
The goal of the design phase is to transform the requirements specified in the SRS document into a
structure that is suitable for implementation in some programming language. In technical terms, during the
design phase the software architecture is derived from the SRS document. Two distinctly different design
approaches are popularly being used at present—the procedural and object-oriented design approaches. In the
following, we briefly discuss the essence of these two approaches. These two approaches are discussed in
detail in Chapters 6, 7, and 8.
Procedural design approach
The traditional design approach is in use in many software development projects at the present time.
This traditional design technique is based on the data flow-oriented design approach. It consists of two
important activities: first, structured analysis of the requirements specification is carried out, where the detailed
structure of the problem is examined. This is followed by a structured design step where the results of
structured analysis are transformed into the software design.
During structured analysis, the functional requirements specified in the SRS document are decomposed
into subfunctions and the dataflow among these subfunctions is analysed and represented diagrammatically in
the form of DFDs. The DFD technique is discussed in Chapter 6. Structured design is undertaken once the
structured analysis activity is complete. Structured design consists of two main activities—architectural design
(also called high-level design) and detailed design (also called low-level design). High-level design involves
decomposing the system into modules and representing the interfaces and the invocation relationships among
the modules. A high-level software design is sometimes referred to as the software architecture. During the
detailed design activity, internals of the individual modules such as the data structures and algorithms of the
modules are designed and documented.
Object-oriented design (OOD) approach
In this technique, various objects that occur in the problem domain and the solution domain are first
identified and the different relationships that exist among these objects are identified. The object structure is
further refined to obtain the detailed design. The OOD approach is credited to have several benefits such as
lower development time and effort, and better maintainability of the software. The object-oriented design
technique is discussed in Chapters 7 and 8.
Coding and unit testing
The purpose of the coding and unit testing phase is to translate a software design into source code and
to ensure that individually each function is working correctly. The coding phase is also sometimes called the
implementation phase since the design is implemented into a workable solution in this phase. Each component
of the design is implemented as a program module. The end-product of this phase is a set of program modules
that have been individually unit tested. The main objective of unit testing is to determine the correct working
of the individual modules. The specific activities carried out during unit testing include designing test cases,
testing, debugging to fix problems, and management of test cases. We shall discuss the coding and unit testing
techniques in Chapter 10.
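As a minimal illustration of unit testing, a single module function can be exercised in isolation with a few test cases. The function below is hypothetical (a simplified SPF-style deduction, not a formula from the text), and serves only to show the shape of a unit test:

```python
# Sketch: unit testing one module function in isolation.
# monthly_spf_installment and its 8% rate are hypothetical illustrations.
import unittest

def monthly_spf_installment(basic_pay, rate=0.08):
    """Compute a monthly SPF deduction from basic pay (illustrative)."""
    if basic_pay < 0:
        raise ValueError("basic pay cannot be negative")
    return round(basic_pay * rate, 2)

class TestMonthlySpfInstallment(unittest.TestCase):
    def test_typical_pay(self):
        # Normal input: 8% of 10,000 is 800.0.
        self.assertEqual(monthly_spf_installment(10_000), 800.0)

    def test_negative_pay_rejected(self):
        # Invalid input must be rejected, not silently computed.
        with self.assertRaises(ValueError):
            monthly_spf_installment(-1)

if __name__ == "__main__":
    unittest.main()
```

Each program module would carry its own such tests, so that by the end of this phase every module has been individually verified before integration begins.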
Integration and system testing
Integration of different modules is undertaken soon after they have been coded and unit tested. During
the integration and system testing phase, the different modules are integrated in a planned manner. Various
modules making up a software are almost never integrated in one shot (can you guess the reason for this?).
Integration of the various modules is normally carried out incrementally over several steps. During each
integration step, previously planned modules are added to the partially integrated system and the resultant
system is tested. Finally, after all the modules have been successfully integrated and tested, the full working
system is obtained. System testing is carried out on this fully working system.
Integration testing is carried out to verify that the interfaces among different units are working satisfactorily.
On the other hand, the goal of system testing is to ensure that the developed system conforms to the
requirements that have been laid out in the SRS document.
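The incremental scheme described above can be pictured as repeatedly adding the next planned module to a partial system and re-testing the interfaces at each step. In this sketch the module names and the integration test are placeholders:

```python
# Sketch: incremental integration of unit-tested modules (names hypothetical).

def integration_test(partial_system):
    """Stand-in for exercising the interfaces of a partially built system."""
    return True  # assume the interfaces check out at each step

integration_order = ["database", "business_logic", "user_interface"]

partial_system = []
for module in integration_order:
    partial_system.append(module)            # add the next planned module
    assert integration_test(partial_system)  # test after every integration step

# Only the fully integrated system then undergoes system testing.
print(partial_system == integration_order)  # True
```

Testing after every step is the point of the exercise: when an interface fault appears, it is almost certainly in the module just added, which is far easier to localise than a fault surfacing in a one-shot "big bang" integration.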
Maintenance
The total effort spent on maintenance of a typical software during its operation phase is much more
than that required for developing the software itself. Many studies carried out in the past confirm this and
indicate that the ratio of relative effort of developing a typical software product and the total effort spent on its
maintenance is roughly 40:60. Maintenance is required in the following three types of situations:
Corrective maintenance: This type of maintenance is carried out to correct errors that were not
discovered during the product development phase.
Perfective maintenance: This type of maintenance is carried out to improve the performance of
the system, or to enhance the functionalities of the system based on customer’s requests.
Adaptive maintenance: Adaptive maintenance is usually required for porting the software to work
in a new environment. For example, porting may be required to get the software to work on a new
computer platform or with a new operating system.
Various maintenance activities have been discussed in more detail in Chapter 13.
The classical waterfall model is a very simple and intuitive model. However, it suffers from several
shortcomings. Let us identify some of the important shortcomings of the classical waterfall model:
No feedback paths: In the classical waterfall model, the evolution of a software from one phase to the
next is analogous to a waterfall. Just as water in a waterfall after having flowed down cannot flow back, once a
phase is complete, the activities carried out in it and any artifacts produced in this phase are considered to be
final and are closed for any rework. This requires that all activities during a phase are flawlessly carried out.
The classical waterfall model is idealistic in the sense that it assumes that no error is ever committed by
the developers during any of the life cycle phases, and therefore, incorporates no mechanism for error
correction.
Contrary to a fundamental assumption made by the classical waterfall model, in practical development
environments, the developers do commit many errors in almost every activity they carry out during various
phases of the life cycle. After all, programmers are humans and, as the old adage says, to err is human. The
cause for errors can be many—oversight, wrong interpretations, use of incorrect solution scheme,
communication gap, etc. These defects usually get detected much later in the life cycle. For example, a
design defect might go unnoticed till the coding or testing phase. Once a defect is detected in a later phase, the developers need to redo some of the work done during the phase in which the defect was introduced, as well as rework the later phases that are affected. Therefore, in any non-trivial software development project, it becomes nearly
impossible to strictly follow the classical waterfall model of software development.
Difficult to accommodate change requests: This model assumes that all customer requirements can
be completely and correctly defined at the beginning of the project. There is much emphasis on creating an
unambiguous and complete set of requirements. But it is hard to achieve this even in ideal project scenarios.
The customers’ requirements usually keep on changing with time. But, in this model it becomes difficult to
accommodate any requirement change requests made by the customer after the requirements specification
phase is complete, and this often becomes a source of customer discontent.
Inefficient error corrections: This model defers integration of code and testing tasks until it is very
late when the problems are harder to resolve.
No overlapping of phases: This model recommends that the phases be carried out strictly sequentially: a new phase can start only after the previous one completes. However, it is rarely possible to adhere to this recommendation, and strict adherence leads many team members to idle for extended periods. For example, for efficient
utilization of manpower, the testing team might need to design the system test cases immediately after
requirements specification is complete. (We shall discuss in Chapter 10 that the system test cases are designed
solely based on the SRS document). In this case, the activities of the design and testing phases overlap.
Consequently, it is safe to say that in a practical software development scenario, rather than having a precise
point in time at which a phase transition occurs, the different phases need to overlap for cost and efficiency
reasons.
We have already pointed out that it is hard to use the classical waterfall model in real projects. In any
practical development environment, as the software takes shape, several iterations through the different
waterfall stages become necessary for correction of errors committed during various phases. Therefore, the
classical waterfall model is hardly usable for software development. But, as suggested by Parnas [1972] the
final documents for the product should be written as if the product was developed using a pure classical
waterfall.
Irrespective of the life cycle model that is followed for a product development, the final documents are
always written to reflect a classical waterfall model of development, so that comprehension of the
documents becomes easier for anyone reading the document.
The rationale behind preparation of documents based on the classical waterfall model can be explained
using Hoare’s [1994] metaphor of mathematical theorem proving: a mathematician presents a proof as a
single chain of deductions, even though the proof might have come from a convoluted set of partial attempts,
blind alleys and backtracks. Imagine how difficult it would be to understand, if a mathematician presents a
proof by retaining all the backtracking, mistake corrections, and solution refinements he made while working
out the proof.
The main change brought about by the iterative waterfall model to the classical waterfall model is in
the form of providing feedback paths from every phase to its preceding phases.
The feedback paths introduced by the iterative waterfall model are shown in Figure 2.3. The feedback
paths allow for correcting errors committed by a programmer during some phase, as and when these are
detected in a later phase. For example, if during the testing phase a design error is identified, then the
feedback path allows the design to be reworked and the changes to be reflected in the design documents and
all other subsequent documents. Please notice that in Figure 2.3 there is no feedback path to the feasibility
stage. This is because once a team has accepted a project, it does not give up the project easily, for legal and moral reasons.
Figure 2.3: Iterative waterfall model.
Almost every life cycle model that we discuss is iterative in nature, except the classical waterfall model and the V-model, which are sequential in nature. In a sequential model, once a phase is complete, no
work product of that phase is changed later.
No matter how careful a programmer may be, he might end up committing some mistake or other
while carrying out a life cycle activity. These mistakes result in errors (also called faults or bugs) in the
work product. It is advantageous to detect these errors in the same phase in which they take place, since early
detection of bugs reduces the effort and time required for correcting those. For example, if a design problem is detected in the design phase itself, then the problem can be taken care of much more easily than if the error is identified, say, at the end of the testing phase. In the latter case, it would be necessary not only to rework the
design, but also to appropriately redo the relevant coding as well as the testing activities, thereby incurring
higher cost. It may not always be possible to detect all the errors in the same phase in which they are made.
Nevertheless, the errors should be detected as early as possible.
The principle of detecting errors as close to their points of commitment as possible is known as
phase containment of errors.
For achieving phase containment of errors, how can the developers detect almost all errors that they commit in the same phase? After all, the end products of many phases are text or graphical documents, e.g.
SRS document, design document, test plan document, etc. A popular technique is to rigorously review the
documents produced at the end of a phase.
Phase overlap
Even though the strict waterfall model envisages sharp transitions to occur from one phase to the next
(see Figure 2.3), in practice the activities of different phases overlap (as shown in Figure 2.4) due to two main
reasons:
In spite of the best effort to detect errors in the same phase in which they are committed, some
errors escape detection and are detected in a later phase. These subsequently detected errors cause
the activities of some already completed phases to be reworked. If we consider such rework after a
phase is complete, we can say that the activities pertaining to a phase do not end at the completion
of the phase, but overlap with other phases as shown in Figure 2.4.
An important reason for phase overlap is that usually the work required to be carried out in a phase
is divided among the team members. Some members may complete their part of the work earlier
than other members. If strict phase transitions are maintained, then the team members who
complete their work early would idle waiting for the phase to be complete, and are said to be in a
blocking state. Thus the developers who complete early would idle while waiting for their team
mates to complete their assigned work. Clearly this is a cause for wastage of resources and a
source of cost escalation and inefficiency. As a result, in real projects, the phases are allowed to
overlap. That is, once a developer completes his work assignment for a phase, he proceeds to start the work for the next phase without waiting for all his team members to complete their respective work allocations.
Considering these situations, the effort distribution for different phases with time would be as shown
in Figure 2.4.
Figure 2.4: Distribution of effort for various phases in the iterative waterfall model.
The iterative waterfall model is a simple and intuitive software development model. It was used satisfactorily during the 1970s and 1980s. However, the characteristics of software development projects have changed drastically over the years. In the 1960s and 1970s, software development projects spanned several years and mostly involved generic software product development. Projects are now shorter and mostly involve customised software development. Further, whereas software was earlier developed from scratch, the emphasis now is on as much reuse of code and other project artifacts as possible. As pointed out in the first chapter, several decades back every software was developed from scratch; now, not only has software become very large and complex, but very few (if any) software projects are developed from scratch. Software services (customised software) projects are poised to become the dominant type of project. In present-day software development projects, use of the waterfall model causes several problems. In this context, the agile models were proposed about a decade back; these attempt to overcome the important shortcomings of the waterfall model by suggesting certain radical modifications to the waterfall style of software development. We discuss the agile model in Section 2.4. Some of the glaring shortcomings of the waterfall model when used in present-day software development projects are as follows:
Difficult to accommodate change requests:
A major problem with the waterfall model is that the requirements need to be frozen before the
development starts. Based on the frozen requirements, detailed plans are made for the activities to be carried
out during the design, coding, and testing phases. Since activities are planned for the entire duration,
substantial effort and resources are invested in activities such as developing the complete requirements specification, designing for the complete functionality, and so on. Therefore, accommodating even small change requests after the development activities are underway requires overhauling not only the plan but also the artifacts that have already been developed.
Once requirements have been frozen, the waterfall model provides no scope for any modifications to
the requirements.
While the waterfall model is inflexible to later changes to the requirements, evidence gathered from several projects points to the fact that later changes to requirements are almost inevitable. Even for projects with highly experienced professionals at all levels, as well as computer-savvy customers, requirements are
often missed as well as misinterpreted. Unless change requests are encouraged, the developed functionalities
would be misfit to the true customer requirements. Requirement changes can arise due to a variety of reasons
including the following—requirements were not clear to the customer, requirements were misunderstood,
business process of the customer may have changed after the SRS document was signed off, etc. In fact,
customers get clearer understanding of their requirements only after working on a fully developed and
installed system.
The basic assumption made in the iterative waterfall model that methodical requirements gathering,
and analysis alone would comprehensively and correctly identify all the requirements by the end of the
requirements phase is flawed.
Incremental delivery not supported: In the iterative waterfall model, the full software is completely
developed and tested before it is delivered to the customer. There is no provision for any intermediate
deliveries to occur. This is problematic because the complete application may take several months or years
to be completed and delivered to the customer. By the time the software is delivered, installed, and
becomes ready for use, the customer’s business process might have changed substantially. This makes the
developed application a poor fit to the customer’s requirements.
Phases overlap not supported: For most real-life projects, it becomes difficult to follow the rigid phase
sequence prescribed by the waterfall model. By a rigid phase sequence, we mean that a phase can
start only after the previous phase is complete in all respects. As already discussed, strict adherence to the
waterfall model creates blocking states. The waterfall model is usually adapted for use in real-life projects
by allowing overlapping of various phases as shown in Figure 2.4.
Error correction unduly expensive: In the waterfall model, validation is delayed till the complete
development of the software. As a result, the defects that are noticed at the time of validation incur
expensive rework and result in cost escalation and delayed delivery.
Limited customer interactions: This model supports very limited customer interactions. It is generally
accepted that software developed in isolation from the customer is the cause of many problems. In fact,
interactions occur only at the start of the project and at project completion. As a result, the developed
software usually turns out to be a misfit to the customer’s actual requirements.
Heavy weight: The waterfall model overemphasises documentation. A significant portion of the developers' time is spent in preparing documents and revising them as changes occur over the life cycle. Heavy documentation, though useful during maintenance and for carrying out reviews, is a source of team inefficiency.
No support for risk handling and code reuse: It becomes difficult to use the waterfall model in projects
that are susceptible to various types of risks, or those involving significant reuse of existing development
artifacts. Please recollect that software services types of projects usually involve significant reuse.
2.1.2 V-Model
A popular development process model, the V-model is a variant of the waterfall model. As is the case with the waterfall model, this model gets its name from its visual appearance (see Figure 2.5). In this model, verification and validation activities are carried out throughout the development life cycle, and therefore the chances of bugs remaining in the work products are considerably reduced. This model is therefore generally considered suitable for use in projects concerned with the development of safety-critical software that is required to have high reliability.
As shown in Figure 2.5, there are two main phases—development and validation phases. The left half
of the model comprises the development phases and the right half comprises the validation phases.
In each development phase, along with the development of a work product, test case design and the plan for testing the work product are carried out, whereas the actual testing is carried out in the validation phase. The validation plan created during a development phase is carried out in the corresponding validation phase, as shown by the dotted arcs in Figure 2.5.
In the validation phase, testing is carried out in three steps—unit, integration, and system testing.
The purpose of these three different steps of testing during the validation phase is to detect defects
that arise in the corresponding phases of software development—requirements analysis and
specification, design, and coding respectively.
We have already pointed out that the V-model can be considered to be an extension of the waterfall
model. However, there are major differences between the two. As already mentioned, in contrast to the
iterative waterfall model where testing activities are confined to the testing phase only, in the V-model testing
activities are spread over the entire life cycle. As shown in Figure 2.5, during the requirements specification
phase, the system test suite design activity takes place. During the design phase, the integration test cases are
designed. During coding, the unit test cases are designed. Thus, we can say that in this model, development
and validation activities proceed hand in hand.
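Because system test cases are designed solely from the SRS document, they can be recorded during the requirements specification phase, long before any code exists. The requirement ID, requirement text, and the trivial stand-in system below are all invented for this sketch.

```python
# System test cases designed during the requirements phase, keyed to
# (hypothetical) SRS requirement IDs. No implementation exists yet.
system_tests = [
    {
        "req_id": "SRS-R4.2",
        "description": "Withdrawal above the balance is rejected",
        "input": {"balance": 100.0, "withdraw": 150.0},
        "expected": "INSUFFICIENT_FUNDS",
    },
]

def run_system_test(test, system_under_test):
    """Execute one black-box test against the fully integrated system."""
    actual = system_under_test(**test["input"])
    return actual == test["expected"]

# A trivial stand-in for the integrated system, for demonstration only.
def atm_withdraw(balance, withdraw):
    return "OK" if withdraw <= balance else "INSUFFICIENT_FUNDS"

assert all(run_system_test(t, atm_withdraw) for t in system_tests)
```

In the V-model, such a suite would be designed during the requirements specification phase and executed later in the corresponding validation phase.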
Advantages of V-model
The important advantages of the V-model over the iterative waterfall model are as follows:
In the V-model, much of the testing activity (test case design, test planning, etc.) is carried out in parallel with the development activities. Therefore, before the testing phase starts, a significant part of the testing activities, including test case design and test planning, is already complete. Consequently, this model usually leads to a shorter testing phase and an overall faster product development as compared to the iterative model.
Since test cases are designed when the schedule pressure has not built up, the quality of the test
cases is usually better.
The test team is kept reasonably occupied throughout the development cycle in contrast to the
waterfall model where the testers are active only during the testing phase. This leads to more
efficient manpower utilisation.
In the V-model, the test team is associated with the project from the beginning. Therefore, they
build up a good understanding of the development artifacts, and this in turn, helps them to carry
out effective testing of the software. In contrast, in the waterfall model often the test team comes
on board late in the development cycle since no testing activities are carried out before the start of
the implementation and testing phase.
Disadvantages of V-model
Being a derivative of the classical waterfall model, this model inherits most of the weaknesses of the
waterfall model.
The prototyping model is advantageous to use for specific types of projects. In the following, we
identify three types of projects for which the prototyping model can be followed to advantage:
It is advantageous to use the prototyping model for development of the graphical user interface
(GUI) part of an application. Using a prototype, it becomes easier to illustrate the input data
formats, messages, reports, and the interactive dialogs to the customer. This is a valuable
mechanism for gaining a better understanding of the customers’ needs. For the user, it becomes much easier to form an opinion regarding what would be more
suitable by experimenting with a working user interface, rather than trying to imagine the working
of a hypothetical user interface.
The GUI part of a software system is almost always developed using the prototyping model.
The prototyping model is especially useful when the exact technical solutions are unclear to the
development team. A prototype can help them to critically examine the technical issues associated
with product development. For example, consider a situation where the development team has to
write a command language interpreter as part of a graphical user interface development. Suppose
none of the team members has ever written a compiler before. Then, this lack of familiarity with a
required development technology is a technical risk. This risk can be resolved by developing a
prototype compiler for a very small language to understand the issues associated with writing a
compiler for a command language. Once they feel confident in writing a compiler for the small
language, they can use this knowledge to develop the compiler for the command language. Often,
major design decisions depend on issues such as the response time of a hardware controller, or the
efficiency of a sorting algorithm, etc. In such circumstances, a prototype is often the best way to
resolve the technical issues.
An important reason for developing a prototype is that it is impossible to “get it right” the first
time. As advocated by Brooks [1975], one must plan to throw away the software in order to
develop a good software later. Thus, the prototyping model can be deployed when development of
highly optimized and efficient software is required.
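The command-language example above can be illustrated with a throwaway prototype interpreter. The tiny SET/ADD/PRINT command set below is invented for this sketch; a real prototype would target a small subset of the actual command language whose implementation poses the technical risk.

```python
# A throwaway prototype interpreter for a tiny invented command language,
# built only to explore parsing and evaluation issues before committing
# to a full design. The code is meant to be discarded afterwards.
def interpret(program):
    env, output = {}, []
    for line in program.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "SET":        # SET x 5   -> x = 5
            env[tokens[1]] = int(tokens[2])
        elif tokens[0] == "ADD":      # ADD x y   -> x = x + y
            env[tokens[1]] += env[tokens[2]]
        elif tokens[0] == "PRINT":    # PRINT x   -> record value of x
            output.append(env[tokens[1]])
        else:
            raise SyntaxError(f"unknown command: {tokens[0]}")
    return output

assert interpret("SET x 5\nSET y 7\nADD x y\nPRINT x") == [12]
```

Working through even this small interpreter would surface the issues (tokenisation, error reporting, variable storage) the team must face when building the real command language processor.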
The prototyping model of software development is graphically shown in Figure 2.6. As shown in
Figure 2.6, software is developed through two major activities—prototype construction and iterative waterfall-
based software development.
Prototype development: Prototype development starts with an initial requirement gathering phase. A
quick design is carried out and a prototype is built. The developed prototype is submitted to the customer for
evaluation. Based on the customer feedback, the requirements are refined, and the prototype is suitably
modified. This cycle of obtaining customer feedback and modifying the prototype continues till the customer
approves the prototype.
Iterative development: Once the customer approves the prototype, the actual software is developed
using the iterative waterfall approach. In spite of the availability of a working prototype, the SRS document usually still needs to be developed, since it is invaluable for carrying out traceability analysis, verification, and test case design during later phases. However, for the GUI parts, the requirements analysis and specification phase becomes redundant, since the working prototype that has been approved by the customer serves as an animated requirements specification.
The code for the prototype is usually thrown away. However, the experience gathered from developing
the prototype helps a great deal in developing the actual system.
By constructing the prototype and submitting it for user evaluation, many customer requirements get
properly defined and technical issues get resolved by experimenting with the prototype. This minimises later
change requests from the customer and the associated redesign costs.
This model is the most appropriate for projects that suffer from technical and requirements risks. A
constructed prototype helps overcome these risks.
The prototype model can increase the cost of development for projects that are routine development
work and do not suffer from any significant risks. Even when a project is susceptible to risks, the prototyping
model is effective only for those projects for which the risks can be identified upfront before the development
starts. Since the prototype is constructed only at the start of the project, the prototyping model is ineffective for risks that can be identified only after the development is underway.
In the incremental life cycle model, the requirements of the software are first broken down into several modules or features that can be incrementally constructed and delivered. This has been pictorially depicted in Figure 2.7. At any time, a plan is made only for the next increment and no long-term plans are made. Therefore, it becomes easier to accommodate change requests from the customers.
The development team first undertakes to develop the core features of the system. The core or basic
features are those that do not need to invoke any services from the other features. On the other hand, non-core
features need services from the core features. Once the initial core features are developed, these are refined
into increasing levels of capability by adding new functionalities in successive versions. Each incremental
version is usually developed using an iterative waterfall model of development. As each successive version of the software is constructed and delivered to the customer, feedback is obtained on the delivered version and incorporated in the next version. Each delivered version of the software incorporates additional features over the previous version and refines the features that were already delivered to the customer.
The incremental model has schematically been shown in Figure 2.8. After the requirements gathering
and specification, the requirements are split into several versions. Starting with the core (version 1), in each
successive increment, the next version is constructed using an iterative waterfall model of development and
deployed at the customer site. After the last version (shown as version n) has been developed and deployed, the full software is available at the client site.
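The splitting of requirements into a core increment and dependent increments can be sketched as a simple delivery plan. The feature names and their grouping below are invented for illustration; in a real project the core features are identified by analysing which features need no services from the others.

```python
# Hypothetical incremental delivery plan: version 1 holds the core
# features (those needing no services from other features); each later
# increment adds features that build on what is already delivered.
increments = [
    {"version": 1, "features": ["create account", "deposit"]},      # core
    {"version": 2, "features": ["withdraw", "balance enquiry"]},
    {"version": 3, "features": ["monthly statement"]},
]

delivered = []
for inc in increments:
    # Each increment is developed (e.g. via an iterative waterfall)
    # and deployed; the deployed system accumulates all features so far.
    delivered.extend(inc["features"])
    print(f"version {inc['version']}: {len(delivered)} features deployed")

assert len(delivered) == 5  # full software after the last increment
```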
Advantages
The incremental development model offers several advantages. Two important ones are the
following:
Error reduction: The core modules are used by the customer from the beginning
and therefore these get tested thoroughly. This reduces chances of errors in the core
modules of the final product, leading to greater reliability of the software.
Incremental resource deployment: This model obviates the need for the customer
to commit large resources at one go for development of the system. It also saves the
developing organisation from deploying large resources and manpower for a project
in one go.
Though the evolutionary model can also be viewed as an extension of the waterfall model, it incorporates a major paradigm shift that has been widely adopted in many recent
life cycle models. Due to obvious reasons, the evolutionary software development process is
sometimes referred to as design a little, build a little, test a little, deploy a little model. This
means that after the requirements have been specified, the design, build, test, and deployment
activities are iterated. A schematic representation of the evolutionary model of development
has been shown in Figure 2.9.
Advantages
The evolutionary model of development has several advantages. Two important
advantages of using this model are the following:
Effective elicitation of actual customer requirements: In this model, the user gets
a chance to experiment with a partially developed software much before the
complete requirements are developed. Therefore, the evolutionary model helps to
accurately elicit user requirements with the help of feedback obtained on the
delivery of different versions of the software. As a result, the change requests after delivery of the complete software get substantially reduced.
Easy handling of change requests: In this model, handling change requests is easier as no long-term plans are made. Consequently, the rework required due to change requests is normally much smaller compared to the sequential models.
Disadvantages
The main disadvantages of the successive versions model are as follows:
Feature division into incremental parts can be non-trivial: For many development
projects, especially for small-sized projects, it is difficult to divide the required
features into several parts that can be incrementally implemented and delivered.
Further, even for larger problems, often the features are so intertwined and dependent
on each other that even an expert would need considerable effort to plan the
incremental deliveries.
Ad hoc design: Since at a time design for only the current increment is done, the
design can become ad hoc without specific attention being paid to maintainability and
optimality. Obviously, for moderate sized problems and for those for which the
customer requirements are clear, the iterative waterfall model can yield a better
solution.
Applicability of the evolutionary model
The evolutionary model is normally useful for very large products, where it is easier to find modules for incremental implementation. Often the evolutionary model is used when the customer prefers to receive the product in increments, so that he can start using the different features as and when they are delivered rather than waiting for the full product to be developed and delivered. Another important category of projects for which the evolutionary
model is suitable, is projects using object-oriented development.
In this model prototypes are constructed, and incrementally the features are developed
and delivered to the customer. But unlike the prototyping model, the prototypes are not thrown
away but are enhanced and used in the software construction.
Main motivation
The main motivations behind the RAD model are the following:
To decrease the time taken and the cost incurred to develop software systems.
To limit the costs of accommodating change requests.
To reduce the communication gap between the customer and the developers.
In the RAD model, development takes place in a series of short cycles or iterations. At any
time, the development team focuses on the present iteration only, and therefore plans are made for
one increment at a time. The time planned for each iteration is called a time box. Each iteration is
planned to enhance the implemented functionality of the application by only a small amount. During
each time box, a quick-and-dirty prototype-style software for some functionality is developed. The
customer evaluates the prototype and gives feedback on the specific improvements that may be
necessary. The prototype is refined based on the customer feedback. Please note, though, that the prototype is not meant to be released to the customer for regular use.
The development team almost always includes a customer representative to clarify the
requirements. This is intended to make the system tuned to the exact customer requirements
and also to bridge the communication gap between the customer and the development team. The
development team usually consists of about five to six members, including a customer
representative.
The customers usually suggest changes to a specific feature only after they have used it.
Since the features are delivered in small increments, the customers are able to give their change
requests pertaining to a feature already delivered. Incorporation of such change requests just after
the delivery of an incremental feature saves cost as this is carried out before large investments
have been made in development and testing of a large number of features.
The decrease in development time and cost, and at the same time an increased flexibility
to incorporate changes are achieved in the RAD model in two main ways—minimal use of
planning and heavy reuse of any existing code through rapid prototyping. The lack of long-term
and detailed planning gives the flexibility to accommodate later requirements changes. Reuse of
existing code has been adopted as an important mechanism of reducing the development cost.
RAD model emphasises code reuse as an important means for completing a project
faster. In fact, the adopters of the RAD model were the earliest to embrace object-oriented
languages and practices. Further, RAD advocates use of specialised tools to facilitate fast
creation of working prototypes. These specialised tools usually support the following features:
In the prototyping model, the developed prototype is primarily used by the development
team to gain insights into the problem, choose between alternatives, and elicit customer
feedback. The code developed during prototype construction is usually thrown away. In contrast,
in RAD it is the developed prototype that evolves into the deliverable software.
Though RAD is expected to lead to faster software development than the traditional models (such
as the prototyping model), the quality and reliability of the resulting software are usually inferior.
In the iterative waterfall model, all the functionalities of a software are developed
together. On the other hand, in the RAD model the product functionalities are developed
incrementally through heavy code and design reuse. Further, in the RAD model customer
feedback is obtained on the developed prototype after each iteration and based on this the
prototype is refined. Thus, it becomes easy to accommodate requests for requirements
changes. The iterative waterfall model, however, provides no mechanism for accommodating
such change requests. The iterative waterfall model does have some
important advantages that include the following. Use of the iterative waterfall model leads to
production of good quality documentation which can help during software maintenance. Also,
the developed software usually has better quality and reliability than that developed using
RAD.
Over the last two decades or so, projects using iterative waterfall-based life cycle models have become rare
due to the rapid shift in the characteristics of software development projects over time. Two noticeable
changes are a rapid shift from the development of software products to the development of
customised software, and an increased emphasis on and scope for reuse.
The following are a few reasons why waterfall-based development has become
difficult to use in projects in recent times:
The agile software development model was proposed in the mid-1990s to overcome the
serious shortcomings of the waterfall model of development identified above. The agile model
was primarily designed to help a project to adapt to change requests quickly. Thus, a major aim
of the agile models is to facilitate quick project completion. But how is agility achieved in these
models? Agility is achieved by fitting the process to the project, i.e., removing activities that
may not be necessary for a specific project. Also, anything that wastes time and effort is
avoided.
Please note that the agile model is used as an umbrella term to refer to a group of
development processes. These processes share certain common characteristics but do have
certain subtle differences among themselves. A few popular agile SDLC models are the
following:
Crystal
Atern (formerly DSDM)
Feature-driven development
Scrum
Extreme programming (XP)
Lean development
Unified process
In the agile model, the requirements are decomposed into many small parts that can be
incrementally developed. The agile model adopts an iterative approach. Each incremental part
is developed over an iteration. Each iteration is intended to be small and easily manageable,
lasting for only a couple of weeks. At a time, only one increment is planned, developed,
and then deployed at the customer site. No long-term plans are made. The time to complete an
iteration is called a time box. The implication of the term time box is that the end date for an
iteration does not change. That is, the delivery date is considered sacrosanct. The development
team can, however, decide to reduce the delivered functionality during a time box if necessary.
A central principle of the agile model is the delivery of an increment to the customer after
each time box. A few other principles that are central to the agile model are discussed below.
For establishing close contact with the customer during development and to gain a clear
understanding of the domain-specific issues, each agile project usually includes a customer
representative in the team. At the end of each iteration, stakeholders and the customer
representative review the progress made and re-evaluate the requirements. A distinguishing
characteristic of the agile models is frequent delivery of software increments to the customer.
The agile models emphasise face-to-face communication over written documents. It is
recommended that the development team size be deliberately kept small (5–9 people) to help
the team members meaningfully engage in face-to-face communication and have a collaborative
work environment. It is implicit, then, that the agile model is suited to the development of small
projects. However, if a large project is required to be developed using the agile model, it is
likely that the collaborating teams might work at different locations. In this case, the different
teams need to maintain as much daily contact as possible through video conferencing,
telephone, e-mail, etc.
The agile model emphasises incremental release of working software as the primary measure of progress.
The following important principles behind the agile model were publicized in the agile
manifesto in 2001:
In pair programming, two programmers work together at one workstation. One types in code while the
other reviews the code as it is typed in. The two programmers switch their roles every hour or so.
Several studies indicate that programmers working in pairs produce compact, well-written
programs and commit fewer errors as compared to programmers working alone.
The agile methods derive much of their agility by relying on the tacit knowledge of the
team members about the development project and informal communications to clarify issues,
rather than spending significant amounts of time in preparing formal documents and reviewing
them. Though this eliminates some overhead, the lack of adequate documentation may lead to
several types of problems, which are as follows:
Lack of formal documents leaves scope for confusion and important decisions taken
during different phases can be misinterpreted at later points of time by different team
members.
In the absence of any formal documents, it becomes difficult to get important project
decisions, such as design decisions, reviewed by external experts.
When the project completes and the developers disperse, maintenance can become a
problem.
In the following subsections, we compare the characteristics of the agile model with other
models of development.
The waterfall model is highly structured and systematically steps through requirements-
capture, analysis, specification, design, coding, and testing stages in a planned sequence.
Progress is generally measured in terms of the number of completed and reviewed artifacts,
such as requirement specifications, design documents, test plans, and code reviews. In
contrast, while using an agile model, progress is measured in terms of the developed and
delivered functionalities. In the agile model, working versions of the software are delivered in
several increments. However, as regards similarity, it can be said that
agile teams use the waterfall model on a small scale, repeating the entire waterfall cycle in
every iteration.
Though a few similarities do exist between the agile and exploratory programming styles,
there are vast differences between the two as well. Agile development model’s frequent re-
evaluation of plans, emphasis on face-to-face communication, and relatively sparse use of
documentation are like that of the exploratory style. Agile teams, however, do follow defined
and disciplined processes and carry out systematic requirements capture and rigorous design,
in contrast to the chaotic coding of exploratory programming.
The important differences between the agile and the RAD models are the following:
Agile model does not recommend developing prototypes, but emphasises systematic
development of each incremental feature. In contrast, the central theme of RAD is
based on designing quick-and-dirty prototypes, which are then refined into
production quality code.
Agile projects logically break down the solution into features that are incrementally
developed and delivered. The RAD approach does not recommend this. Instead,
developers using the RAD model focus on developing all the features of an
application by first doing it badly and then successively improving the code over
time.
Agile teams only demonstrate completed work to the customer. In contrast, RAD
teams demonstrate screen mockups and prototypes to customers, which may be based
on simplifications such as table look-ups rather than actual computations.
Extreme programming (XP) is an important process model under the agile umbrella and
was proposed by Kent Beck in 1999. The name of this model reflects the fact that it recommends
taking these best practices that have worked well in the past in program development projects
to extreme levels. This model is based on a rather simple philosophy: "If something is known to
be beneficial, why not put it to constant use?" Based on this principle, it puts forward several
key practices that need to be practiced to the extreme. Please note that most of the key practices
it emphasises had already been recognised as good practices for some time.
In the following subsections, we mention some of the good practices that have been
recognized in the extreme programming model and the suggested way to maximize their use:
Code review: Code review is good since it helps detect and correct problems most efficiently. XP
suggests pair programming as the way to achieve continuous review. In pair programming,
coding is carried out by pairs of programmers. The programmers take turns writing programs;
while one writes, the other reviews the code that is being written.
Testing: Testing code helps to remove bugs and improves its reliability. XP suggests
test-driven development (TDD) to continually write and execute test cases. In the TDD approach,
test cases are written even before any code is written.
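The test-first rhythm of TDD can be sketched as follows; the function and the requirement it checks are hypothetical, chosen only for illustration.

```python
# Step 1 (test first): the test is written before the function exists,
# so running it at this point would fail.
# Hypothetical requirement: price_with_tax() adds a given tax rate.
def test_price_with_tax():
    assert price_with_tax(200, 0.5) == 300.0

# Step 2: write the simplest code that makes the test pass.
def price_with_tax(price, rate):
    return price * (1 + rate)

# Step 3: execute the test; it now passes, and the code can later be
# refactored while the test keeps guarding the behaviour.
test_price_with_tax()
```

In a real XP project this cycle repeats for every small feature: a failing test, the minimal code to pass it, and then refactoring.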
Simplicity: Simplicity makes it easier to develop good quality code, as well as to test and
debug it. Therefore, one should try to create the simplest code that makes the basic functionality
being implemented work. For creating the simplest code, one can ignore aspects such as
efficiency, reliability, maintainability, etc. Once the simplest thing works, other aspects can be
introduced through refactoring.
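The simplest-thing-first idea can be illustrated with a hypothetical duplicate-removal function: a naive first version that just works, followed by a refactoring that keeps the observable behaviour identical while improving efficiency.

```python
# First pass: the simplest code that makes the functionality work,
# ignoring efficiency (the membership test rescans the result list).
def unique_simple(items):
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

# Later refactoring: same observable behaviour, but membership is
# now checked against a set, avoiding the repeated list scans.
def unique_refactored(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Because both versions return the same output for the same input, existing tests continue to pass after the refactoring.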
Design: Since having a good quality design is important to develop a good quality
solution, everybody should design daily. This can be achieved through refactoring, whereby a
working code is improved for efficiency and maintainability.
Integration testing: It is important since it helps identify the bugs at the interfaces of
different functionalities. To this end, extreme programming suggests that the developers should
achieve continuous integration, by building and performing integration testing several times a
day.
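A continuous-integration run is essentially a fixed sequence of build and test steps, executed several times a day and aborted at the first failure. The sketch below assumes no particular build tool; the echo commands are placeholders standing in for real build and test commands.

```python
import subprocess

# Hypothetical CI driver: each step is a shell command; the build
# "breaks" at the first step that exits with a non-zero status.
STEPS = [
    ["echo", "compiling the latest baseline..."],   # stand-in for: make build
    ["echo", "running integration tests..."],       # stand-in for: make test
]

def run_ci(steps):
    for cmd in steps:
        if subprocess.run(cmd).returncode != 0:  # a failing step breaks the build
            return False
    return True

if __name__ == "__main__":
    print("build OK" if run_ci(STEPS) else "build BROKEN")
```

Running such a script on every commit, rather than once per release, is what turns integration testing into continuous integration.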
A user story is a simple statement by a customer about a functionality he needs; it does not mention
finer details such as the different scenarios that can occur, the precondition (the state in which the system
must be) to be satisfied before the feature can be invoked, etc.
Based on user stories, the project team proposes “metaphors”—a common vision of how
the system would work. The development team may decide to construct a spike for some feature.
A spike is a very simple program that is constructed to explore the suitability of a solution being
proposed. A spike can be like a prototype.
Coding: XP argues that code is the crucial part of any system development process, since
without code it is not possible to have a working system. Therefore, utmost care and attention
need to be placed on coding activity. However, the concept of code as used in XP has a slightly
different meaning from what is traditionally understood. For example, coding activity includes
drawing diagrams (modelling) that will be transformed to code, scripting a web-based system,
and choosing among several alternative solutions.
Testing: XP places high importance on testing and considers it to be the primary means for
developing fault-free software.
Listening: The developers need to carefully listen to the customers if they have to
develop good quality software. Programmers may not necessarily have in-depth
knowledge of the specific domain of the system under development. On the other hand,
customers usually have this domain knowledge. Therefore, for the programmers to properly
understand what the functionality of the system should be, they have to listen to the customer.
Designing: Without proper design, a system implementation becomes too complex, the
dependencies within the system become too numerous, and the solution becomes very difficult
to comprehend, thereby making maintenance prohibitively expensive. A good
design should result in elimination of complex dependencies within a system. Thus, effective
use of a suitable design technique is emphasized.
Feedback: It espouses the wisdom: "A system staying out of users' hands is trouble waiting to
happen." It recognizes the importance of user feedback in understanding the exact customer
requirements. The time that elapses between the development of a version and collection of
feedback on it is critical to learning and making changes. It argues that frequent contact with the
customer makes the development effective.
XP is in favour of making the solution to a problem as simple as possible. In contrast, the traditional
system development methods recommend planning for reusability and future extensibility of code and
design at the expense of higher code and design complexity.
The following are some of the project characteristics that indicate the suitability of a
project for development using extreme programming model:
Projects involving new technology or research projects: In this case, the requirements
change rapidly, and unforeseen technical problems need to be resolved.
Small projects: Extreme programming was proposed in the context of small teams, as
face-to-face meetings are easier to achieve.
The following are some of the project characteristics that indicate unsuitability of agile
development model for use in a development project:
In the scrum model, a project is divided into small parts of work that can be
incrementally developed and delivered over time boxes that are called sprints. The software
therefore gets developed over a series of manageable chunks. Each sprint typically takes only a
couple of weeks to complete. At the end of each sprint, stakeholders and team members meet to
assess the progress made and the stakeholders suggest to the development team any changes
needed to features that have already been developed and any overall improvements that they
might feel necessary.
In the scrum model, the team members assume three fundamental roles: product
owner, scrum master, and team member. The product owner is responsible for communicating
the customer's vision of the software to the development team. The scrum master acts as a liaison
between the product owner and the team, thereby facilitating the development work.
While the prototyping model does provide explicit support for risk handling, the risks are
assumed to have been identified completely before the project starts. This is required since the
prototype is constructed only at the start of the project. In contrast, in the spiral model prototypes
are built at the start of every phase. Each phase of the model is represented as a loop in its
diagrammatic representation. Over each loop, one or more features of the product are elaborated
and analysed and the risks at that point of time are identified and are resolved through
prototyping. Based on this, the identified features are implemented.
Figure 2.10: Spiral model of software development.
A risk is essentially any adverse circumstance that might hamper the successful completion of
a software project. As an example, consider a project for which a risk can be that data access
from a remote database might be too slow to be acceptable by the customer. This risk can be
resolved by building a prototype of the data access subsystem and experimenting with the exact
access rate. If the data access rate is too slow, possibly a caching scheme can be implemented,
or a faster communication scheme can be deployed to overcome the slow data access rate. Such
risk resolutions are more easily done by using a prototype, as the pros and cons of an alternative
solution scheme can be evaluated faster and more cheaply than by experimenting with the actual
software application being developed. The spiral model supports coping with risks by
providing the scope to build a prototype at every phase of software development.
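The data-access risk described above could be explored with a throwaway prototype along these lines; the latency figure and the data are invented purely to make the sketch runnable.

```python
import time

REMOTE_LATENCY = 0.05   # assumed cost (seconds) of one remote database fetch

def fetch_remote(key):
    time.sleep(REMOTE_LATENCY)      # stand-in for the slow remote access
    return key.upper()              # dummy "record" for the sketch

cache = {}

def fetch_cached(key):
    if key not in cache:            # only the first access pays the remote cost
        cache[key] = fetch_remote(key)
    return cache[key]

# Measure: ten lookups of the same record cost roughly one remote fetch,
# which is the kind of evidence a risk-resolution prototype provides.
start = time.perf_counter()
for _ in range(10):
    fetch_cached("customer-42")
elapsed = time.perf_counter() - start
print(f"10 cached lookups took {elapsed:.3f}s")
```

If the measured rate were still too slow, the prototype could be extended to try a faster communication scheme, all before committing to the real implementation.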
Each phase in this model is split into four sectors (or quadrants) as shown in Figure 2.10.
In the first quadrant, a few features of the software are identified to be taken up for immediate
development based on how crucial they are to the overall software. With each iteration
around the spiral (beginning at the center and moving outwards), progressively more complete
versions of the software get built. In other words, implementation of the identified features forms
a phase.
Quadrant 1: The objectives are investigated, elaborated, and analysed. Based on this, the
risks involved in meeting the phase objectives are identified. In this quadrant, alternative
solutions possible for the phase under consideration are proposed.
Quadrant 2: During the second quadrant, the alternative solutions are evaluated to select
the best possible solution. To be able to do this, the solutions are evaluated by developing an
appropriate prototype.
Quadrant 3: Activities during the third quadrant consist of developing and verifying the
next level of the software. At the end of the third quadrant, the identified features have been
implemented and the next version of the software is available.
Quadrant 4: Activities during the fourth quadrant concern reviewing the results of the
stages traversed so far (i.e. the developed version of the software) with the customer and
planning the next iteration of the spiral.
The radius of the spiral at any point represents the cost incurred in the project so far, and
the angular dimension represents the progress made so far in the current phase.
In the spiral model of development, the project manager dynamically determines the
number of phases as the project progresses. Therefore, in this model, the project manager plays
the crucial role of tuning the model to specific projects.
To make the model more efficient, the different features of the software that can be
developed simultaneously through parallel cycles are identified. To keep our discussion simple,
we shall not delve into parallel cycles in the spiral model.
There are a few disadvantages of the spiral model that restrict its use to only a few
types of projects. To the developers of a project, the spiral model usually appears as a complex
model to follow, since it is risk-driven and has a more complicated phase structure than the other
models we discussed. It would therefore be counterproductive to use it unless there are
knowledgeable and experienced staff in the project. Also, it is not very suitable for use in the
development of outsourced projects since the software risks need to be continually assessed as it
is developed.
Despite the disadvantages of the spiral model that we pointed out, for certain categories
of projects, the advantages of the spiral model can outweigh its disadvantages.
For projects having many unknown risks that might show up as the development proceeds, the spiral
model would be the most appropriate development model to follow.
In this regard, it is much more powerful than the prototyping model. The prototyping model
can meaningfully be used when all the risks associated with a project are known beforehand. All
these risks are resolved by building a prototype before the actual software development starts.
Spiral model as a meta model
As compared to the previously discussed models, the spiral model can be viewed as a
meta model, since it subsumes all the discussed models. For example, a single loop spiral
represents the waterfall model. The spiral model uses the approach of the prototyping model by
first building a prototype in each phase before the actual development starts. This prototype is
used as a risk reduction mechanism. The spiral model incorporates the systematic step-wise
approach of the waterfall model. Also, the spiral model can be considered as supporting the
evolutionary model—the iterations along the spiral can be considered as evolutionary levels
through which the complete system is built. This enables the developer to understand and
resolve the risks at each evolutionary level (i.e. iteration along the spiral).
The iterative waterfall model is probably the most widely used software development
model so far. This model is simple to understand and use. However, this model is suitable only
for well-understood problems, and is not suitable for the development of very large projects and
projects that suffer from a large number of risks.
The prototyping model is suitable for projects for which either the user requirements or
the underlying technical aspects are not well understood, provided that all the risks can be
identified before the project starts. This model is especially popular for development of the user interface
part of projects.
The evolutionary approach is suitable for large problems which can be decomposed into a
set of modules for incremental development and delivery. This model is also used widely for
object-oriented development projects. Of course, this model can only be used if incremental
delivery of the system is acceptable to the customer.
The spiral model is considered a meta model and encompasses all other life cycle models.
Flexibility and risk handling are inherently built into this model. The spiral model is suitable for
development of technically challenging and large software that are prone to several kinds of risks
that are difficult to anticipate at the start of the project. However, this model is much more
complex than the other models—this is probably a factor deterring its use in ordinary projects.
Let us now compare the prototyping model with the spiral model. The prototyping model
can be used if the risks are few and can be determined at the start of the project. The spiral
model, on the other hand, is useful when the risks are difficult to anticipate at the beginning of
the project but are likely to crop up as the development proceeds.
Let us compare the different life cycle models from the viewpoint of the customer.
Initially, customer confidence is usually high on the development team irrespective of the
development model followed. During the lengthy development process, customer confidence
normally drops off, as no working software is yet visible. Developers answer customer queries
using technical slang, and delays are announced. This gives rise to customer resentment. On the
other hand, an evolutionary approach lets the customer experiment with a working software
much earlier than the monolithic approaches. Another important advantage of the incremental
model is that it reduces the customer’s trauma of getting used to an entirely new system. The
gradual introduction of the software via incremental phases provides time to the customer to
adjust to the new software. Also, from the customer’s financial viewpoint, incremental
development does not require a large upfront capital outlay. The customer can order the
incremental versions as and when he can afford them.
We have discussed the advantages and disadvantages of the various life cycle models.
However, how does one select a suitable life cycle model for a specific project? The answer to this
question would depend on several factors. A suitable life cycle model can possibly be selected
based on an analysis of issues such as the following:
Characteristics of the software to be developed: The choice of the life cycle model to
a large extent depends on the nature of the software that is being developed. For small services
projects, the agile model is favored. On the other hand, for product and embedded software
development, the iterative waterfall model can be preferred. An evolutionary model is a
suitable model for object-oriented development projects.
Characteristics of the customer: If the customer is not quite familiar with computers,
then the requirements are likely to change frequently as it would be difficult to form complete,
consistent, and unambiguous requirements. Thus, a prototyping model may be necessary to
reduce later change requests from the customers.