Software Engineering
3. Programs are developed by individuals, i.e. by a single programmer, whereas software products are developed by a large number of developers.
4. For a program, there is often no documentation, or the documentation is improper; for a software product, proper documentation, including a user manual, is prepared.
5. Development of a program is unplanned and unsystematic, whereas development of a software product follows a systematic, organised and planned approach.
6. Programs provide limited functionality and fewer features, whereas software products provide more functionality; as they are larger in size (lines of code), they have more options and features.
Classification of Software
The software is used extensively in several domains including hospitals, banks, schools, defence,
finance, stock markets and so on. It can be categorized into different types:
1. System Software –
System Software is necessary to manage the computer resources and support the execution
of application programs. Software like operating systems, compilers, editors and drivers etc.,
come under this category. A computer cannot function without the presence of these.
Operating systems are needed to link the machine dependent needs of a program with the
capabilities of the machine on which it runs. Compilers translate programs from high-level
language to machine language.
2. Networking and Web Applications Software –
Networking Software provides the required support necessary for computers to interact
with each other and with data storage facilities. The networking software is also used when
software is running on a network of computers (such as the World Wide Web). It includes all
network management software, server software, security and encryption software and
software to develop web-based applications like HTML, PHP, XML, etc.
3. Embedded Software –
This type of software is embedded into the hardware normally in the Read Only Memory
(ROM) as a part of a large system, and is used to support certain functionality under control conditions. Examples are software used in instrumentation and control applications, such as washing machines, satellites, and microwaves.
4. Reservation Software –
A Reservation system is primarily used to store and retrieve information and perform
transactions related to air travel, car rental, hotels, or other activities. They also provide
access to bus and railway reservations, although these are not always integrated with the
main system. These are also used to relay computerized information for users in the hotel
industry, making a reservation and ensuring that the hotel is not overbooked.
5. Business Software –
This category of software is used to support the business applications and is the most widely
used category of software. Examples are software for inventory management, accounts,
banking, hospitals, schools, stock markets, etc.
6. Entertainment Software –
Education and entertainment software provides a powerful tool for educational agencies,
especially those that deal with educating young children. There is a wide range of
entertainment software such as computer games, educational games, translation software,
mapping software, etc.
7. Artificial Intelligence Software –
Software like expert systems, decision support systems, pattern recognition software,
artificial neural networks, etc. come under this category. They involve complex problems which are solved using non-numerical algorithms.
8. Scientific Software –
Scientific and engineering software satisfies the needs of a scientific or engineering user to
perform enterprise specific tasks. Such software is written for specific applications using
principles, techniques and formulae specific to that field. Examples are software like
MATLAB, AUTOCAD, PSPICE, ORCAD, etc.
9. Utilities Software –
The programs coming under this category perform specific tasks and are different from
other software in terms of size, cost and complexity. Examples are anti-virus software, voice
recognition software, compression programs, etc.
10. Document Management Software –
Document management software is used to track, manage and store documents in order to reduce paperwork. Such systems are capable of keeping a record of the various versions
created and modified by different users (history tracking). They commonly provide storage,
versioning, metadata, security, as well as indexing and retrieval capabilities.
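As a rough sketch of the versioning idea, the following minimal Python example (all names are hypothetical, not from any particular product) keeps every saved revision of a document together with simple metadata so that the history can be retrieved later:

    from datetime import datetime

    class DocumentStore:
        """Toy document store that keeps every version with metadata."""
        def __init__(self):
            self._versions = {}  # name -> list of (timestamp, author, text)

        def save(self, name, author, text):
            # Each save appends a new version instead of overwriting (versioning).
            self._versions.setdefault(name, []).append((datetime.now(), author, text))

        def latest(self, name):
            return self._versions[name][-1][2]

        def history(self, name):
            # History tracking: who changed the document, and when.
            return [(ts, author) for ts, author, _ in self._versions[name]]

    store = DocumentStore()
    store.save("spec.txt", "alice", "Draft 1")
    store.save("spec.txt", "bob", "Draft 2 with review comments")
    print(store.latest("spec.txt"))   # Draft 2 with review comments
    print(store.history("spec.txt"))  # two (timestamp, author) entries

A real document management system adds indexing, search and access control on top of this basic store.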
Software can also be classified on the basis of how it is licensed and distributed:
1. Commercial –
It represents the majority of software which we purchase from software companies, commercial computer stores, etc. In this case, when users buy the software, they acquire a license key to use it. Users are not allowed to make copies of the software. The copyright of the program is owned by the company.
2. Shareware –
Shareware software is also covered under copyright but the purchasers are allowed to make
and distribute copies with the condition that after testing the software, if the purchaser
adopts it for use, then they must pay for it.
In both of the above types of software, changes to software are not allowed.
3. Freeware –
In general, according to freeware software licenses, copies of the software can be made both
for archival and distribution purposes but here, distribution cannot be for making a profit.
Derivative works and modifications to the software are allowed and encouraged. Decompiling
of the program code is also allowed without the explicit permission of the copyright holder.
4. Public Domain –
In case of public domain software, the original copyright holder explicitly relinquishes all
rights to the software. Hence software copies can be made both for archival and distribution
purposes with no restrictions on distribution. Modifications to the software and reverse
engineering are also allowed.
Classical Waterfall Model
The classical waterfall model is the basic software development life cycle model. It is very simple but idealistic. Earlier this model was very popular, but nowadays it is not used. However, it is very important, because all the other software development life cycle models are based on the classical waterfall model.
The classical waterfall model divides the life cycle into a set of phases. This model considers that one phase can be started only after the completion of the previous phase; that is, the output of one phase will be the input to the next phase. Thus, the development process can be considered as a sequential flow, as in a waterfall. Here the phases do not overlap with each other. The different sequential phases of the classical waterfall model are shown in the below figure:
1. Feasibility Study: The main goal of this phase is to determine whether it would be financially
and technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the various possible strategies to solve it. These different identified solutions are analyzed based on their benefits and drawbacks. The best solution is chosen, and all the other phases are carried out as per this solution strategy.
2. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and document
them properly. This phase consists of two different activities.
Requirement gathering and analysis: Firstly all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an incomplete
requirement is one in which some parts of the actual requirements have been omitted)
and inconsistencies (inconsistent requirement is one in which some part of the
requirement contradicts with some other part).
Requirement specification: These analyzed requirements are documented in a software
requirement specification (SRS) document. The SRS document serves as a contract between the development team and the customer. Any future dispute between the customer and the developers can be settled by examining the SRS document.
3. Design: The aim of the design phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming language.
4. Coding and Unit testing: In the coding phase, the software design is translated into source code using a suitable programming language; thus each designed module is coded. The aim of the unit testing phase is to check whether each module is working properly or not.
5. Integration and System testing: Integration of the different modules is undertaken soon after they have been coded and unit tested. Integration of the various modules is carried out
incrementally over a number of steps. During each integration step, previously planned
modules are added to the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and tested, the full working
system is obtained and system testing is carried out on this.
System testing consists of three different kinds of testing activities, as described below:
Alpha testing: Alpha testing is the system testing performed by the development team.
Beta testing: Beta testing is the system testing performed by a friendly set of
customers.
Acceptance testing: After the software has been delivered, the customer performs acceptance testing to determine whether to accept the delivered software or to reject it.
6. Maintenance: Maintenance is the most important phase of the software life cycle. The effort spent on maintenance is around 60% of the total effort spent to develop the full software. There are basically three types of maintenance:
Corrective Maintenance: This type of maintenance is carried out to correct errors that
were not discovered during the product development phase.
Adaptive Maintenance: This type of maintenance is carried out to make the software work in a new environment, such as a new computer platform or operating system.
Perfective Maintenance: This type of maintenance is carried out to support new features requested by users or to enhance the performance of the software.

Drawbacks of Classical Waterfall Model
No feedback path: In the classical waterfall model, the evolution of software from one phase to another is like a waterfall. It assumes that no error is ever committed by the developers during any phase. Therefore, it does not incorporate any mechanism for error correction.
Difficult to accommodate change requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project, but
actually customers’ requirements keep on changing with time. It is difficult to accommodate
any change requests after the requirements specification phase is complete.
No overlapping of phases: This model recommends that a new phase can start only after the completion of the previous phase. But in real projects, this can't be maintained. To increase efficiency and reduce cost, phases may overlap.
Iterative Waterfall Model
In a practical software development project, the classical waterfall model is hard to use. The iterative waterfall model can therefore be thought of as incorporating the necessary changes into the classical waterfall model to make it usable in practical software development projects. It is almost the same as the classical waterfall model, except that some changes are made to increase the efficiency of software development.
The iterative waterfall model provides feedback paths from every phase to its preceding phases, which is the main difference from the classical waterfall model.
Feedback paths introduced by the iterative waterfall model are shown in the figure below. When errors are detected at some later phase, these feedback paths allow the phase in which the errors were committed to be reworked, and these changes are reflected in the later phases. But there is no feedback path to the feasibility study stage, because once a project has been taken up, it is not given up easily.
It is good to detect errors in the same phase in which they are committed. It reduces the effort
and time required to correct the errors.
Phase Containment of Errors: The principle of detecting errors as close to their points of
commitment as possible is known as Phase containment of errors.
Advantages of Iterative Waterfall Model
Feedback Path: In the classical waterfall model, there are no feedback paths, so there is no mechanism for error correction. But in the iterative waterfall model, the feedback path from each phase to its preceding phase allows correcting the errors that are committed, and these changes are reflected in the later phases.
Simple: Iterative waterfall model is very simple to understand and use. That’s why it is one of
the most widely used software development models.
Drawbacks of Iterative Waterfall Model
Difficult to incorporate change requests: The major drawback of the iterative waterfall model is that all the requirements must be clearly stated before the development phase starts. The customer may change the requirements after some time, but the iterative waterfall model does not leave any scope to incorporate change requests made after the development phase starts.
Incremental delivery not supported: In the iterative waterfall model, the full software is completely developed and tested before delivery to the customer. There is no scope for any intermediate delivery, so customers have to wait a long time to get the software.
Overlapping of phases not supported: The iterative waterfall model assumes that one phase can start only after completion of the previous phase. But in real projects, phases may overlap to reduce the effort and time needed to complete the project.
Risk handling not supported: Projects may suffer from various types of risks. But, Iterative
waterfall model has no mechanism for risk handling.
Limited customer interaction: Customer interaction occurs only at the start of the project, at the time of requirements gathering, and at project completion, at the time of software delivery. Such limited interaction with the customers may lead to many problems, as the finally developed software may differ from the customers' actual requirements.
Spiral Model
Spiral model is one of the most important Software Development Life Cycle models, which
provides support for Risk Handling. In its diagrammatic representation, it looks like a spiral with
many loops. The exact number of loops of the spiral is unknown and can vary from project to
project. Each loop of the spiral is called a phase of the software development process. The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks. As the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model.
The Radius of the spiral at any point represents the expenses(cost) of the project so far, and the
angular dimension represents the progress made so far in the current phase.
Each phase of the Spiral Model is divided into four quadrants, as shown in the above figure. The functions of these four quadrants are discussed below:
1. Objectives determination and identification of alternative solutions: Requirements are gathered from the customers, and the objectives are identified, elaborated and analyzed at the start of every phase. Then alternative solutions possible for the phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are evaluated to select the best one. The risks associated with that solution are then identified, and the risks are resolved using the best possible strategy. At the end of this quadrant, a prototype is built for the best possible solution.
3. Develop next version of the Product: During the third quadrant, the identified features are
developed and verified through testing. At the end of the third quadrant, the next version of
the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the customers evaluate the version of the software developed so far. In the end, planning for the next phase is started.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a software project. The most important feature of the spiral model is handling those unknown risks that arise after the project has started. Such risks are easier to resolve by developing a prototype. The spiral model supports coping with risks by providing the scope to build a prototype at every phase of the software development.
The Prototyping Model also supports risk handling, but the risks must be identified completely before the start of the development work of the project. In real-life projects, risks may occur after the development work starts; in that case, we cannot use the Prototyping Model. In each phase of the Spiral Model, the features of the product are elaborated and analyzed, and the risks at that point of time are identified and resolved through prototyping. Thus, this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model ?
The Spiral model is called a meta-model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model. The spiral model
incorporates the stepwise approach of the Classical Waterfall Model. The spiral model uses the
approach of Prototyping Model by building a prototype at the start of each phase as a risk
handling technique. Also, the spiral model can be considered as supporting the evolutionary model –
the iterations along the spiral can be considered as evolutionary levels through which the complete
system is built.
Advantages of Spiral Model: Below are some of the advantages of the Spiral Model.
Risk Handling: For projects with many unknown risks that surface as the development proceeds, the Spiral Model is the best development model to follow, due to the risk analysis and risk handling at every phase.
Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
Flexibility in Requirements: Change requests in the Requirements at later phase can be
incorporated accurately by using this model.
Customer Satisfaction: Customers can see the development of the product at an early phase of the software development, and thus they get habituated to the system by using it before completion of the total product.
Disadvantages of Spiral Model: Below are some of the main disadvantages of the spiral model.
Complex: The Spiral Model is much more complex than other SDLC models.
Expensive: Spiral Model is not suitable for small projects as it is expensive.
Too dependent on risk analysis: The successful completion of the project is very much dependent on risk analysis. Without highly experienced expertise, developing a project using this model is likely to fail.
Difficulty in time management: As the number of phases is unknown at the start of the project, time estimation is very difficult.
Incremental Model
First, a simple working system implementing only a few basic features is built and delivered to the customer. Thereafter, many successive iterations/versions are implemented and delivered to the customer until the desired system is released.
A, B, C are modules of the software product that are incrementally developed and delivered.
As each successive version of the software is constructed and delivered, the feedback of the customer is taken, and it is incorporated in the next version. Each version of the software has additional features over the previous ones.
1. Staged Delivery Model – After requirements gathering and specification, the requirements are split into several versions. Starting with version 1, in each successive increment the next version is constructed and deployed at the customer site. After the last version (version n), the complete system is deployed at the client site.
2. Parallel Development Model – Different subsystems are developed at the same time. It can
decrease the calendar time needed for the development, i.e. TTM (Time to Market), if enough
Resources are available.
Rapid Application Development (RAD) Model
The Rapid Application Development model was first proposed by IBM in the 1980s. The critical feature of this model is the use of powerful development tools and techniques.
A software project can be implemented using this model if the project can be broken down into
small modules wherein each module can be assigned independently to separate teams. These
modules can finally be combined to form the final product.
Development of each module involves the various basic steps as in waterfall model i.e analyzing,
designing, coding and then testing, etc. as shown in the figure.
Another striking feature of this model is its short time span, i.e. the time frame for delivery (time-box) is generally 60-90 days.
The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is also an integral
part of the projects.
1. Requirements Planning –
It involves the use of various techniques used in requirements elicitation like brainstorming,
task analysis, form analysis, user scenarios, FAST (Facilitated Application Development
Technique), etc. It also consists of the entire structured plan describing the critical data,
the methods to obtain it, and the processing needed to form the final refined model.
2. User Description –
This phase consists of taking user feedback and building the prototype using developer tools.
In other words, it includes re-examination and validation of the data collected in the first
phase. The dataset attributes are also identified and elucidated in this phase.
3. Construction –
In this phase, refinement of the prototype and delivery takes place. It includes the actual use of powerful automated tools to transform the process and data models into the final working product. All the required modifications and enhancements are also done in this phase.
4. Cutover –
All the interfaces between the independent modules developed by separate teams have to be tested properly. The use of powerful automated tools and reusable subparts makes the testing easier. This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and then taking feedback. After validation by the customer, the SRS document is developed and the design is finalised.
Advantages –
Use of reusable components helps to reduce the cycle time of the project.
Feedback from the customer is available at initial stages.
Reduced costs as fewer developers are required.
Use of powerful development tools results in better quality products in comparatively shorter
time spans.
The progress and development of the project can be measured through the various stages.
It is easier to accommodate changing requirements due to the short iteration time spans.
Disadvantages –
The use of powerful and efficient tools requires highly skilled professionals.
The absence of reusable components can lead to failure of the project.
The team leader must work closely with the developers and customers to close the project in
time.
The systems which cannot be modularized suitably cannot use this model.
Customer involvement is required throughout the life cycle.
It is not meant for small scale projects as for such cases, the cost of using automated tools
and techniques may exceed the entire budget of the project.
Applications –
1. This model should be used for a system with known requirements and requiring short
development time.
2. It is also suitable for projects where requirements can be modularized and reusable
components are also available for development.
3. The model can also be used when already existing system components can be used in
developing a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant
knowledge and ability to use powerful techniques is a necessity.
5. The model should be chosen when the budget permits the use of automated tools and
techniques required.
RAD Model vs Traditional SDLC:
1. RAD Model: The phases are iterative and can be reviewed and repeated as the development proceeds. Traditional SDLC: Follows a predictive, inflexible and rigid approach.
2. RAD Model: Requirements evolve through prototypes which are then used to develop the final product. Traditional SDLC: Prototyping is difficult and requires considerable extra effort.
3. RAD Model: It is not necessary to know all the requirements before starting the project, due to its flexible nature. Traditional SDLC: All the requirements must be known before starting the project, due to its rigid nature.
4. RAD Model: Separate small teams can be assigned to work on individual modules. Traditional SDLC: A single large team is required for the different stages of development.
5. RAD Model: Generally preferred for projects with shorter time durations and budgets large enough to afford the use of automated tools and techniques. Traditional SDLC: Used for projects with longer development schedules and where budgets do not allow the use of automated tools and techniques.
6. RAD Model: Use of reusable components helps to reduce the cycle time of the project. Traditional SDLC: The use of powerful and efficient tools requires highly skilled professionals.
Agile Model
In earlier days the Iterative Waterfall model was very popular for completing a project. But nowadays developers face various problems while using it to develop software. The main difficulties include handling change requests from customers during project development and the high cost and time required to incorporate these changes. To overcome these drawbacks of the Waterfall model, the Agile Software Development model was proposed in the mid-1990s.
The Agile model was primarily designed to help a project to adapt to change requests quickly. So,
the main aim of the Agile model is to facilitate quick project completion. To accomplish this task
agility is required. Agility is achieved by fitting the process to the project, removing activities that may not be essential for a specific project. Also, anything that wastes time and effort is avoided.
The Agile model actually refers to a group of development processes. These processes share some basic characteristics but do have certain subtle differences among themselves. A few Agile SDLC
models are given below:
Crystal
Atern
Feature-driven development
Scrum
Extreme programming (XP)
Lean development
Unified process
In the Agile model, the requirements are decomposed into many small parts that can be
incrementally developed. The Agile model adopts Iterative development. Each incremental part is
developed over an iteration. Each iteration is intended to be small and easily manageable, so that it can be completed within a couple of weeks. One iteration at a time is planned, developed and deployed to the customers. Long-term plans are not made.
The Agile model is a combination of iterative and incremental process models. The steps involved in Agile SDLC models are:
Requirement gathering
Requirement Analysis
Design
Coding
Unit testing
Acceptance testing
The time to complete an iteration is known as a Time Box. Time-box refers to the maximum amount of time needed to deliver an iteration to customers; the end date for an iteration does not change. However, the development team can decide to reduce the delivered functionality during a time-box if necessary, in order to deliver it on time. The central principle of the Agile model is the delivery of an increment to the customer after each time-box.
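To make the time-box idea concrete, here is a minimal Python sketch (the story names, priorities and estimates are invented for illustration): the end date is fixed, so when the planned work does not fit, the lowest-priority functionality is dropped rather than the deadline moved.

    def plan_time_box(stories, capacity_days):
        # stories: list of (name, priority, estimate_days); lower number = higher priority.
        # The iteration length never changes, so stories that do not fit are deferred.
        selected, used = [], 0
        for name, _, estimate in sorted(stories, key=lambda s: s[1]):
            if used + estimate <= capacity_days:
                selected.append(name)
                used += estimate
        return selected

    backlog = [("login page", 1, 4), ("search", 2, 5), ("reports", 3, 6)]
    print(plan_time_box(backlog, capacity_days=10))  # ['login page', 'search']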
It emphasizes having efficient team members, and enhancing communication among them is given more importance. It is realized that enhanced communication among the development
team members can be achieved through face-to-face communication rather than through the
exchange of formal documents.
It is recommended that the development team size be kept small (5 to 9 people) to help the team members meaningfully engage in face-to-face communication and have a collaborative work environment.
Agile development processes usually deploy pair programming. In pair programming, two programmers work together at one workstation. One does the coding while the other reviews the code as it is typed in. The two programmers switch their roles every hour or so.
Advantages:
Working through pair programming produces well-written, compact programs which have fewer errors compared with programs written by programmers working alone.
It reduces the total development time of the whole project.
Customer representatives get an idea of the updated software product after each iteration, so it is easy for them to change any requirement if needed.
Disadvantages:
Due to the lack of formal documents, confusion can arise, and important decisions taken during different phases can be misinterpreted at any time by different team members.
Due to absence of proper documentation, when the project completes and the developers are
assigned to another project, maintenance of the developed project can become a problem.
Agile is a time-bound, iterative approach to software delivery that builds software incrementally
from the start of the project, instead of trying to deliver all at once.
Why Agile?
Technology in the current era is progressing faster than ever, forcing global software companies to work in a fast-paced, changing environment. Because these businesses are operating in
an ever-changing environment, it is impossible to gather a complete and exhaustive set of software
requirements. Without these requirements, it becomes practically hard for any conventional
software model to work.
Conventional software models such as the Waterfall Model, which depend on completely specifying the requirements and then designing and testing the system, are not geared towards rapid software development. As a consequence, a conventional software development model fails to deliver the required product.
This is where agile software development comes to the rescue. It was specially designed to cater to the needs of a rapidly changing environment by embracing the idea of incremental development and evolving the actual final product.
Let's now read about the principles on which Agile has laid its foundation:
Principles:
1. Highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
2. It welcomes changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference to the shortest timescale.
4. Build projects around motivated individuals. Give them the environment and the support they
need, and trust them to get the job done.
5. Working software is the primary measure of progress.
6. Simplicity, the art of maximizing the amount of work not done, is essential.
7. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
Development in Agile: Let’s see a brief overview of how development occurs in Agile philosophy.
In Agile development, design and implementation are considered to be the central activities in the software process.
The design and implementation phase also incorporates other activities, such as requirements elicitation and testing.
In an agile approach, iteration occurs across activities. Therefore, the requirements and the
design are developed together, rather than separately.
The allocation of requirements and the planning, design and development are executed in a series of increments. In contrast to the conventional model, where requirements gathering needs to be completed before proceeding to design and development, this gives Agile development an extra level of flexibility.
An agile process focuses more on code development rather than documentation.
Example: Let's go through an example to understand clearly how Agile actually works.
A Software company named ABC wants to make a new web browser for the latest release of its
operating system. The deadline for the task is 10 months. The company’s head assigned two teams
named Team A and Team B for this task. In order to motivate the teams, the company head says
that the first team to develop the browser would be given a salary hike and a one week full
sponsored travel plan. With the dreams of their wild travel fantasies, the two teams set out on the
journey of the web browser. Team A decided to play by the book and chose the Waterfall model for development. Team B, after a heavy discussion, decided to take a leap of faith and chose Agile as their development model.
Team A, following the Waterfall model, planned long sequential phases, beginning with 1.5 months of requirements gathering. The development plan of Team B was as follows:
Since theirs was an Agile project, the work was broken up into several iterations.
The iterations are all of the same time duration.
At the end of each iteration, a working product with a new feature has to be delivered.
Instead of spending 1.5 months on requirements gathering, they will decide the core features that are required in the product and decide which of these features can be developed in the first iteration.
Any remaining features that cannot be delivered in the first iteration will be delivered in subsequent iterations, based on priority.
At the end of the first iteration, the team will deliver working software with the core basic features.
Both teams put their best efforts into getting the product to a complete stage. But then, out of the blue, due to the rapidly changing environment, the company's head came up with an entirely new set of features, wanted them implemented as quickly as possible, and wanted a working model pushed out in 2 days. Team A was now in a fix: they were still in their design phase and had not yet started coding, so they had no working model to display. Moreover, it was practically impossible for them to implement the new features, since in the waterfall model there is no reverting back to an old phase once you proceed to the next stage; they would have had to start from square one again. That would have incurred heavy cost and a lot of overtime. Team B was ahead of Team A in many respects, all thanks to Agile development. They had a working product with most of the core requirements since the first increment, and it was a piece of cake for them to add the new requirements. All they had to do was schedule these requirements for the next increment and then implement them.
Advantages:
Deployment of software is quicker and thus helps in increasing the trust of the customer.
Can better adapt to rapidly changing requirements and respond faster.
Helps in getting immediate feedback which can be used to improve the software in the next
increment.
People – Not Process. People and interactions are given a higher priority rather than process
and tools.
Continuous attention to technical excellence and good design.
Disadvantages:
In case of large software projects, it is difficult to assess the effort required at the initial
stages of the software development life cycle.
The Agile Development is more code focused and produces less documentation.
Agile development is heavily dependent on the inputs of the customer. If the customer has ambiguity in their vision of the final outcome, it is highly likely for the project to get off track.
Face to Face communication is harder in large-scale organizations.
Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it’s a difficult situation for new programmers to adapt to the
environment.
Agile is a framework which defines how software development needs to be carried out. Agile is not a single method; it represents a collection of methods and practices that follow the
value statements provided in the manifesto. Agile methods and practices do not promise to solve
every problem present in the software industry (No Software model ever can). But they sure help
to establish a culture and environment where solutions emerge.
Extreme Programming (XP)
Extreme programming (XP) is one of the most important software development frameworks among the Agile models. It is used to improve software quality and responsiveness to changing customer requirements. The extreme programming model recommends taking the best practices that have worked well in past program development projects and pushing them to extreme levels.
Good practices to be followed in extreme programming: Some of the good practices that have
been recognized in the extreme programming model and suggested to maximize their use are given
below:
Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of the written code are carried out by a pair of programmers who switch their roles every hour or so.
Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD) to continually write and execute test cases. In the TDD approach, test cases are written even before any code is written (see the small sketch after this list).
Incremental development: Incremental development is very good because customer feedback is gained, and based on it the development team comes up with new increments every few days, after each iteration.
Simplicity: Simplicity makes it easier to develop good quality code as well as to test and
debug it.
Design: Good quality design is important to develop a good quality software. So, everybody
should design daily.
Integration testing: It helps to identify bugs at the interfaces of different functionalities.
Extreme programming suggests that the developers should achieve continuous integration by
building and performing integration testing several times a day.
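To make the test-driven development practice concrete, here is a minimal Python sketch using the standard unittest module (the function name and the business rule are invented for illustration): the tests are written first, and they fail until just enough production code is written to make them pass.

    import unittest

    # Step 1 (TDD): write the test cases before any production code exists.
    class TestDiscount(unittest.TestCase):
        def test_ten_percent_discount_at_or_above_threshold(self):
            self.assertEqual(apply_discount(price=200), 180)

        def test_no_discount_below_threshold(self):
            self.assertEqual(apply_discount(price=50), 50)

    # Step 2: write just enough code to make the tests pass.
    def apply_discount(price):
        return price * 0.9 if price >= 100 else price

    if __name__ == "__main__":
        unittest.main()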
Basic principles of Extreme programming: XP is based on frequent iterations through which the developers implement User Stories. User stories are simple and informal statements of the
customer about the functionalities needed. A User story is a conventional description by the user
about a feature of the required system. It does not mention finer details such as the different
scenarios that can occur. On the basis of User stories, the project team proposes Metaphors.
Metaphors are a common vision of how the system would work. The development team may decide
to build a Spike for some feature. A Spike is a very simple program that is constructed to explore
the suitability of a solution being proposed. It can be considered similar to a prototype. Some of
the basic activities that are followed during software development by using XP model are given
below:
Coding: The concept of coding which is used in XP model is slightly different from traditional
coding. Here, coding activity includes drawing diagrams (modeling) that will be transformed
into code, scripting a web-based system and choosing among several alternative solutions.
Testing: The XP model gives high importance to testing and considers it to be the primary factor in developing fault-free software.
Listening: The developers need to listen carefully to the customers if they have to develop good quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed. So, it is desirable for the programmers to properly understand the functionality of the system, and for this they have to listen to the customers.
Designing: Without a proper design, a system implementation becomes too complex, the solution becomes very difficult to understand, and maintenance therefore becomes expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
Simplicity: The main principle of the XP model is to develop a simple system that will work efficiently at present, rather than trying to build something that would take a long time and may never be used. It focuses on the specific features that are immediately needed, rather than spending time and effort on speculation about future requirements.
Applications of Extreme Programming (XP): Some of the projects that are suitable to develop
using XP model are given below:
Small projects: The XP model is very useful in small projects consisting of small teams, as face-to-face meetings are easier to achieve.
Projects involving new technology or research projects: These types of projects face rapidly changing requirements and unanticipated technical problems, so the XP model is used to complete them.
SDLC V-Model
The V-model is a type of SDLC model where the process executes in a sequential manner in a V-shape. It is also known as the Verification and Validation model. It is based on the association of a testing phase with each corresponding development stage. Each development step is directly associated with a testing phase. The next phase starts only after completion of the previous phase, i.e. for each development activity, there is a testing activity corresponding to it.
Verification: It involves static analysis techniques (reviews) done without executing the code. It is the process of evaluating the products of a development phase to find out whether the specified requirements are met.
Validation: It involves dynamic analysis techniques (functional and non-functional testing) done by executing the code. It is the process of evaluating the software after the completion of the development phase to determine whether the software meets the customer's expectations and requirements.
Design Phase:
Requirement Analysis: This phase contains detailed communication with the customer to
understand their requirements and expectations. This stage is known as Requirement
Gathering.
System Design: This phase contains the system design and the complete hardware and communication setup for developing the product.
Architectural Design: System design is broken down further into modules taking up different
functionalities. The data transfer and communication between the internal modules and with
the outside world (other systems) is clearly understood.
Module Design: In this phase the system is broken down into small modules. The detailed design of the modules is specified; this is also known as Low-Level Design (LLD).
Testing Phases:
Unit Testing: Unit Test Plans are developed during module design phase. These Unit Test
Plans are executed to eliminate bugs at code or unit level.
Integration testing: After completion of unit testing, integration testing is performed. In integration testing, the modules are integrated and the system is tested. Integration testing corresponds to the architectural design phase. This test verifies the communication of the modules among themselves (see the short sketch after this list).
System Testing: System testing tests the complete application with its functionality, interdependency, and communication. It tests the functional and non-functional requirements of the developed application.
User Acceptance Testing (UAT): UAT is performed in a user environment that resembles the production environment. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in the real world.
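The difference between the unit and integration levels can be sketched in a few lines of Python (the billing functions here are invented for illustration):

    # Two small "modules" of a hypothetical billing system.
    def compute_tax(amount):        # module A
        return round(amount * 0.18, 2)

    def total_invoice(amount):      # module B, which depends on module A
        return amount + compute_tax(amount)

    # Unit test: checks one module in isolation (planned during module design).
    assert compute_tax(100) == 18.0

    # Integration test: checks that the modules work together correctly
    # (planned during architectural design).
    assert total_invoice(100) == 118.0

    print("unit and integration checks passed")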
Industrial Challenge: As the industry has evolved, the technologies have become more complex,
increasingly faster, and forever changing, however, there remains a set of basic principles and
concepts that are as applicable today as when IT was in its infancy.
Accurately define and refine user requirements.
Design and build an application according to the authorized user requirements.
Validate that the application they have built adheres to the authorized business requirements.
Principles of V-Model:
Large to Small: In the V-Model, testing is done from a hierarchical perspective. For example, the requirements identified by the project team drive the High-Level Design and Detailed Design phases of the project. As each of these phases is completed, the requirements they define become more and more refined and detailed.
Data/Process Integrity: This principle states that the successful design of any project requires the incorporation and cohesion of both data and processes. Process elements must be identified for each and every requirement.
Scalability: This principle states that the V-Model concept has the flexibility to
accommodate any IT project irrespective of its size, complexity or duration.
Cross Referencing: Direct correlation between requirements and corresponding testing
activity is known as cross-referencing.
Tangible Documentation: This principle states that every project needs to create documentation. This documentation is required and used by both the project development team and the support team. Documentation is used for maintaining the application once it is available in a production environment.
Why preferred?
It is easy to manage due to the rigidity of the model. Each phase of V-Model has specific
deliverables and a review process.
Proactive defect tracking – that is defects are found at early stage.
When to use?
Where requirements are clearly defined and fixed.
The V-Model is used when ample technical resources are available with technical expertise.
Advantages:
This is a highly disciplined model and phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle thereby
enhancing the probability of building an error-free and good quality product.
It enables project management to track progress accurately.
Disadvantages:
High risk and uncertainty.
It is not good for complex and object-oriented projects.
It is not suitable for projects where requirements are not clear and contain a high risk of changing.
This model does not support iteration of phases.
It does not easily handle concurrent events.
Comparison of Life Cycle Models
Classical Waterfall Model: The Classical Waterfall model can be considered as the basic model
and all other life cycle models are based on this model. It is an ideal model. However, the Classical
Waterfall model cannot be used in practical project development, since this model does not
support any mechanism to correct the errors that are committed during any of the phases but
detected at a later phase. This problem is overcome by the Iterative Waterfall model through the
inclusion of feedback paths.
Iterative Waterfall Model: The Iterative Waterfall model is probably the most used software
development model. This model is simple to use and understand. But this model is suitable only for
well-understood problems and is not suitable for the development of very large projects and
projects that suffer from a large number of risks.
Prototyping Model: The Prototyping model is suitable for projects in which either the customer requirements or the technical solutions are not well understood. These risks must be identified
before the project starts. This model is especially popular for the development of the user
interface part of the project.
Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model is widely
used in object-oriented development projects. This model is only used if incremental delivery of
the system is acceptable to the customer.
Spiral Model: The Spiral model is considered as a meta-model as it includes all other life cycle
models. Flexibility and risk handling are the main characteristics of this model. The spiral model is
suitable for the development of technically challenging and large software that is prone to various
risks that are difficult to anticipate at the start of the project. But this model is much more complex than the other models.
Agile Model: The Agile model was designed to incorporate change requests quickly. In this model,
requirements are decomposed into small parts that can be incrementally developed. But the main
principle of the Agile model is to deliver an increment to the customer after each time-box. The end date of an iteration is fixed; it can't be extended. This agility is achieved by removing unnecessary activities that waste time and effort.
Selection of appropriate life cycle model for a project: Selection of proper lifecycle model to
complete a project is the most important task. It can be selected by keeping the advantages and
disadvantages of the various models in mind. The different issues that are analyzed before selecting a suitable life cycle model are given below:
Characteristics of the software to be developed: The choice of the life cycle model largely
depends on the type of the software that is being developed. For small services projects, the
agile model is favored. On the other hand, for product and embedded development, the
Iterative Waterfall model can be preferred. The evolutionary model is suitable to develop an
object-oriented project. User interface part of the project is mainly developed through
prototyping model.
Characteristics of the development team: The team members' skill level is an important factor in deciding the life cycle model to use. If the development team is experienced in developing
similar software, then even an embedded software can be developed using the Iterative
Waterfall model. If the development team is entirely novice, then even a simple data
processing application may require a prototyping model.
Risk associated with the project: If the risks are few and can be anticipated at the start of
the project, then prototyping model is useful. If the risks are difficult to determine at the
beginning of the project but are likely to increase as the development proceeds, then the
spiral model is the best model to use.
Characteristics of the customer: If the customer is not quite familiar with computers, then
the requirements are likely to change frequently as it would be difficult to form complete,
consistent and unambiguous requirements. Thus, a prototyping model may be necessary to
reduce later change requests from the customers. Initially, the customer's confidence in the development team is high. During the lengthy development process, customer confidence normally drops off, as no working software is yet visible. So, the evolutionary model is useful, as the customer can experience partially working software much earlier than the complete software. Another advantage of the evolutionary model is that it reduces the customer's trauma of getting used to an entirely new system.
User Interface
The user interface is the front-end application view with which the user interacts in order to use the software. The software becomes more popular if its user interface is:
Attractive
Simple to use
Responsive in short time
Clear to understand
Consistent on all interface screens
There are two types of User Interface:
1. Command Line Interface: A command line interface provides a command prompt, where the user types a command and feeds it to the system. The user needs to remember the syntax of each command and its use (a small sketch follows this list).
2. Graphical User Interface: A graphical user interface provides a simple interactive interface for interacting with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user interacts with the software without having to remember command syntax.
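For instance, a small command line tool written with Python's standard argparse module illustrates why the user must remember the command syntax; a GUI would expose the same options as buttons and input fields instead (the tool and its flags are hypothetical):

    import argparse

    # The user must remember: python backup.py <source> --dest <dir> --verbose
    parser = argparse.ArgumentParser(description="Back up a directory.")
    parser.add_argument("source", help="directory to back up")
    parser.add_argument("--dest", default="/tmp/backup", help="destination directory")
    parser.add_argument("--verbose", action="store_true", help="print progress")
    args = parser.parse_args()

    if args.verbose:
        print(f"Backing up {args.source} to {args.dest}")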
User Interface Design Process:
The analysis and design process of a user interface is iterative and can be represented by a spiral
model. The analysis and design process of user interface consists of four framework activities.
1. User, task, environmental analysis, and modeling: Initially, the focus is on the profile of the users who will interact with the system, i.e. their understanding, skill, knowledge and type. Based on the users' profiles, users are grouped into categories, and from each category requirements are gathered. Based on the requirements, the developer understands how to develop the interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user performs to establish the goals of the system are identified, described and elaborated. The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
Reduce the user's memory load:
Reduce demand on short-term memory: When users are involved in some complex tasks the
demand on short-term memory is significant. So the interface should be designed in such a
way to reduce the remembering of previously done actions, given inputs and results.
Establish meaningful defaults: An initial set of defaults should always be provided for the average user; if a user needs to add some new features, then he should be able to add the required features.
Define shortcuts that are intuitive: Mnemonics should be intuitive for the user. Mnemonics are the keyboard shortcuts used to perform some action on the screen.
The visual layout of the interface should be based on a real-world metaphor: If anything represented on the screen is a metaphor for a real-world entity, users will understand it easily.
Disclose information in a progressive fashion: The interface should be organized
hierarchically i.e. on the main screen the information about the task, an object or some
behavior should be presented first at a high level of abstraction. More detail should be
presented after the user indicates interest with a mouse pick.
Make the interface consistent:
Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they navigated to the current page, and where they can navigate to from the current page.
Maintain consistency across a family of applications: When a set of applications is developed, all of them should follow and implement the same design rules so that consistency is maintained among the applications.
If past interactive models have created user expectations, do not make changes unless there is a compelling reason.
Software Design
Introduction: The purpose of the Design phase in the Software Development Life Cycle is to produce a solution to the problem given in the SRS (Software Requirement Specification) document. The output of the design phase is the Software Design Document (SDD).
Basically, design is a two-part iterative process. First part is Conceptual Design that tells the
customer what the system will do. Second is Technical Design that allows the system builders to
understand the actual hardware and software needed to solve customer’s problem.
Coupling: Coupling is the measure of the degree of interdependence between modules. Two modules with high coupling are strongly interconnected and thus depend heavily on each other. A good software design will have low coupling.
Types of Coupling (a short code sketch contrasting some of these levels follows the list):
Data Coupling: If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don't contain tramp data. Example: a customer billing system.
Stamp Coupling: In stamp coupling, a complete data structure is passed from one module to another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors; this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they are
said to be control coupled. It can be bad if parameters indicate completely different
behavior and good if parameters allow factoring and reuse of functionality. Example- sort
function that takes comparison function as an argument.
External Coupling: In external coupling, the modules depend on other modules, external to
the software being developed or to a particular type of hardware. Ex- protocol, external file,
device format, etc.
Common Coupling: The modules have shared data, such as global data structures. Changes in global data mean tracing back to all modules which access that data to evaluate the effect of the change. So it has disadvantages like difficulty in reusing modules, reduced ability to control data accesses, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of another module or
control flow is passed from one module to the other module. This is the worst form of
coupling and should be avoided.
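The following short Python sketch (the function names are invented for illustration) contrasts three of these coupling levels:

    # Data coupling (good): modules share only the data they need.
    def compute_interest(principal, rate):
        return principal * rate

    # Control coupling (weaker): a flag tells the called module how to behave.
    def print_report(records, summary_only):
        if summary_only:
            print(f"{len(records)} records")
        else:
            print("\n".join(map(str, records)))

    # Common coupling (bad): modules communicate through shared global state.
    CURRENT_USER = {"name": "guest"}

    def greet():
        # Any module anywhere may have changed CURRENT_USER before this runs.
        print("Hello,", CURRENT_USER["name"])

    print(compute_interest(1000, 0.05))         # 50.0
    print_report([1, 2, 3], summary_only=True)  # 3 records
    greet()                                     # Hello, guest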
Cohesion: Cohesion is a measure of the degree to which the elements of a module are functionally related. It is the degree to which all elements directed towards performing a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high cohesion (a short sketch contrasting the best and worst forms follows the list below).
Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is contained in the
component. A functional cohesion performs the task and functions. It is an ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for another element, i.e., data flows between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or contribute towards the same output data. Example: update a record in the database and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions
are still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print
student record, calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by the timing involved: in a module with temporal cohesion, all the tasks must be executed within the same time span. This kind of cohesion occurs, for example, in code for initializing all the parts of the system, where lots of different activities occur, all at init time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A component
reads inputs from tape, disk, and network. All the code for these functions is in the same
component. Operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are not related (unrelated). The elements have no
conceptual relationship other than location in source code. It is accidental and the worst form
of cohesion. Ex- print next line and reverse the characters of a string in a single component.
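As a minimal illustration (hypothetical names), compare a functionally cohesive component with a coincidentally cohesive one:

# Functional cohesion: every statement contributes to one computation.
def gpa(grades: list) -> float:
    return sum(grades) / len(grades)

# Coincidental cohesion: unrelated tasks bundled together only by their
# location in the source code -- the worst form, to be avoided.
def misc_utils(line: str, text: str):
    printed = line.strip()       # "print next line" style task
    reversed_text = text[::-1]   # reverse a string: an unrelated task
    return printed, reversed_text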
In a large organisation, the database system is typically part of the information system, which includes all the resources involved in the collection, management, use and dissemination of the information resources of the organisation. In today's world these resources include the data itself, the DBMS software, the computer system software and storage media, the people who use and manage the data, and the application programmers who develop the applications. Thus the database system is part of a much larger organizational information system.
Here we discuss the typical life cycle of an information system and how the database fits into this life cycle. The information system life cycle is also known as the macro life cycle.
This cycle typically includes the following phases:
1. Feasibility Analysis –
This phase is basically concerned with the following points:
(a) Analyzing potential application areas.
(b) Identifying the economics of information gathering.
(c) Performing preliminary cost-benefit studies.
(d) Determining the complexity of data and processes.
(e) Setting up priorities among applications.
2. Requirements Collection and Analysis –
This phase basically involves the following:
(a) Detailed requirements are collected by interacting with potential users and groups to identify their particular problems and needs.
(b) Inter-application dependencies are identified.
(c) Communication and reporting procedures are identified.
3. Design –
This phase has the following two aspects:
(a) Design of the database.
(b) Design of the application systems that use and process the database.
4. Implementation –
In this phase the following steps are carried out:
(a) The information system is implemented.
(b) The database is loaded.
(c) The database transactions are implemented and tested.
5. Validation and Acceptance Testing –
The acceptability of the system in meeting users' requirements and performance criteria is validated. The system is tested against the performance criteria and behaviour specifications.
6. Deployment, Operation and Maintenance –
This may be preceded by conversion of users from an older system as well as by user training. The operational phase starts when all system functions are operational and have been validated. As new requirements or applications crop up, they pass through all the previous phases until they are validated and incorporated into the system. Monitoring and system maintenance are important activities during the operational phase.
1. Real-world requirements
2. Analyzing the real-world requirements
3. To design the data and functions of the system
4. Implementing the operations in the system.
Activities related to the database application system (micro) life cycle include the following:
1. System Definition –
The scope of the database system, its users and its applications are defined. The interfaces for various categories of users, the response time constraints, and the storage and processing needs are identified.
2. Database design –
At the end of this phase a complete logical and physical design of the database system on the
chosen DBMS is ready.
3. Database implementation –
This comprises the process of specifying the conceptual, external and internal database definitions, creating empty database files, and implementing the software applications.
4. Loading or data conversion –
The database is populated either by loading the data directly or by converting existing files into
database system format.
5. Application conversion –
Any software applications from a previous system are converted to the new system.
6. Testing and Validation –
The new system is tested and validated. Testing and validation of application programs can be a very involved process, and the techniques employed are usually covered in software engineering courses. Automated tools may assist in this process.
7. Operation –
The database system and its applications are put into operation. Usually the old and the new systems are operated in parallel for some time.
8. Monitoring and Maintenance –
The system is constantly monitored and maintained during the operational phase. Growth and expansion can occur in both data content and software applications. The database may need to be modified and reorganized from time to time.
Activities 2, 3 and 4 are part of the design and implementation phase of the larger information system life cycle. Most databases in organizations undergo all the preceding life cycle activities. The conversion activities (4 and 5) are not applicable when both the database and the applications are new. When an organization moves from an established system to a new one, activities 4 and 5 tend to be the most time-consuming, and the effort to accomplish them is often underestimated. In general there is often feedback among the various steps, because new requirements frequently arise at every stage.
The Pham-Nordmann-Zhang (PNZ) model is used to evaluate the reliability prediction of a component-based system or software and of fault-tolerance structure techniques. PNZ is considered to be one of the best models and is based on the non-homogeneous Poisson process (NHPP).
The goal is to produce a reliability prediction tool using PNZ models, based on reliability predictions and careful analysis of the sensitivity of the various models. PNZ therefore enables us to analyse how much the reliability of a software system can be improved by using fault-tolerance structure techniques, which are discussed later in this section.
Project Management is the application of knowledge, skills, tools and techniques to project
activities to meet the project requirements.
Feasibility Study:
A feasibility study explores system requirements to determine project feasibility. There are several fields of feasibility study, including economic feasibility, operational feasibility and technical feasibility. The goal is to determine whether the system can be implemented or not. The process of the feasibility study takes as input the requirement details as specified by the user and other domain-specific details. The output of this process simply tells whether the project should be undertaken or not and, if yes, what the constraints would be. Additionally, all the risks and their potential effects on the project are evaluated before a decision to start the project is taken.
Project Planning:
A detailed plan stating stepwise strategy to achieve the listed objectives is an integral part of any
project.
Planning consists of the following activities:
Set objectives or goals
Develop strategies
Develop project policies
Determine courses of action
Make planning decisions
Set procedures and rules for the project
Develop a software project plan
Prepare budget
Conduct risk management
Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes size, effort, schedule and cost estimation using various techniques.
Project Execution:
A project is executed by choosing an appropriate software development life cycle (SDLC) model. Execution includes a number of steps, such as requirements analysis, design, coding, testing, implementation, delivery and maintenance. There are a number of factors that need to be considered while doing so, including the size of the system, the nature of the project, time and budget constraints, domain requirements, etc. An inappropriate SDLC can lead to failure of the project.
Project Termination:
There can be several reasons for the termination of a project. Though expecting a project to terminate after successful completion is conventional, at times a project may also terminate without completion. Projects have to be closed down when the requirements are not fulfilled within the given time and cost constraints.
Some of the reasons for failure include:
Fast changing technology
Project running out of time
Organizational politics
Too much change in customer requirements
Project exceeding budget or funds
Once the project is terminated, a post-performance analysis is done, and a final report is published describing the experiences, lessons learned and recommendations for handling future projects.
Estimation of the size of software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time which will be needed to build the
project. Various measures are used in project size estimation. Some of these are:
Lines of Code
Number of entities in ER diagram
Total number of processes in detailed data flow diagram
Function points
1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project. The units of LOC are:
KLOC- Thousand lines of code
NLOC- Non-comment lines of code
KDSI- Thousands of delivered source instructions
The size is estimated by comparing the project with existing systems of the same kind. Experts use this to predict the required size of the various components of the software and then add them up to get the total size; a rough counting sketch is given after the lists below.
Advantages:
Universally accepted and used in many models like COCOMO.
Estimation is closer to the developer's perspective.
Simple to use.
Disadvantages:
Different programming languages contain different numbers of lines for the same functionality.
No proper industry standard exists for this technique.
It is difficult to estimate size using this technique in the early stages of a project.
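As noted above, here is a rough LOC-counting sketch in Python. It is a simplification: it treats only lines starting with '#' as comments and ignores docstrings and multi-line constructs, which real LOC tools handle.

def count_loc(path: str) -> dict:
    total = nloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue            # blank lines are not counted
            total += 1
            if not stripped.startswith("#"):
                nloc += 1           # non-comment line of code
    return {"LOC": total, "NLOC": nloc, "KLOC": total / 1000}

# Example: count_loc("billing.py") might return
# {'LOC': 1200, 'NLOC': 950, 'KLOC': 1.2}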
2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes the entities and their relationships. The number of entities in the ER model can be used to estimate the size of the project, since more entities need more classes/structures, leading to more code.
Advantages:
Size estimation can be done during initial stages of planning.
Number of entities is independent of programming technologies used.
Disadvantages:
No fixed standards exist; some entities contribute more to project size than others.
Just like FPA, it is less used in cost estimation models. Hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: The Data Flow Diagram (DFD) represents the functional view of a software system. The model depicts the main processes/functions involved in the software and the flow of data between them. The number of processes in the DFD is used to predict software size: already existing processes of a similar type are studied to estimate the size of each process, and the sum of the estimated sizes of all processes gives the final estimated size.
Advantages:
It is independent of programming language.
Each major process can be decomposed into smaller processes. This increases the accuracy of the estimation.
Disadvantages:
Studying similar kinds of processes to estimate size takes additional time and effort.
Not all software projects require the construction of a DFD.
4. Function Point Analysis: In this method, the number and type of functions supported by the software are used to find the FPC (function point count). The steps in function point analysis are:
Count the number of functions of each proposed type.
Compute the Unadjusted Function Points (UFP).
Find the Total Degree of Influence (TDI).
Compute the Value Adjustment Factor (VAF).
Find the Function Point Count (FPC).
The above steps are explained below:
Count the number of functions of each proposed type: Find the number of functions
belonging to the following types:
External Inputs: Functions related to data entering the system.
External Outputs: Functions related to data leaving the system.
External Inquiries: Functions where an input causes an immediate output, without updating stored data.
Compute the Unadjusted Function Points (UFP): Multiply the count of each function type by the weight for its complexity level and sum the results. The standard weights are:
FUNCTION TYPE        SIMPLE  AVERAGE  COMPLEX
External Inputs         3       4        6
External Outputs        4       5        7
External Inquiries      3       4        6
Find Total Degree of Influence: Use the ’14 general characteristics’ of a system to find the
degree of influence of each of them. The sum of all 14 degrees of influences will give the
TDI. The range of TDI is 0 to 70. The 14 general characteristics are: Data Communications,
Distributed Data Processing, Performance, Heavily Used Configuration, Transaction Rate, On-
Line Data Entry, End-user Efficiency, Online Update, Complex Processing, Reusability,
Installation Ease, Operational Ease, Multiple Sites and Facilitate Change.
Each of the above characteristics is evaluated on a scale of 0-5.
Compute Value Adjustment Factor(VAF): Use the following formula to calculate VAF
VAF = (TDI * 0.01) + 0.65
Find the Function Point Count: Use the following formula to calculate FPC
FPC = UFP * VAF
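Putting the steps together, here is a minimal Python sketch of the computation. The weights are the standard ones from the table above, extended with the usual values for internal logical files and external interface files.

WEIGHTS = {                       # (simple, average, complex)
    "EI":  (3, 4, 6),             # External Inputs
    "EO":  (4, 5, 7),             # External Outputs
    "EQ":  (3, 4, 6),             # External Inquiries
    "ILF": (7, 10, 15),           # Internal Logical Files
    "EIF": (5, 7, 10),            # External Interface Files
}

def function_point_count(counts: dict, degrees_of_influence: list) -> float:
    # counts maps a function type to (simple, average, complex) occurrence counts
    ufp = sum(n * w for t, ns in counts.items()
                    for n, w in zip(ns, WEIGHTS[t]))
    tdi = sum(degrees_of_influence)     # 14 ratings, each 0-5, so TDI is 0-70
    vaf = (tdi * 0.01) + 0.65
    return ufp * vaf

# Example: 10 simple inputs and 5 average outputs, with all 14 general
# characteristics rated 3: UFP = 55, TDI = 42, VAF = 1.07, FPC = 58.85
print(function_point_count({"EI": (10, 0, 0), "EO": (0, 5, 0)}, [3] * 14))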
Advantages:
It can be easily used in the early stages of project planning.
It is independent of the programming language.
It can be used to compare different projects even if they use different technologies (database, language, etc.).
Disadvantages:
It is not good for real time systems and embedded systems.
Many cost estimation models like COCOMO use LOC, and hence FPC must be converted to LOC.
Whenever software is built, there is always scope for improvement, and those improvements bring changes into the picture. Changes may be required to modify or update an existing solution or to create a new solution for a problem. Requirements keep changing on a daily basis, so we need to keep upgrading our systems based on the current requirements and needs to meet the desired outputs. Changes should be analyzed before they are made to the existing system, recorded before they are implemented, reported to provide details of before and after, and controlled in a manner that will improve quality and reduce errors. This is where the need for Software Configuration Management comes in.
Suppose after some changes, the version of configuration object changes from 1.0 to 1.1.
Minor corrections and changes result in versions 1.1.1 and 1.1.2, which is followed by a major
update that is object 1.2. The development of object 1.0 continues through 1.3 and 1.4, but
finally, a noteworthy change to the object results in a new evolutionary path, version 2.0.
Both versions are currently supported.
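The evolution described above can be pictured as a small version tree; a minimal sketch (illustrative only):

# Each version points to the versions derived from it.
version_tree = {
    "1.0":   ["1.1"],
    "1.1":   ["1.1.1", "1.2"],   # minor corrections branch off 1.1
    "1.1.1": ["1.1.2"],
    "1.2":   ["1.3"],
    "1.3":   ["1.4"],
    "1.4":   ["2.0"],            # a noteworthy change starts a new path
}

def descendants(tree: dict, version: str) -> list:
    out = []
    for child in tree.get(version, []):
        out.append(child)
        out.extend(descendants(tree, child))
    return out

print(descendants(version_tree, "1.1"))
# ['1.1.1', '1.1.2', '1.2', '1.3', '1.4', '2.0']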
3. Change control – Controlling changes to Configuration Items (CI). The change control process works as follows:
A change request (CR) is submitted and evaluated to assess technical merit, potential side effects, overall impact on other configuration objects and system functions, and the projected cost of the change. The results of the evaluation are presented as a change report, which is used by a change control board (CCB), a person or group who makes a final decision on the status and priority of the change. An Engineering Change Request (ECR) is generated for each approved change.
The CCB also notifies the developer, with proper reasons, if the change is rejected. The ECR describes the change to be made, the constraints that must be respected, and the criteria for review and audit. The object to be changed is "checked out" of the project database, the change is made, and then the object is tested again. The object is then "checked in" to the database, and appropriate version control mechanisms are used to create the next version of the software.
Software configuration management tools include CLEAR CASE TOOL (CC), SaltStack, CLEAR QUEST TOOL, Puppet, SVN (Subversion), Perforce, TortoiseSVN, IBM Rational Team Concert, IBM Configuration Management Version Management, Razor, Ansible, etc. There are many more in the list.
It is recommended that, before selecting any configuration management tool, you gain a proper understanding of its features, select the tool that best suits your project's needs, and be clear about the benefits and drawbacks of each before choosing one to use.
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e. the number of Lines of Code. It is a procedural cost estimation model for software projects, often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time and quality. It was proposed by Barry Boehm in 1981, based on a study of 63 projects, which makes it one of the best-documented models.
The key parameters which define the quality of any software products, which are also an outcome
of the Cocomo are primarily Effort & Schedule:
Effort: Amount of labor that will be required to complete a task. It is measured in person-
months units.
Schedule: Simply the amount of time required to complete the job, which is, of course, proportional to the effort applied. It is measured in units of time such as weeks or months.
Different models of COCOMO have been proposed to predict the cost estimation at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants to be used in subsequent calculations. These characteristics of the different system types are mentioned below.
1. Organic – A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past and also the
team members have a nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the vital
characteristics such as team-size, experience, knowledge of the various programming
environment lie in between that of organic and Embedded. The projects classified as Semi-
Detached are comparatively less familiar and difficult to develop compared to the organic
ones and require more experience and better guidance and creativity. Eg: Compilers or
different Embedded Systems can be considered of Semi-Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity and experience falls under this category. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort Calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and accurate
forms. Any of the three forms can be adopted according to our requirements. These are types of
COCOMO model:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
The first level, Basic COCOMO can be used for quick and slightly rough calculations of Software
Costs. Its accuracy is somewhat restricted due to the absence of sufficient factor considerations.
Intermediate COCOMO takes these Cost Drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases, i.e. in the case of the Detailed model both the cost drivers are considered and the calculations are performed phase-wise, hence producing a more accurate result. These models are discussed further below.
Estimation of Effort: Calculations –
1. Basic Model –
The Basic COCOMO effort and schedule equations are:

Effort (E) = a*(KLOC)^b person-months
Time (T) = c*(E)^d months

These formulas are used for the cost estimation of the Basic COCOMO model, and are also used in the subsequent models. The standard constant values of a, b, c and d for the different categories of system are:

SOFTWARE PROJECTS    a     b     c     d
Organic             2.4   1.05  2.5   0.38
Semi-detached       3.0   1.12  2.5   0.35
Embedded            3.6   1.20  2.5   0.32

The effort is measured in person-months and, as evident from the formula, depends on kilo-lines of code. These formulas are used as such in the Basic Model calculations; since not much consideration of different factors such as reliability and expertise is taken into account, the estimate is rough.
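For illustration, a minimal Python sketch of the Basic model using the standard constants above:

BASIC = {                      # a, b (effort); c, d (schedule)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    a, b, c, d = BASIC[mode]
    effort = a * kloc ** b     # person-months
    time = c * effort ** d     # months
    return effort, time

effort, time = basic_cocomo(32, "organic")
print(f"Effort ~ {effort:.1f} PM, schedule ~ {time:.1f} months")
# ~91.3 PM and ~13.9 months for a 32 KLOC organic project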
2. Intermediate Model –
The Basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants evaluated according to the type of software system. In reality, however, no system's effort and schedule can be calculated solely on the basis of lines of code; various other factors such as reliability, experience and capability must also be considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation.
The 15 cost drivers fall into four categories, each driver being rated on a scale from Very Low to Very High:
Product Attributes: required software reliability, size of the application database, complexity of the product.
Hardware Attributes: run-time performance constraints, memory constraints, volatility of the virtual machine environment, required turnaround time.
Personnel Attributes: analyst capability, applications experience, software engineer capability, virtual machine experience, programming language experience.
Project Attributes: use of software tools, application of software engineering methods, required development schedule.
The project manager rates these 15 different parameters for a particular project on a scale of one to three. Then, depending on these ratings, the appropriate cost driver values are taken from the above table. These 15 values are then multiplied to calculate the EAF (Effort Adjustment Factor). The Intermediate COCOMO formula now takes the form:

Effort (E) = a*(KLOC)^b * EAF person-months

with the standard constant values:

SOFTWARE PROJECTS    a     b
Organic             3.2   1.05
Semi-detached       3.0   1.12
Embedded            2.8   1.20
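A minimal sketch of the Intermediate calculation follows; the two non-nominal multiplier values used in the example are illustrative, not the full published driver tables.

INTERMEDIATE = {               # a, b
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_effort(kloc: float, mode: str, multipliers: list) -> float:
    a, b = INTERMEDIATE[mode]
    eaf = 1.0
    for m in multipliers:      # one multiplier per cost driver
        eaf *= m
    return a * kloc ** b * eaf # person-months

# Example: 13 drivers nominal (1.0), one rated 1.15 and one 1.13,
# inflating the nominal estimate by about 30%.
print(intermediate_effort(32, "organic", [1.15, 1.13] + [1.0] * 13))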
3. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with an
assessment of the cost driver’s impact on each step of the software engineering process. The
detailed model uses different effort multipliers for each cost driver attribute. In Detailed COCOMO, the whole software is divided into different modules; COCOMO is then applied to each module to estimate effort, and the module efforts are summed.
The six phases of Detailed COCOMO are planning and requirements, system design, detailed design, module code and test, integration and test, and cost constructive model.
Capability Maturity Model (CMM)
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in 1987.
It is not a software process model. It is a framework which is used to analyse the approach
and techniques followed by any organization to develop a software product.
It also provides guidelines to further enhance the maturity of those software products.
It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
This model describes a strategy that should be followed by moving through 5 different levels.
Each level of maturity shows a process capability level. All the levels except level-1 are
further described by Key Process Areas (KPA’s).
Key Process Areas (KPA’s):
Each of these KPA’s defines the basic requirements that should be met by a software process in
order to satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project
and establish a context in which technical methods are applied, work products like models,
documents, data, reports, etc. are produced, milestones are established, quality is ensured and
change is properly managed.
Level-1: Initial –
No KPA’s defined.
Processes followed are ad hoc and immature and are not well defined.
Unstable environment for software development.
No basis for predicting product quality, time for completion, etc.
Level-2: Repeatable –
Focuses on establishing basic project management policies.
Experience with earlier projects is used for managing new projects of a similar nature.
KPA’s:
Project Planning- It includes defining resources required, goals, constraints, etc. for the
project. It presents a detailed plan to be followed systematically for successful completion
of a good quality software.
Level-3: Defined –
At this level, the processes for both management and development activities are defined, documented and standardized across the organization.
KPA's:
Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
Intergroup Coordination- It consists of planned interactions between different development
teams to ensure efficient and proper fulfilment of customer needs.
Organization Process Definition- Its key focus is on the development and maintenance of the standard development processes.
Organization Process Focus- It includes activities and practices that should be followed to
improve the process capabilities of an organization.
Training Programs- It focuses on the enhancement of knowledge and skills of the team
members including the developers and ensuring an increase in work efficiency.
Level-4: Managed –
At this stage, quantitative quality goals are set for the organization for software products as
well as software processes.
The measurements made help the organization to predict the product and process quality
within some limits defined quantitatively.
KPA's:
Quantitative Process Management- Quantitative goals are set for the performance of the software process.
Software Quality Management- Quantitative goals are set for the quality of the software products.
Risk Management in the SDLC
The software development life cycle (SDLC) is a conceptual model for defining the tasks performed at each step of the software development process. Though there are various models for the SDLC, in general it comprises the following steps:
1. Preliminary Analysis
2. System analysis and Requirement definition
3. System Design
4. Development
5. Integration and System Testing
6. Installation, Operation and Acceptance Testing
7. Maintenance
8. Disposal
We will discuss these steps in brief, and how risk assessment and management are incorporated into them to reduce the risk in the software being developed.
1. Preliminary analysis:
In this step you need to find out:
1. The organization’s objective
2. Nature and scope of problem under study
3. Propose alternative solutions and proposals after gaining a deep understanding of the problem and of what competitors are doing
4. Describe costs and benefits.
Support from Risk Management Activities –
1. Establish a process and responsibilities for risk management
2. Document Initial known risks
3. Project Manager should prioritize the risks
2. System analysis and requirement definition:
This step is very important for a clear understanding of customer expectations and requirements. It is therefore very important to conduct this phase with the utmost care and to give it due time, as any error here can cause the failure of the entire process. The following series of steps are conducted during this stage:
1. End user requirements are obtained through documentation, client interviews, observation
and questionnaires
2. The pros and cons of the current system are identified, so as to avoid the cons and carry forward the pros in the new system.
3. Specific user proposals are used to prepare the specifications, and solutions for the shortcomings discovered in step two are found.
In general this step involves following risk management activities.
We will now discuss these sub-phases in a bit more detail, along with the risk factors involved in each.
4. Feasibility Study – This is the first and most important phase. Often this phase is conducted as a standalone phase in big projects, not as a sub-phase under the requirement definition phase. This phase allows the team to get an estimate of the major risk factors, cost and time for a given project. Why is this so important? The feasibility study helps us get an idea of whether it is worthwhile to construct the system, and it helps to identify the main risk factors.
Risk Factors –
Following is the list of risk factors for the feasibility study phase:
Project managers often make mistakes in estimating the cost, time, resources and scope of the project. An unrealistic budget, unrealistic schedule, inadequate resources and an unclear scope often lead to project failure.
Unrealistic budget: As discussed above, inaccurate estimation of the budget may lead to the project running out of funds early in the SDLC. Accurate budget estimation is directly related to correct knowledge of time, effort and resources.
Unrealistic schedule: Incorrect time estimation puts a burden on developers, pressed by project managers to deliver the project on time. This compromises the overall quality of the project and thus makes the system less secure and more vulnerable.
Insufficient resources: In some cases the technology and tools available are not up to date enough to meet the project requirements, or the resources (people, tools, technology) available are not enough to complete the project. In either case the project will get delayed, or in the worst case it may fail.
Unclear project scope: A clear understanding of what the project is supposed to do, which functionalities are important, which functionalities are mandatory and which can be considered extras is very important for project managers. Insufficient knowledge of the system may lead to project failure.
5. Requirement Elicitation – This starts with an analysis of the application domain. This phase requires participation from different stakeholders to ensure efficient, correct and complete gathering of the system services, its performance and its constraints. This data set is then reviewed and articulated to make it ready for the next phase.
Risk Factors –
Incomplete requirements: In 60% of cases users are unable to state all requirements in the beginning; requirements therefore have the most dynamic nature in the complete SDLC process. If any of the user needs, constraints or other functional/non-functional requirements are not covered, the requirement set is said to be incomplete.
Inaccurate requirements: If the requirement set does not reflect real user needs, the requirements are said to be inaccurate.
Unclear requirements: Often in the SDLC process there exists a communication gap between users and developers, which ultimately affects the requirement set. If the requirements stated by users are not understandable by analysts and developers, these requirements are said to be unclear.
Ignoring non-functional requirements: Sometimes developers and analysts ignore the fact that non-functional requirements hold equal importance to functional requirements. In this confusion they focus on delivering what the system should do rather than on how the system should be: scalability, maintainability, testability, etc.
Conflicting user requirements: Multiple users of a system may have different requirements; if these are not listed and analysed carefully, they may lead to inconsistency in the requirements.
Gold plating: It is very important to list out all requirements in the beginning. Gold plating is nothing but adding extra functionality to the system that was not considered earlier; adding requirements later during development may introduce threats and make the system vulnerable.
Unclear description of the real operating environment: Insufficient knowledge of the real operating environment leads to certain missed vulnerabilities, so threats remain undetected until later stages of the software development life cycle.
6. Requirement Analysis Activity – In this step the requirements gathered by interviewing users, by brainstorming or by other means are first analysed, then classified and organised into groups such as functional and non-functional requirements, and then prioritized to get a better knowledge of which requirements are of high priority and definitely need to be present in the system. After all these steps the requirements are negotiated.
Risk Factors –
The risk factors in this step include the following:
Non-verifiable requirements: If a finite, cost-effective process such as testing or inspection is not available to check whether the software meets a requirement, that requirement is said to be non-verifiable.
Infeasible requirement: If sufficient resources are not available to successfully implement a requirement, it is said to be infeasible.
Inconsistent requirement: If a requirement contradicts any other requirement, it is said to be inconsistent.
Non-traceable requirement: It is very important for every requirement to have an origin source. During documentation it is necessary to record the origin source of each requirement, so that it can be traced back in the future when required.
Unrealistic requirement: Only a requirement that meets all the above criteria, i.e. one that is complete, accurate, consistent, traceable and verifiable, is realistic enough to be documented and implemented; a requirement that fails them is unrealistic.
7. Requirement Validation Activity – This involves validating the requirements that have been gathered and analysed so far, to check whether they actually define what the user wants from the system.
Risk Factors –
3. Design:
This phase translates the requirement definition (RD) into a design of the system. Its activities, along with their risk factors, are discussed below.
RD is not clear to developers: It is necessary for the developers to be involved in the requirements definition and analysis phase; otherwise they won't have a good understanding of the system to be developed and will be unable to start designing on a solid understanding of its requirements. Hence they will end up creating a design for a system other than the intended one.
2. Choosing the architectural design method activity – This is a method to decompose the system into components; thus it is a way to define the software system's components. There exist many methods for architectural design, such as structured design, object-oriented design, Jackson System Development and formal methods, but there is no standard architectural design method.
Risk Factors –
Improper architectural design method: As discussed above, there is no standard architectural design method; one can choose the most suitable method depending upon the project's needs, but the method must be chosen with utmost care. If chosen incorrectly, it may result in problems in system implementation and integration. Even if implementation and integration are successful, it may turn out that the architectural design does not work successfully on the current machine. The choice of programming language depends upon the architectural model chosen.
3. Choosing the programming language activity – Choosing the programming language should be done side by side with choosing the architectural method, as the programming language must be compatible with the architectural method chosen.
Risk Factors –
Improper choice of programming language: An incorrect choice of programming language may not support the chosen architectural method, and may thus reduce the maintainability and portability of the system.
4. Constructing physical model activity – The physical model, consisting of symbols, is a simplified description of the hierarchically organized system.
Risk Factors –
Complex system: If the system to be developed is very large and complex, it will create problems for developers, who may get confused and be unable to work out where to start and how to decompose such a large and complex system into components.
Complicated design: For a large, complex system, confusion and a lack of sufficient skills may lead developers to create a complicated design that will be difficult to implement.
Large size components: Large components that are further decomposable into sub-components may suffer difficulty in implementation, and it is also difficult to assign functions to such components.
Unavailability of expertise for reusability: A lack of proper expertise to determine which components can be reused poses a serious risk to the project, as developing components from scratch takes a lot of time in comparison to reusing components, thus delaying project completion.
Less reusable components: Incorrect estimation of reusable components during the analysis phase leads to two serious risks to the project: delay in project completion, and budget overrun. Developers may be surprised to find that a percentage of the code that was considered ready needs to be rewritten from scratch, which will eventually make the project budget overrun.
5. Verifying design activity – Verifying the design means ensuring that the design is the correct solution for the system under construction and that it meets all user requirements.
Risk Factors –
Difficulties in verifying design against requirements: Sometimes it is quite difficult for the developer to check whether the proposed design meets all user requirements. To make sure the design is the correct solution for the system, it must meet all requirements.
Many feasible solutions: When verifying the design, the developer may come across many alternative solutions to the same problem. Choosing the best possible design that meets all requirements is therefore difficult; the choice depends upon the system and its nature.
Incorrect design: While verifying the design, it may turn out that the proposed design matches only a few requirements, or no requirements at all; it may even be a completely different design.
6. Specifying design activity – This activity involves the following main tasks:
1. Identify the components and define the data flow between them.
2. For each identified component, state its function, data input, data output and resource utilization.
Risk Factors –
Difficulty in allocating functions to components: Developers may face difficulty in allocating functions to components in two cases: first, when the system is not decomposed correctly; and second, when the requirement documentation is not done properly, in which case developers will find it difficult to identify functions for the components, since the functional requirements constitute the functions of the components.
Extensive specification: Extensive specification of module processing should be avoided, to keep the design document as small as possible.
Omitting data processing functions: Data processing functions like create and read are the operations that components perform on data. Accidental omission of these functions should be avoided.
7. Documenting design activity – In this phase the design document (DD) is prepared. This helps to control and coordinate the project during implementation and other phases.
Risk Factors –
Incomplete DD: The design document should be detailed enough to explain each component, sub-component and sub-sub-component in full detail, so that developers can work independently on different modules. If the DD lacks this, programmers cannot work independently.
Inconsistent DD: If the same function is carried out by more than one component, the result is redundancy in the design document, which will eventually make the document inconsistent.
Unclear DD: If the design document does not clearly define the components, or is written in uncommon natural language, it may be difficult for the developers to understand the proposed design.
Large DD: The design document should be detailed enough to list all components with full details of their functions, inputs, outputs, required resources, etc., but it should not contain unnecessary information. An overly large design document will be difficult for the programmers to understand.
4. Development:
This stage involves the actual coding of the software as per the requirements agreed upon between the developer and the client.
Support from Risk Management Activities –
All designed controls are implemented during this stage.
1. Coding Activity – This step involves writing the source code for the system to be developed. User interfaces are developed in this step, and each developed module is then tested in the unit testing step.
Risk Factors –
Unclear design document: If the design document is large and unclear, it will be difficult for the programmer to understand it and to work out where to start coding.
Lack of independent working environment: Due to an unclear and incomplete design document, it is difficult to assign independent modules to the team of developers.
Wrong user interface and user functions developed: Incomplete, inconsistent and unclear design documents lead to wrongly implemented user interfaces and functions. A poor user interface reduces the acceptability of the system among customers in the real environment.
Programming language incompatible with architectural design: The choice of architectural method drives the choice of programming language; they must be chosen in sequence, otherwise an incompatible programming language may not work with the chosen method.
Repetitive code: In large projects the need arises to write the same piece of code again and again. This consumes a lot of time and also increases the number of lines of code.
Modules developed by different programmers: In large projects, modules are divided among the programmers, but different programmers have different styles and ways of thinking, which can lead to inconsistent, complex and ambiguous code.
2. Unit Testing Activity – Each module is tested individually to check whether it meets the specified requirements and performs the functions it is intended to perform.
Risk Factors –
Lack of fully automated testing tools: Even today, unit testing is not fully automated. This makes the testing process boring and monotonous, and testers may not bother to generate all possible test cases.
Code not understandable by reviewers: During unit testing, developers need to review and make changes to the code. If the code is not understandable, it will be very difficult to update it.
Coding drivers and stubs: During unit testing, modules need data from, or need to pass data to, other modules, as no module is completely independent in itself. A stub is a piece of code that replaces a called module, one that accepts data from the module being tested. A driver is a piece of code that replaces a calling module, one that passes data to the module being tested (see the sketch after this list). Coding drivers and stubs consumes a lot of time and effort; since they are not delivered with the final system, they are considered extras.
Poor documentation of test cases: Test cases need to be documented properly so that they can be reused in the future.
Testing team not experienced: The testing team may not be experienced enough to handle the automated tools and to write short, concise code for drivers and stubs.
Poor regression testing: Regression testing means rerunning all successful test cases whenever a change is made. This saves time and effort in catching regressions, but it can be time-consuming if all test cases are selected for rerun.
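A minimal Python sketch (hypothetical names) of the driver and stub idea mentioned above, used to unit-test a billing function in isolation from its real collaborators:

# Stub: replaces the called module that would supply data (e.g. a
# database lookup), returning a canned response instead.
def fetch_rate_stub(customer_id: int) -> float:
    return 5.0

# Unit under test: in production it would call the real fetch_rate().
def compute_invoice(customer_id: int, units: int, fetch_rate=fetch_rate_stub) -> float:
    return units * fetch_rate(customer_id)

# Driver: replaces the calling module, feeding test inputs to the unit
# under test and checking its outputs.
def driver():
    assert compute_invoice(42, 10) == 50.0
    print("unit test passed")

driver()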
5. Integration and System Testing:
This phase includes three activities: integration, integration testing and system testing. We will discuss these activities in a bit of detail, along with the risk factors in each.
1. Integration Activity – In this phase individual units are combined into one working system.
Risk Factors –
Difficulty in combining components: Integration should be done incrementally, otherwise it will be very difficult to locate errors and bugs. A wrong sequence of integration will eventually hamper the functionality for which the system was designed.
Integrating wrong versions of components: Developing a system involves writing multiple versions of the same component. If an incorrect version of a component is selected for integration, the system may not produce the desired functionality.
Omissions: Integration of components should be done carefully. A single missed component may result in errors and bugs that are difficult to locate.
2. Integration Testing Activity – After integrating the components, the next step is to test whether the components interface correctly and to evaluate their integration. This process is known as integration testing.
Risk Factors –
Bugs during integration: If wrong versions of components are integrated, or components are accidentally omitted, the result is bugs and errors in the resulting system.
Data loss through interfaces: Wrong integration leads to data loss between components when the number of parameters in the calling component does not match the number of parameters in the called component.
Desired functionality not achieved: Errors and bugs introduced during integration result in a system that fails to deliver the desired functionality.
Difficulty in locating and repairing errors: If integration is not done incrementally, it results in errors and bugs that are hard to locate. Even when the bugs are located, they need to be fixed, and fixing an error in one component may introduce errors in other components. Thus it becomes quite cumbersome to locate and repair errors.
3. System Testing Activity – In this step the integrated system is tested to ensure that it meets all the system requirements gathered from the users.
Risk Factors –
Unqualified testing team: The lack of a good testing team is a major setback for good software, as testers may misuse the available resources and testing tools.
Limited testing resources: Time, budget and tools, if unavailable or not used properly, may delay project delivery.
Not possible to test in the real environment: Sometimes it is not possible to test the system in the real environment due to budget or time constraints.
Testing cannot cope with requirements change: User requirements often change during the entire software development life cycle, so test cases should be designed to handle such changes; if not designed properly, they will not be able to cope with change.
System being tested is not testable enough: If the requirements are not verifiable, it becomes quite difficult to test such a system.
6. Installation, Operation and Acceptance Testing:
This is the last and longest phase in the SDLC. In it the system is delivered, installed, deployed and tested for user acceptance.
Support from Risk Management Activities –
The system owner will want to ensure that the prescribed controls, including any physical or
procedural controls, are in place prior to the system going live. Decisions regarding risks identified
must be made prior to system operation.
This phase involves three activities: Installation, Operation, Acceptance Testing.
1. Installation Activity – The software system is delivered and installed at the customer site.
Risk Factors –
Problems in installation: If the deployers are not experienced enough, or if the system is complex and distributed, it becomes difficult to install the software system.
Change in environment: Sometimes the installed software system does not work correctly in the real environment, in some cases due to hardware advancement.
2. Operation Activity – Here end users are given training on how to use the software system and its services.
Risk Factors –
New requirements emerge: While using the system, users sometimes feel the need to add new requirements.
Difficulty in using the system: It is always difficult for people to accept a change, or in other words a new system, in the beginning. But this should not go on for long, otherwise it becomes a serious threat to the acceptability of the system.
3. Acceptance Testing Activity – The delivered system is put through acceptance testing to check whether it meets all user requirements.
Risk Factors –
User resistance to change: It is human behaviour to resist any new change in the surroundings, but for the success of a newly delivered system it is very important that the end users accept the system and start using it.
Too many software faults: Software faults should be discovered before the system operation phase, as discovery in later phases leads to a high cost of handling these faults.
Insufficient data handling: The new system should be developed keeping in mind the load of user data it will have to handle in the real environment.
Missing requirements: While using the system, the end users may discover that some requirements and capabilities are missing.
7. Maintenance:
In this stage, the system is assessed to ensure it does not become obsolete. This phase also involves continuous evaluation of the system in terms of performance, and changes are made from time to time to the initial software to keep it up to date.
Errors and faults discovered during acceptance testing are fixed in this phase. This step involves making improvements to the system, fixing errors, enhancing services and upgrading software.
A software project manager is the most important person in a team, who takes overall responsibility for managing the software project and plays an important role in its successful completion. A project manager has to face many difficult situations to accomplish this work. In fact, the job responsibilities of a project manager range from invisible activities like building up team morale to highly visible customer presentations. Most managers take responsibility for writing the project proposal, project cost estimation, scheduling, project staffing, software process tailoring, project monitoring and control, software configuration management, risk management, managerial report writing and presentation, and interfacing with clients. The tasks of a project manager are classified into two major types:
1. Project planning
2. Project monitoring and control
Project planning
Project planning is undertaken immediately after the feasibility study phase and before the requirement analysis and specification phase starts. Once a project has been found to be feasible, software project managers start project planning; it is completed before any development phase starts. Project planning involves estimating several characteristics of a project and then planning the project activities based on these estimations. Project planning must be done with great care and attention, since a wrong estimation can result in schedule slippage, and schedule delay can cause customer dissatisfaction, which may lead to project failure. For effective project planning, in addition to a very good knowledge of various estimation techniques, past experience is also very important. During project planning the project manager performs the following activities:
1. Project Estimation: Project Size Estimation is the most important parameter based on which
all other estimations like cost, duration and effort are made.
Cost Estimation: The total expenses to develop the software product are estimated.
Time Estimation: The total time required to complete the project is estimated.
Effort Estimation: The effort needed to complete the project is estimated.
The effectiveness of all later planning activities depends on the accuracy of these three estimations.
2. Scheduling: After the estimation of all the project parameters is complete, the scheduling of manpower and other resources is done.
3. Staffing: Team structure and staffing plans are made.
4. Risk Management: The project manager should identify the unanticipated risks that may occur during project development, analyse the damage these risks might cause, and prepare a risk reduction plan to cope with them.
5. Miscellaneous plans: This includes making several other plans such as quality assurance plan,
configuration management plan, etc.
The planning activities are undertaken in a defined order, each feeding into the next.
Role of a software project manager: There are many roles of a project manager in the
development of software.
Lead the team: The project manager must be a good leader who builds a team of members with various skills and enables them to complete their individual tasks.
Motivate the team members: One of the key roles of a software project manager is to encourage the team members to work properly for the successful completion of the project.
Tracking the progress: The project manager should keep an eye on the progress of the project and track whether it is proceeding as per plan. If any problem arises, necessary action must be taken to solve it. Moreover, the manager should check whether the product is being developed to the correct coding standards.
Liaison: The project manager is the link between the development team and the customer. The project manager analyses the customer requirements, conveys them to the development team, keeps the customer informed of the project's progress, and checks whether the project is fulfilling the customer's requirements.
Documenting project report: The project manager prepares the project documentation for future purposes. The reports contain detailed features of the product and the various techniques used. These reports help to maintain and enhance the quality of the project in the future.
Necessary skills of a software project manager: A good theoretical knowledge of various project management techniques is needed to become a successful project manager, but theoretical knowledge alone is not enough. A project manager must also have good decision-making abilities, good communication skills, and the ability to control the team members while keeping a good rapport with them and to get the work done by them. Skills such as tracking and controlling the progress of the project, customer interaction, good knowledge of estimation techniques and previous experience are also needed.
Skills that are the most important to become a successful project manager are given below:
Project Management Complexities refer to the various difficulties in managing a software project. They manifest in many different ways. The main goal of software project management is to enable a group of developers to work effectively towards the successful completion of a project in a given time. But software project management is a very difficult task, and in the past many projects have failed due to faulty project management practices. Management of software projects is much more complex than management of many other types of projects.
Types of Complexity:
Time Management Complexity: The complexity of estimating the duration of the project. It also includes the complexity of scheduling the different activities and completing the project on time.
Cost Management Complexity: Estimating the total cost of the project is a very difficult task, and keeping an eye on the budget so that the project does not overrun it is another.
Quality Management Complexity: The quality of the project must satisfy the customer's requirements, and it must be assured that those requirements are fulfilled.
Risk Management Complexity: Risks are the unanticipated things that may occur during any phase of the project. Various difficulties may occur in identifying these risks and making amendment plans to reduce their effects.
Human Resources Management Complexity: This includes all the difficulties regarding organizing, managing and leading the project team.
Communication Management Complexity: All the members must interact with all the other members, and there must be good communication with the customer.
Procurement Management Complexity: Projects need many services from third parties to complete the task; acquiring these services may increase the complexity of the project.
Integration Management Complexity: The difficulties of coordinating processes and developing a proper project plan. Many changes may occur during project development, and they may hamper project completion, which increases the complexity.
Main factors in Software project management complexity:
Invisibility: Until the development of a software project is complete, the software remains invisible, and anything that is invisible is difficult to manage and control. Because of this invisibility, the software project manager cannot view the progress of the project until it is completely developed; the manager can only monitor the modules that the development team has completed and the documents that have been prepared, which are rough indicators of the progress achieved. Thus invisibility adds a major problem to the complexity of managing a software project.
Reliability Growth Models
The reliability growth group of models measures and predicts the improvement of reliability through the testing process. A growth model represents the reliability or failure rate of a system as a function of time or of the number of test cases. Models included in this group are described below.
1. Coutinho Model –
Coutinho adapted the Duane growth model to represent the software testing process. He plotted the cumulative number of deficiencies discovered and the number of correction actions made versus the cumulative testing weeks on log-log paper.
2. Wall and Ferguson Model –
Wall and Ferguson proposed a model similar to the Weibull growth model for predicting the failure rate of software during testing.
Jelinski-Moranda Model
The Jelinski-Moranda (J-M) model is one of the earliest software reliability models. Many existing software reliability models are variants or extensions of this basic model.
Assumptions:
The assumptions of this model include the following:
1. The program contains N initial faults, where N is an unknown but fixed constant.
2. Each fault in the program is independent and equally likely to cause a failure during a test.
3. Time intervals between occurrences of failure are independent of each other.
4. Whenever a failure occurs, a corresponding fault is removed with certainty.
5. The fault that causes a failure is assumed to be instantaneously removed, and no new faults are inserted during the removal of the detected fault.
6. The software failure rate during a failure interval is constant and is proportional to the number of faults remaining in the program.
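Under these assumptions, the failure rate during the i-th failure interval is commonly written as phi * (N - i + 1), where phi is the contribution of each fault. A minimal sketch, treating N and phi as known (in practice they are estimated from failure data, e.g. by maximum likelihood):

def jm_failure_rate(N: int, phi: float, i: int) -> float:
    # Constant rate during the i-th interval: i-1 faults already removed.
    return phi * (N - i + 1)

# Example: 100 initial faults, phi = 0.003 failures per unit time per fault.
for i in (1, 50, 100):
    print(i, jm_failure_rate(100, 0.003, i))
# The rate falls linearly as faults are removed: 0.3, 0.153, 0.003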
Goel-Okumoto Model
The Goel-Okumoto model (also called the exponential NHPP model) is based on the following assumptions:
1. All faults in a program are mutually independent from the failure-detection point of view.
2. The number of failures detected at any time is proportional to the current number of faults in the program. This means that the probability of detecting the failures caused by the remaining faults is constant.
3. The isolated faults are removed prior to future test occasions.
4. Each time a software failure occurs, the software error which caused it is immediately
removed, and no new errors are introduced.
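For this model the expected cumulative number of failures by time t is commonly written m(t) = a(1 - e^(-bt)), with a the expected total number of faults and b the per-fault detection rate. A minimal sketch, treating a and b as known:

import math

def go_mean_failures(a: float, b: float, t: float) -> float:
    # Expected cumulative number of failures observed by time t.
    return a * (1.0 - math.exp(-b * t))

def go_failure_intensity(a: float, b: float, t: float) -> float:
    # Failure intensity: the derivative of the mean value function.
    return a * b * math.exp(-b * t)

print(round(go_mean_failures(120, 0.05, 30), 1))  # ~93.2 failures by t = 30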
Software Maintenance
Software Maintenance is the process of modifying a software product after it has been delivered to the customer. The main purpose of software maintenance is to modify and update the software application after delivery, in order to correct faults and to improve performance.