Unit-5
Software Maintenance refers to the process of modifying and updating a software system after it has been
delivered to the customer. It is a critical part of the software development life cycle (SDLC) and is necessary
to ensure that the software continues to meet the needs of the users over time.
What is Software Maintenance?
Software maintenance is a continuous process that occurs throughout the entire life cycle of the software
system.
• The goal of software maintenance is to keep the software system working correctly, efficiently,
and securely, and to ensure that it continues to meet the needs of the users.
• This can include fixing bugs, adding new features, improving performance, or updating the
software to work with new hardware or software systems.
• It is important to consider the cost and effort required for software maintenance when planning and developing a software system: maintenance can be costly and complex, especially for large systems, so it should be budgeted for during the planning and development phases of the project.
• It is important to have a well-defined maintenance process in place, which includes testing and validation, version control, and communication with stakeholders.
• It is also important to have a clear and well-defined maintenance plan that includes regular maintenance activities, such as testing, backup, and bug fixing.
Several Key Aspects of Software Maintenance
1. Bug Fixing: The process of finding and fixing errors and problems in the software.
2. Enhancements: The process of adding new features or improving existing features to meet the
evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency, and reliability of the
software.
4. Porting and Migration: The process of adapting the software to run on new hardware or software
platforms.
5. Re-Engineering: The process of improving the design and architecture of the software to make it
more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the documentation for the
software, including user manuals, technical specifications, and design documents.
Several Types of Software Maintenance
1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
2. Patching: An emergency fix, implemented mainly due to pressure from management. Patching is done as part of corrective maintenance, but it can give rise to unforeseen future errors due to the lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt it to changes in
the environment, such as changes in hardware or software, government policies, and business
rules.
4. Perfective Maintenance: This involves improving functionality, performance, and reliability, and
restructuring the software system to improve changeability.
5. Preventive Maintenance: This involves taking measures to prevent future problems, such as
optimization, updating documentation, reviewing and testing the system, and implementing
preventive measures such as backups.
Need for Maintenance
Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
• Meet changing user requirements.
• Improve performance (e.g., make the code run faster).
Challenges in Software Maintenance
The various challenges in software maintenance are given below:
• The typical lifetime of a software product is considered to be ten to fifteen years. Since software maintenance is open-ended and may continue for decades, it can become very expensive.
• Older software, which was designed to run on slow machines with limited memory and storage, cannot hold its own against newer, more capable software running on modern hardware.
• Changes are frequently left undocumented, which may cause more conflicts in the future.
• As technology advances, it becomes costly to maintain old software.
• Changes that are made can easily damage the original structure of the software, making any subsequent changes difficult.
• There is a lack of code comments.
• Lack of documentation: Poorly documented systems can make it difficult to understand how the
system works, making it difficult to identify and fix problems.
• Legacy code: Maintaining older systems with outdated technologies can be difficult, as it may
require specialized knowledge and skills.
• Complexity: Large and complex systems can be difficult to understand and modify, making it
difficult to identify and fix problems.
• Changing requirements: As user requirements change over time, the software system may need
to be modified to meet these new requirements, which can be difficult and time-consuming.
• Interoperability issues: Systems that need to work with other systems or software can be difficult
to maintain, as changes to one system can affect the other systems.
• Lack of test coverage: Systems that have not been thoroughly tested can be difficult to maintain
as it can be hard to identify and fix problems without knowing how the system behaves in
different scenarios.
• Lack of personnel: A lack of personnel with the necessary skills and knowledge to maintain the
system can make it difficult to keep the system up-to-date and running smoothly.
• High-Cost: The cost of maintenance can be high, especially for large and complex systems, which
can be difficult to budget for and manage.
Categories of Software Maintenance
Maintenance can be divided into the following categories.
• Corrective maintenance: Corrective maintenance of a software product may be essential either to
rectify some bugs observed while the system is in use, or to enhance the performance of the
system.
• Adaptive maintenance: This includes modifications and updates when the customers need the
product to run on new platforms, on new operating systems, or when they need the product to
interface with new hardware and software.
• Perfective maintenance: A software product needs maintenance to support the new features that
the users want or to change different types of functionalities of the system according to the
customer’s demands.
• Preventive maintenance: This type of maintenance includes modifications and updates to prevent future problems with the software. It aims to address problems that are not significant at the moment but may cause serious issues in the future.
Reverse Engineering
Reverse Engineering is the process of extracting knowledge or design information from anything man-
made and reproducing it based on the extracted information. It is also called back engineering. The main
objective of reverse engineering is to understand how the system works. There are many reasons to perform reverse engineering: to learn how something works, or to recreate the object with some enhancements.
Software Reverse Engineering
Software Reverse Engineering is the process of recovering the design and the requirements specification of
a product from an analysis of its code. Reverse engineering is becoming important, since many existing software products lack proper documentation, are highly unstructured, or their structure has degraded through a series of maintenance efforts.
Why Reverse Engineering?
• Providing proper system documentation.
• Recovery of lost information.
• Assisting with maintenance.
• Facilitating software reuse.
• Discovering unexpected flaws or faults.
• Implementing innovative processes for specific uses.
• Documenting how efficiency and power can be improved.
Uses of Software Reverse Engineering
• Software Reverse Engineering is used in software design, reverse engineering enables the
developer or programmer to add new features to the existing software with or without
knowing the source code.
• Reverse engineering is also useful in software testing: it helps testers study and detect viruses and other malware.
• Software reverse engineering is the process of analyzing and understanding the internal
structure and design of a software system. It is often used to improve the understanding of a
software system, to recover lost or inaccessible source code, and to analyze the behavior of a
system for security or compliance purposes.
• Malware analysis: Reverse engineering is used to understand how malware works and to
identify the vulnerabilities it exploits, in order to develop countermeasures.
• Legacy systems: Reverse engineering can be used to understand and maintain legacy systems
that are no longer supported by the original developer.
• Intellectual property protection: Reverse engineering can be used to detect and prevent
intellectual property theft by identifying and preventing the unauthorized use of code or other
assets.
• Security: Reverse engineering is used to identify security vulnerabilities in a system, such as
backdoors, weak encryption, and other weaknesses.
• Compliance: Reverse engineering is used to ensure that a system meets compliance standards,
such as those for accessibility, security, and privacy.
• Reverse-engineering of proprietary software: To understand how the software works, to improve it, or to create new software with similar features.
• Reverse-engineering of software to create a competing product: To create a product that
functions similarly or to identify the features that are missing in a product and create a new
product that incorporates those features.
• It’s important to note that reverse engineering can be a complex and time-consuming process,
and it is important to have the necessary skills, tools, and knowledge to perform it effectively.
Additionally, it is important to consider the legal and ethical implications of reverse
engineering, as it may be illegal or restricted in some jurisdictions.
Version Control Systems
In a centralized version control system, all users share a single central repository. This approach has some downsides, which led to the development of distributed version control systems (DVCS). The most obvious is the single point of failure that the centralized repository represents: if it goes down, collaboration and saving versioned changes are not possible during that period. And if the hard disk of the central database becomes corrupted and proper backups haven't been kept, you lose absolutely everything.
Distributed Version Control Systems: Distributed version control systems contain multiple repositories.
Each user has their own repository and working copy. Just committing your changes will not give others
access to your changes. This is because commit will reflect those changes in your local repository and you
need to push them in order to make them visible on the central repository. Similarly, when you update, you do not get others' changes unless you have first pulled those changes into your repository.
To make your changes visible to others, 4 things are required:
• You commit
• You push
• They pull
• They update
The most popular distributed version control systems are Git, and Mercurial. They help us overcome the
problem of single point of failure.
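As an illustration, a typical Git workflow for these four steps might look like the following (the remote and branch names are the common defaults, and the commit message is hypothetical):

git commit -m "Fix login bug"   # record your changes in your local repository
git push origin main            # publish them to the shared remote repository
git pull origin main            # a teammate fetches and merges your changes

In Git, pull performs both the "pull" and "update" steps: it fetches the remote changes and merges them into the local working copy.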
Purpose of Version Control:
• Multiple people can work simultaneously on a single project. Everyone works on and edits their
own copy of the files and it is up to them when they wish to share the changes made by them with
the rest of the team.
• It also enables one person to use multiple computers to work on a project, so it is valuable even if
you are working by yourself.
• It integrates the work that is done simultaneously by different members of the team. In some rare
cases, when conflicting edits are made by two people to the same line of a file, then human
assistance is requested by the version control system in deciding what should be done.
• Version control provides access to the historical versions of a project. This is insurance against
computer crashes or data loss. If any mistake is made, you can easily roll back to a previous
version. It is also possible to undo specific edits without losing the work done in the meantime. It can easily be known when, why, and by whom any part of a file was edited.
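For instance, this history and rollback facility in Git might be used as follows (the commit hash is hypothetical):

git log             # shows when, why, and by whom each change was made
git revert a1b2c3   # undoes one specific commit without losing later work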
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of Lines of Code, proposed by Barry Boehm. It classifies software projects into three modes:
1. Organic
A software project is said to be an Organic type if the team size required is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem. Eg: Small, simple applications such as payroll or inventory systems can be considered Organic.
2. Semi-detached
A software project is said to be a Semi-detached type if the vital characteristics such as team size,
experience, and knowledge of the various programming environments lie in between organic and
embedded. The projects classified as Semi-Detached are comparatively less familiar and difficult to
develop compared to the organic ones, and require more experience, better guidance, and creativity. Eg:
Compilers or different Embedded Systems can be considered Semi-Detached types.
3. Embedded
A software project requiring the highest level of complexity, creativity, and experience requirement falls
under this category. Such software requires a larger team size than the other two modes, and the developers need to be sufficiently experienced and creative to develop such complex systems.
Different models of COCOMO have been proposed to predict the cost estimation at different levels, based on the amount of accuracy and correctness required. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants to be used in subsequent calculations.
1. Basic Model
In the Basic Model, Boehm's formulas for organic, semi-detached, and embedded systems are:
Effort = a × (KLOC)^b Person-Months
Development Time = c × (Effort)^d Months
Mode            a      b      c      d
Organic         2.4    1.05   2.5    0.38
Semi-Detached   3.0    1.12   2.5    0.35
Embedded        3.6    1.20   2.5    0.32
1. The effort is measured in Person-Months and, as evident from the formula, depends on the Kilo-Lines of Code. The development time is measured in months. For example, a 4 KLOC organic project gives Effort = 2.4 × 4^1.05 ≈ 10.29 Person-Months and Development Time = 2.5 × 10.29^0.38 ≈ 6.06 Months.
2. These formulas are used as such in the Basic Model calculations; since factors such as reliability and expertise are not taken into account, the estimate is rough.
Below are the programs for Basic COCOMO:
// C++ program to implement basic COCOMO
#include <bits/stdc++.h>
using namespace std;

int main()
{
    // Constants a, b, c, d for the three modes
    float table[3][4] = { { 2.4, 1.05, 2.5, 0.38 },
                          { 3.0, 1.12, 2.5, 0.35 },
                          { 3.6, 1.20, 2.5, 0.32 } };
    char mode[][15] = { "Organic", "Semi-Detached", "Embedded" };

    int size = 4; // project size in KLOC
    int model;

    // organic
    if (size <= 50)
        model = 0;
    // semi-detached
    else if (size > 50 && size <= 300)
        model = 1;
    // embedded
    else
        model = 2;

    cout << "The mode is " << mode[model] << endl;

    // Effort = a * (KLOC)^b, in Person-Months
    float effort = table[model][0] * pow(size, table[model][1]);
    // Development Time = c * (Effort)^d, in Months
    float time = table[model][2] * pow(effort, table[model][3]);
    // Average staff = Effort / Development Time
    float staff = effort / time;

    cout << "Effort = " << effort << " Person-Month" << endl;
    cout << "Development Time = " << time << " Months" << endl;
    cout << "Average Staff Required = " << round(staff) << " Persons" << endl;

    return 0;
}
Output
The mode is Organic
Effort = 10.2894 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons
2. Intermediate Model
The Basic COCOMO model assumes that the effort is only a function of the number of lines of code and some constants evaluated according to the different software systems. However, in reality, no system's effort and schedule can be calculated solely on the basis of Lines of Code; various other factors, such as reliability, experience, and capability, must also be considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation (a sketch of the calculation follows the list below). Classification of Cost Drivers and their Attributes:
Product attributes:
1. Required software reliability extent
2. Size of the application database
3. The complexity of the product
Hardware attributes:
4. Run-time performance constraints
5. Memory constraints
6. The volatility of the virtual machine environment
7. Required turnaround time
Personnel attributes:
8. Analyst capability
9. Software engineering capability
10. Application experience
11. Virtual machine experience
12. Programming language experience
Project attributes:
13. Use of software tools
14. Application of software engineering methods
15. Required development schedule
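As referenced above, here is a minimal sketch of the Intermediate Model calculation: the multipliers assigned to the cost drivers are combined into an Effort Adjustment Factor (EAF), which scales the effort equation to Effort = a × (KLOC)^b × EAF. The coefficients use the commonly cited intermediate-model constants for organic projects (a = 3.2, b = 1.05), and the driver multiplier values are hypothetical placeholders rather than Boehm's published tables.

// Intermediate COCOMO sketch: effort scaled by an Effort
// Adjustment Factor (EAF); driver multipliers are hypothetical.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    double size = 4.0; // project size in KLOC (organic range)

    // Commonly cited intermediate-model coefficients, organic mode
    double a = 3.2, b = 1.05;

    // Hypothetical multipliers for a few of the 15 cost drivers
    // (1.0 = nominal; < 1.0 reduces effort; > 1.0 increases it)
    vector<double> drivers = { 1.15,   // high required reliability
                               0.91,   // high analyst capability
                               1.08 }; // tight development schedule

    // The EAF is the product of all the driver multipliers
    double eaf = 1.0;
    for (double d : drivers)
        eaf *= d;

    // Effort = a * (KLOC)^b * EAF, in Person-Months
    double effort = a * pow(size, b) * eaf;

    cout << "EAF = " << eaf << endl;
    cout << "Effort = " << effort << " Person-Months" << endl;
    return 0;
}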
CASE Studies and Examples
1. NASA Space Shuttle Software Development: NASA estimated the time and money needed to build
the software for the Space Shuttle program using the COCOMO model. NASA was able to make well-
informed decisions on resource allocation and project scheduling by taking into account variables
including project size, complexity, and team experience.
2. Big Business Software Development: The COCOMO model has been widely used by big
businesses to project the time and money needed to construct intricate business software systems.
These organizations were able to better plan and allocate resources for their software projects by using
COCOMO’s estimation methodology.
3. Commercial Software goods: The COCOMO methodology has proven advantageous for software
firms that create commercial goods as well. These businesses were able to decide on pricing, time-to-
market, and resource allocation by precisely calculating the time and expense of building new software
products or features.
4. Academic Research Initiatives: To estimate the time and expense required to create software
prototypes or carry out experimental studies, academic research initiatives have employed COCOMO.
Researchers were able to better plan their projects and allocate resources by using COCOMO’s
estimate approaches.
Advantages of the COCOMO Model
1. Systematic cost estimation: Provides a systematic way to estimate the cost and effort of a
software project.
2. Helps to estimate cost and effort: This can be used to estimate the cost and effort of a software
project at different stages of the development process.
3. Helps in high-impact factors: Helps in identifying the factors that have the greatest impact on the
cost and effort of a software project.
4. Helps to evaluate the feasibility of a project: This can be used to evaluate the feasibility of a
software project by estimating the cost and effort required to complete it.
Disadvantages of the COCOMO Model
1. Assumes project size as the main factor: Assumes that the size of the software is the main factor
that determines the cost and effort of a software project, which may not always be the case.
2. Does not count development team-specific characteristics: Does not take into account the
specific characteristics of the development team, which can have a significant impact on the cost and
effort of a software project.
3. Not enough precise cost and effort estimate: This does not provide a precise estimate of the cost
and effort of a software project, as it is based on assumptions and averages.
Resource Allocation
Assigning the available resources in an economical way is known as resource allocation. In project management, resource allocation is the planning of the activities and of the resources required by these activities, taking into consideration both resource availability and project time.
There are two parts to resource allocation: Strategic Planning and Resource Leveling. These are explained below.
1. Strategic planning –
In strategic planning, resource allocation is a plan for using available resources, for example human resources, especially in the near term, to achieve goals for the future. It is the process of allocating resources among various projects or business units. Strategic planning has two parts:
1. The basic allocation decision.
2. The contingency mechanism.
The basic allocation decision is the choice of which items to fund in the plan, what level of funding each should receive, and which items to leave unfunded: resources are allocated to some items and not to others. There may be a contingency mechanism, such as a priority ranking of items excluded from the plan, showing which items are to be sacrificed to reduce total funding.
2. Resource Leveling –
The main objective is to smooth resource requirements by shifting slack jobs beyond periods of peak requirement. Some of the methods essentially replicate what a human scheduler would do given enough time; others are procedures designed especially for the computer, which depend for their success on the speed and capabilities of electronic computers. A minimal sketch of the idea is given below.
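To illustrate, here is a minimal sketch of greedy resource leveling, assuming hypothetical jobs that each have a start time, a duration, a resource demand per time unit, and a slack (how far the job may be delayed). Each job with slack is shifted one time unit at a time while doing so lowers the peak resource requirement; real leveling procedures also respect precedence constraints, which are omitted here.

// Greedy resource-leveling sketch (all job data hypothetical)
#include <bits/stdc++.h>
using namespace std;

struct Job { int start, duration, demand, slack; };

// Resource usage per time unit for the current schedule
vector<int> profile(const vector<Job>& jobs, int horizon) {
    vector<int> u(horizon, 0);
    for (const Job& j : jobs)
        for (int t = j.start; t < j.start + j.duration; t++)
            u[t] += j.demand;
    return u;
}

int peak(const vector<int>& u) { return *max_element(u.begin(), u.end()); }

int main() {
    int horizon = 12;
    // {start, duration, demand, slack}
    vector<Job> jobs = { { 0, 3, 2, 0 },   // critical job, no slack
                         { 0, 2, 3, 5 },
                         { 1, 2, 2, 4 } };

    cout << "Peak requirement before leveling: "
         << peak(profile(jobs, horizon)) << endl;

    // Delay each slack job one unit at a time while it helps
    for (Job& j : jobs) {
        for (int s = 0; s < j.slack; s++) {
            int before = peak(profile(jobs, horizon));
            j.start++;                       // try shifting right
            if (peak(profile(jobs, horizon)) >= before) {
                j.start--;                   // no improvement; undo
                break;
            }
        }
    }

    cout << "Peak requirement after leveling: "
         << peak(profile(jobs, horizon)) << endl;
    return 0;
}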
Approaches to resource allocation:
There are a number of approaches to solving resource allocation problems:
1. Manual approach
2. Algorithmic approach
3. A combination of both
In the algorithmic approach, resources are allocated using a computer program defined for a specific domain, which automatically and dynamically distributes resources to the users. Electronic devices dedicated to routing and communication commonly use this method. For example, channel allocation in wireless communication may be decided by a base transceiver station using an appropriate algorithm.
What is Risk Management?
A risk is a probable problem; it might happen, or it might not. Risk has two main characteristics:
• Uncertainty: the risk may or may not happen, which means there are no 100% certain risks.
• Loss: if the risk occurs in reality, undesirable results or losses will occur.
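These two characteristics are often combined into a simple quantitative measure known as risk exposure: Risk Exposure = Probability of occurrence × Loss if the risk occurs. For example (with hypothetical numbers), a risk with a 30% chance of occurring that would cost 10 person-days of rework has an exposure of 0.3 × 10 = 3 person-days, which helps in prioritizing risks.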
One solution an organization may adopt is for the team to use collaborative tools and procedures, such as shared work boards or project management software, to make sure that each member of the team is aware of all tasks and responsibilities, including those of their teammates.
An organization must focus on providing resources to minimize the negative effects of possible events
and maximize positive results in order to reduce risk effectively. Organizations can more effectively
identify, assess, and mitigate major risks by implementing a consistent, systematic, and integrated
approach to risk management.
Risks are commonly classified into three categories:
1. Project Risks:
These risks relate to the project plan itself, such as budget, schedule, personnel, and resource problems that may cause the project to slip or costs to rise.
2. Technical Risks:
These risks relate to the quality and feasibility of the software to be built, such as potential design, implementation, interfacing, testing, and maintenance problems.
3. Business Risks:
This type of risk embodies the risks of building a superb product that nobody needs, losing monetary funds or personal commitments, etc.
Classification of Risk in a project
Example: Let us consider a satellite-based mobile communication project. The project manager can
identify many risks in this project. Let us classify them appropriately.
• What if the project cost escalates and overshoots what was estimated? – Project Risk
• What if the mobile phones that are developed become too bulky to carry conveniently? – Business Risk
• What if call hand-off between satellites becomes too difficult to implement? – Technical Risk
Risk management standards and frameworks
Risk management standards and frameworks give organizations guidelines on how to find, evaluate, and
handle risks effectively. They provide a structured way to manage risks, making sure that everyone
follows consistent and reliable practices. Here are some well-known risk management standards and
frameworks:
1. COSO ERM Framework:
The COSO ERM Framework was introduced in 2004 and updated in 2017. Its main purpose is to address the growing complexity of Enterprise Risk Management (ERM).
• Key Features:
• 20 principles grouped into five components: Governance and culture; Strategy and objective-setting; Performance; Review and revision; and Information, communication, and reporting.
• It promotes integrating risk into business strategies and operations.
2. ISO 31000:
ISO 31000 was introduced in 2009 and revised in 2018. It provides principles and a framework for ERM.
• Key Features:
• It offers guidance on applying risk management to operations.
• It focuses on identifying, evaluating, and mitigating risks.
• It promotes senior management's role and integrating risk management across the organization.
3. BS 31100:
This framework is the British Standard for risk management; its latest version was issued in 2021. It offers a structured approach to applying the principles outlined in ISO 31000:2018, covering tasks like identifying, evaluating, and addressing risks, followed by reporting and reviewing risk management efforts.
Benefits of risk management
Here are some benefits of risk management:
• Helps protect against potential losses.
• Improves decision-making by considering risks.
• Reduces unexpected expenses.
• Ensures adherence to laws and regulations.
• Builds resilience against unexpected challenges.
• Safeguards company reputation.