Software Engineering
2. It is a rapidly evolving field, and new tools and technologies are constantly
being developed to improve the software development process.
6. Software Engineering ensures that the software to be built is consistent and
correct, and that it is delivered on budget, on time, and within the stated
requirements.
Key Principles of Software Engineering
1. Modularity: Breaking the software into smaller, reusable components that can
be developed and tested independently.
2. Reliability: It assures that the product will deliver the same results when used
in a similar working environment.
3. Reusability: This attribute makes sure that a module can be used in multiple
applications.
5. Maintenance: Regularly updating and improving the software to fix bugs, add
new features, and address security vulnerabilities.
6. Testing: Verifying that the software meets its requirements and is free of bugs.
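As a minimal sketch of the modularity and reusability principles above, a small self-contained module (the module name and functions here are hypothetical) can be developed, tested, and reused independently:

```python
# pricing.py -- a hypothetical, self-contained module.
# Because it depends on nothing else, it can be developed and
# tested independently and reused across applications.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def total(prices: list) -> float:
    """Return the rounded sum of a list of prices."""
    return round(sum(prices), 2)

print(apply_discount(100.0, 10))  # 90.0
```

Any application that needs price handling can import this module without pulling in the rest of the system.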
There is a dual role of software in the industry. The first one is as a product and the
other one is as a vehicle for delivering the product. We will discuss both of them.
1. As a Product
2. Efficiency: The software should not make wasteful use of computing devices
such as memory, processor cycles, etc.
5. Testability: Here software facilitates both the establishment of test criteria and
the evaluation of the software concerning those criteria.
7. Portability: In this case, the software can be transferred from one computer
system or environment to another.
8. Adaptability: In this case, the software allows differing system constraints and
user needs to be satisfied by making changes to the software.
While Software Engineering offers many advantages, there are also some potential
disadvantages to consider:
1. High upfront costs: Implementing a systematic and disciplined approach to
software development can be resource-intensive and require a significant
investment in tools and training.
5. Limited creativity: The focus on structure and process can stifle creativity and
innovation among developers.
6. High learning curve: The development process can be complex, and it
requires a lot of learning and training, which can be challenging for new
developers.
Functionality:
It refers to the degree of performance of the software against its intended purpose.
Functionality refers to the set of features and capabilities that a software program or
system provides to its users. It is one of the most important characteristics of
software, as it determines the usefulness of the software for the intended purpose.
Examples of functionality in software include data processing, user management,
and report generation.
Reliability:
A set of attributes that bears on the capability of software to maintain its level of
performance under given conditions for a stated period of time.
Efficiency:
It refers to the ability of the software to use system resources in the most effective
and efficient manner. The software should make effective use of storage space and
execute commands as per the desired timing requirements.
Efficiency is a characteristic of software that refers to its ability to use resources such
as memory, processing power, and network bandwidth in an optimal way. High
efficiency means that a software program can perform its intended functions quickly
and with minimal use of resources, while low efficiency means that a software
program may be slow or consume excessive resources.
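Efficiency can be observed directly by timing alternative implementations. The sketch below (illustrative only) compares two ways of building a string in Python, one of which typically performs more allocation work:

```python
# Timing two implementations of the same task to compare efficiency.
import timeit

def concat_naive(n):
    s = ""
    for i in range(n):
        s += str(i)          # repeated string reallocation
    return s

def concat_join(n):
    # builds the result in a single join pass
    return "".join(str(i) for i in range(n))

naive = timeit.timeit(lambda: concat_naive(1000), number=200)
joined = timeit.timeit(lambda: concat_join(1000), number=200)
print(f"naive: {naive:.3f}s  join: {joined:.3f}s")
```

Both functions compute the same result; the timing figures show how resource usage can differ between functionally equivalent programs.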
Examples of factors that can affect the efficiency of the software include memory
management, choice of algorithms, and network usage.
Usability:
It refers to the extent to which the software can be used with ease, and the amount
of effort or time required to learn how to use it.
Maintainability:
It refers to the ease with which modifications can be made in a software system to
extend its functionality, improve its performance, or correct errors.
Portability:
A set of attributes that bears on the ability of software to be transferred from one
environment to another with minimal changes.
Characteristics of “Software” in Software Engineering
1. Software is developed or engineered; it is not manufactured:
In both activities, high quality is achieved through good design, but the
manufacturing phase for hardware can introduce more quality problems than
software.
2. The software doesn’t “wear out.”:
A software part should be planned and carried out with the goal that it
tends to be reused in various projects.
Stage-1: Planning and Requirement Analysis
The information from this analysis forms the building blocks of a basic project. The
quality of the project is a result of planning. Thus, in this stage, the basic project is
designed with all the available information.
Stage-2: Defining Requirements
In this stage, all the requirements for the target software are specified. These
requirements get approval from customers, market analysts, and stakeholders.
This is fulfilled by utilizing SRS (Software Requirement Specification). This is a sort
of document that specifies all those things that need to be defined and created
during the entire project cycle.
Stage-3: Design
In this stage, a Design Document Specification (DDS) is prepared based on the
requirements. This DDS is assessed by market analysts and stakeholders. After
evaluating all the possible factors, the most practical and logical design is chosen
for development.
Stage-4: Developing Product
At this stage, the fundamental development of the product starts. For this,
developers use a specific programming language as per the design in the DDS.
Hence, it is important for the coders to follow the protocols set by the organization.
Conventional programming tools like compilers, interpreters, debuggers, etc. are
also put into use at this stage. Some popular languages like C/C++, Python, Java,
etc. are put into use as per the software regulations.
Stage-5: Product Testing and Integration
After the development of the product, testing of the software is necessary to ensure
its smooth execution. Although minimal testing is conducted at every stage of the
SDLC, at this stage all the probable flaws are tracked, fixed, and retested. This
ensures that the product meets the quality requirements of the SRS.
Documentation, Training, and Support: Software documentation is an essential
part of the software development life cycle. A well-written document acts as a tool
and means to information repository necessary to know about software processes,
functions, and maintenance. Documentation also provides information about how to
use the product. Training is an attempt to improve current or future employee
performance by increasing an employee's ability to work through learning, usually
by changing attitudes and developing skills and understanding.
Stage-6: Deployment and Maintenance of Products
After detailed testing, the conclusive product is released in phases as per the
organization’s strategy. Then it is tested in a real industrial environment. It is
important to ensure its smooth performance. If it performs well, the organization
sends out the product as a whole. After retrieving beneficial feedback, the company
releases it as it is or with auxiliary improvements to make it further helpful for the
customers. However, this alone is not enough; along with the deployment,
supervision and maintenance of the product must continue.
Unit – II Software requirement specification:
Software is the set of instructions, in the form of programs, that governs the
computer system and processes the hardware components. To produce a software
product, a set of activities is used; this set is called a software process.
What are Software Processes?
Each process has its own set of advantages and disadvantages, and the choice of
which one to use depends on the specific project and organization.
Components of Software
3. User interface: the means by which the user interacts with the software, such
as buttons, menus, and text fields.
4. Libraries: pre-written code that can be reused by the software to perform
common tasks.
5. Documentation: information that explains how to use and maintain the
software, such as user manuals and technical guides.
6. Test cases: a set of inputs, execution conditions, and expected outputs that
are used to test the software for correctness and reliability.
7. Configuration files: files that contain settings and parameters that are used
to configure the software to run in a specific environment.
8. Build and deployment scripts: scripts or tools that are used to build,
package, and deploy the software to different environments.
9. Metadata: information about the software, such as version numbers, authors,
and copyright information.
All these components are important for software development, testing and
deployment.
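Two of the components listed above, configuration files and test cases, can be made concrete with a short sketch (the file contents and setting names are hypothetical):

```python
# Sketch: a configuration file tailors the software to an environment,
# and a test case checks an input against an expected output.
import json
import os
import tempfile

config_text = '{"host": "localhost", "port": 8080, "debug": false}'

# Write the configuration file, then read it back as the software would.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(config_text)
    path = f.name

with open(path) as f:
    config = json.load(f)
os.unlink(path)

# A minimal test case: given this input, expect this output.
assert config["port"] == 8080
print(config["host"])  # localhost
```

Changing only the file (for example, a different port per environment) reconfigures the software without touching its code.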
Key Process Activities
Advantages of the Waterfall Model
1. Clarity and Simplicity: The linear form of the Waterfall Model offers a simple
and unambiguous foundation for project development.
2. Clearly Defined Phases: The Waterfall Model’s phases each have unique
inputs and outputs, guaranteeing a planned development with obvious
checkpoints.
3. Documentation: A focus on thorough documentation helps with software
comprehension, upkeep, and future growth.
4. Stability in Requirements: Suitable for projects where the requirements are
clear and steady, reducing modifications as the project progresses.
5. Resource Optimization: It encourages effective task-focused work without
continuously changing contexts by allocating resources according to project
phases.
6. Relevance for Small Projects: Economical for modest projects with simple
specifications and minimal complexity.
Phases of the SDLC Waterfall Model
The classical waterfall model divides the life cycle into a set of phases. This
model considers that one phase can be started after the completion of the
previous phase. That is the output of one phase will be the input to the next
phase. Thus the development process can be considered as a sequential flow
in the waterfall. Here the phases do not overlap with each other. The different
sequential phases of the classical waterfall model are shown in the below
figure.
Let us now learn about each of these phases in detail which include further phases.
3. Design:
The goal of this phase is to convert the requirements acquired in the SRS into
a format that can be coded in a programming language. It includes high-level
and detailed design as well as the overall software architecture. A Software
Design Document is used to document all of this effort (SDD).
4. Coding and Unit Testing:
In the coding phase software design is translated into source code using any
suitable programming language. Thus each designed module is coded. The
unit testing phase aims to check whether each module is working properly or
not.
6. Maintenance:
Maintenance is the most important phase of the software life cycle. The effort
spent on maintenance is around 60% of the total effort spent to develop the full
software. There are three types of maintenance.
Corrective Maintenance: This type of maintenance is carried out to correct
errors that were not discovered during the product development phase.
Perfective Maintenance: This type of maintenance is carried out to enhance
the functionalities of the system based on the customer’s request.
Adaptive Maintenance: Adaptive maintenance is usually required for porting
the software to work in a new environment such as working on a new
computer platform or with a new operating system.
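The coding and unit testing phase described earlier can be sketched with Python's built-in unittest module; the module under test (`word_count`) is a hypothetical example:

```python
# Each designed module is coded, then checked in isolation by unit tests.
import unittest

def word_count(text: str) -> int:
    """Module under test: count whitespace-separated words."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(word_count("each module is tested in isolation"), 6)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)
```

Running `python -m unittest` in the module's directory executes both checks, verifying the module before it is integrated with the rest of the system.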
The Classical Waterfall Model suffers from various shortcomings, so we cannot use
it directly in real projects; instead, we use other software development lifecycle
models that are based on it. Below are some major drawbacks of this model.
No Feedback Path: In the classical waterfall model evolution of software from
one phase to another phase is like a waterfall. It assumes that no error is ever
committed by developers during any phase. Therefore, it does not incorporate
any mechanism for error correction.
Difficult to accommodate Change Requests: This model assumes that all
the customer requirements can be completely and correctly defined at the
beginning of the project, but the customer’s requirements keep on changing
with time. It is difficult to accommodate any change requests after the
requirements specification phase is complete.
No Overlapping of Phases: This model recommends that a new phase can
start only after the completion of the previous phase. But in real projects, this
can’t be maintained. To increase efficiency and reduce cost, phases may
overlap.
Limited Flexibility: The Waterfall Model is a rigid and linear approach to
software development, which means that it is not well-suited for projects with
changing or uncertain requirements. Once a phase has been completed, it is
difficult to make changes or go back to a previous phase.
Limited Stakeholder Involvement: The Waterfall Model is a structured and
sequential approach, which means that stakeholders are typically involved in
the early phases of the project (requirements gathering and analysis) but may
not be involved in the later phases (implementation, testing, and deployment).
Late Defect Detection: In the Waterfall Model, testing is typically done
toward the end of the development process. This means that defects may not
be discovered until late in the development process, which can be expensive
and time-consuming to fix.
Lengthy Development Cycle: The Waterfall Model can result in a lengthy
development cycle, as each phase must be completed before moving on to
the next. This can result in delays and increased costs if requirements change
or new issues arise.
When to Use the SDLC Waterfall Model?
Here are some cases where the use of the Waterfall Model is best suited:
Well-understood Requirements: Before beginning development, there are
precise, reliable, and thoroughly documented requirements available.
Very Little Changes Expected: During development, very little adjustments
or expansions to the project’s scope are anticipated.
Small to Medium-Sized Projects: Ideal for more manageable projects with a
clear development path and little complexity.
Predictable: Projects that are predictable, low-risk, and able to be addressed
early in the development life cycle are those that have known, controllable
risks.
Regulatory Compliance is Critical: Circumstances in which paperwork is of
utmost importance and stringent regulatory compliance is required.
Client Prefers a Linear and Sequential Approach: This situation describes
the client’s preference for a linear and sequential approach to project
development.
Limited Resources: Projects with limited resources can benefit from a set-up
strategy, which enables targeted resource allocation.
The Waterfall approach involves little client engagement in the product development
process. The product can only be shown to end consumers when it is ready.
Incremental Process Model – Software Engineering
First, a simple working system implementing only a few basic features is built and
then that is delivered to the customer. Then thereafter many successive iterations/
versions are implemented and delivered to the customer until the desired system is
released.
A, B, and C are modules of Software Products that are incrementally developed and
delivered.
Phases of incremental model
Requirements of Software are first broken down into several modules that can be
incrementally constructed and delivered.
Different subsystems are developed at the same time. It can decrease the calendar
time needed for the development, i.e. TTM (Time to Market) if enough resources are
available.
Parallel Development Model
When to use the Incremental Process Model
1. Error reduction: core modules are used by the customer from the beginning of
the phase and are then tested thoroughly.
2. Uses divide and conquer for a breakdown of tasks.
3. To create a final complete system, partial systems are constructed one after
the other.
4. Priority requirements are addressed first.
5. The requirements for an increment are frozen once they are created.
Disadvantages of the Incremental Process Model
1. Issues may arise from the system design if all needs are not gathered upfront
throughout the program lifecycle.
2. Every iteration step is distinct and does not flow into the next.
3. It takes a lot of time and effort to fix an issue in one unit if it needs to be
corrected in all the units.
The Spiral Model is a Software Development Life Cycle (SDLC) model that
provides a systematic and iterative approach to software development. In its
diagrammatic representation, it looks like a spiral with many loops. The exact number
of loops of the spiral is unknown and can vary from project to project. Each loop of
the spiral is called a phase of the software development process.
3. It is based on the idea of a spiral, with each iteration of the spiral representing
a complete software development cycle, from requirements gathering and
analysis to design, implementation, testing, and maintenance.
What Are the Phases of the Spiral Model?
The Spiral Model is a risk-driven model, meaning that the focus is on managing risk
through multiple iterations of the software development process. It consists of the
following phases:
1. Objectives Defined: In the first phase of the spiral model, we clarify what the
project aims to achieve, including functional and non-functional requirements.
2. Risk Analysis: In the risk analysis phase, the risks associated with the
project are identified and evaluated.
3. Engineering: In the engineering phase, the software is developed based on
the requirements gathered in the previous iteration.
4. Evaluation: In the evaluation phase, the software is evaluated to determine if
it meets the customer’s requirements and if it is of high quality.
5. Planning: The next iteration of the spiral begins with a new planning phase,
based on the results of the evaluation.
The Spiral Model is often used for complex and large software development projects,
as it allows for a more flexible and adaptable approach to software development. It is
also well-suited to projects with significant uncertainty or high levels of risk.
The Radius of the spiral at any point represents the expenses (cost) of the project so
far, and the angular dimension represents the progress made so far in the current
phase.
Each phase of the Spiral Model is divided into four quadrants as shown in the
above figure. The functions of these four quadrants are discussed below:
A risk is any adverse situation that might affect the successful completion of a
software project. The most important feature of the spiral model is handling these
unknown risks after the project has started. Such risk resolutions are easier done by
developing a prototype.
1. The spiral model supports coping with risks by providing the scope to build a
prototype at every phase of software development.
2. The Prototyping Model also supports risk handling, but the risks must be
identified completely before the start of the development work of the project.
3. But in real life, project risk may occur after the development work starts, in
that case, we cannot use the Prototyping Model.
4. In each phase of the Spiral Model, the features of the product are elaborated
and analyzed, and the risks at that point in time are identified and resolved
through prototyping.
5. Thus, this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model?
The Spiral model is called a Meta-Model because it subsumes all the other SDLC
models. For example, a single loop spiral actually represents the Iterative Waterfall
Model.
1. The spiral model incorporates the stepwise approach of the Classical
Waterfall Model.
2. The spiral model uses the approach of the Prototyping Model by building a
prototype at the start of each phase as a risk-handling technique.
3. Also, the spiral model can be considered as supporting the Evolutionary
model – the iterations along the spiral can be considered as evolutionary
levels through which the complete system is built.
Advantages of the Spiral Model
The most serious issue we face in the waterfall model is that it takes a long time to
finish the product, by which time the software may have become obsolete. To tackle
this issue, we have another methodology known as the Spiral Model, which is also
called the cyclic model.
When To Use the Spiral Model?
5. The spiral approach is beneficial for projects with moderate to high risk.
6. The SDLC’s spiral model is helpful when requirements are complicated and
ambiguous.
The Spiral Model is a valuable choice for software development projects where risk
management is a high priority. It delivers high-quality software by promoting risk
identification, iterative development, and continuous client feedback. When a project
is vast in software engineering, a spiral model is utilized.
Prototyping Model – Software Engineering
Prototyping Model-Concept
In this process model, the system is partially implemented before or during the
analysis phase thereby allowing the customers to see the product early in the life
cycle. The process starts by interviewing the customers and developing the
incomplete high-level paper model. This document is used to build the initial
prototype supporting only the basic functionality as desired by the customer. Once
the customer figures out the problems, the prototype is further refined to eliminate
them. The process continues until the user approves the prototype and finds the
working model to be satisfactory.
Steps of Prototyping Model
Step 1: Requirement Gathering and Analysis: This is the initial step in designing a
prototype model. In this phase, users are asked about what they expect or what they
want from the system.
Step 2: Quick Design: This is the second step in the Prototyping Model. This model
covers the basic design of the requirement through which a quick overview can be
easily described.
Step 3: Build a Prototype: This step helps in building an actual prototype from the
knowledge gained from prototype design.
Step 4: Initial User Evaluation: This step describes the preliminary testing where
the investigation of the performance model occurs, as the customer will tell the
strengths and weaknesses of the design, which was sent to the developer.
Step 5: Refining Prototype: If the user gives any feedback, the prototype is refined
according to the client's responses and suggestions until the final system is approved.
Step 6: Implement Product and Maintain: This is the final step in the phase of the
Prototyping Model where the final system is tested and distributed to production,
here the program is run regularly to prevent failures.
There are four types of Prototyping Models, which are described below.
1. Rapid Throwaway Prototyping
This technique offers a useful method of exploring ideas and getting customer
feedback for each of them.
Customer feedback helps prevent unnecessary design faults and hence, the
final prototype developed is of better quality.
2. Evolutionary Prototyping
This is because developing a prototype from scratch for every iteration of the
process can sometimes be very frustrating for the developers.
3. Incremental Prototyping
In the end, when all individual pieces are properly developed, then the
different prototypes are collectively merged into a single final product in their
predefined order.
It’s a very efficient approach that reduces the complexity of the development
process, where the goal is divided into sub-parts and each sub-part is
developed individually.
The time interval between the project’s beginning and final delivery is
substantially reduced because all parts of the system are prototyped and
tested simultaneously.
Of course, there might be the possibility that the pieces just do not fit together
due to some incompatibility introduced during the development phase; this can only
be fixed by careful and complete planning of the entire system before prototyping
starts.
4. Extreme Prototyping
This method is mainly used for web development. It consists of three sequential
independent phases:
In this phase, a basic prototype with all the existing static pages is presented
in HTML format.
In the 2nd phase, Functional screens are made with a simulated data process
using a prototype services layer.
This is the final step where all the services are implemented and associated
with the final prototype.
This Extreme Prototyping method makes the project cycle and delivery robust and
fast, and keeps the entire developer team focused on product deliveries rather than
on discovering all possible needs and specifications and adding necessitated
features.
Advantages of Prototyping Model
The customers get to see the partial product early in the life cycle. This
ensures a greater level of customer satisfaction and comfort.
Errors can be detected much earlier thereby saving a lot of effort and cost,
besides enhancing the quality of the software.
Flexibility in design.
Early feedback from customers and stakeholders can help guide the
development process and ensure that the final product meets their needs and
expectations.
Prototyping can be used to test and validate design decisions, allowing for
adjustments to be made before significant resources are invested in
development.
Prototyping can help reduce the risk of project failure by identifying potential
issues and addressing them early in the process.
Prototyping can help bridge the gap between technical and non-technical
stakeholders by providing a tangible representation of the product.
Disadvantages of the Prototyping Model
Costly in terms of time as well as money.
There may be too much variation in requirements each time the prototype is
evaluated by the customer.
Poor Documentation due to continuously changing customer requirements.
The customer might lose interest in the product if he/she is not satisfied with
the initial prototype.
The prototype may not be scalable to meet the future needs of the customer.
The prototype may not accurately represent the final product due to limited
functionality or incomplete features.
The focus on prototype development may shift away from the final product,
leading to delays in the development process.
The prototype may give a false sense of completion, leading to the premature
release of the product.
The prototype may not consider technical feasibility and scalability issues that
can arise during the final product development.
The prototype may not reflect the actual business requirements of the
customer, leading to dissatisfaction with the final product.
Applications of Prototyping Model
The Prototyping Model should be used when the requirements of the product
are not clearly understood or are unstable.
The prototyping model can also be used if requirements are changing quickly.
This model can be successfully used for developing user interfaces, high-
technology software-intensive systems, and systems with complex algorithms
and interfaces.
The prototyping Model is also a very good choice to demonstrate the technical
feasibility of the product.
Spiral Model vs Prototype Model
Aspect: Also Known As
Prototype Model: It is also referred to as a rapid or closed-ended prototype.
Spiral Model: It is also referred to as a meta model.

Aspect: Phases
Prototype Model: 1. Requirements, 2. Quick Design, 3. Build Prototype,
4. User Evaluation, 5. Refining Prototype, 6. Implement and Maintain.
Spiral Model: 1. Planning Phase, 2. Risk Analysis Phase, 3. Engineering Phase,
4. Evaluation Phase.

Aspect: Cost-Effective
Prototype Model: Cost-effective quality improvement is very much possible.
Spiral Model: Cost-effective quality improvement is not possible.

Aspect: When to Use
Prototype Model: When end users need to have high interaction, as in online
platforms and web interfaces, and whenever end-user input in terms of feedback
on the system is required.
Spiral Model: When continuous risk analysis is required for the software, in large
projects, if significant changes are required by the software, and in complex
project requirements.

Aspect: Advantages
Prototype Model: Fast development; end users are highly involved in the whole
development process; errors and complexities get easily identified; useful in
rapidly changing requirements.
Spiral Model: Development of all phases is carried out in a controlled manner;
customer feedback is taken into account for improvement.
All four factors: People, Product, Process, and Project are important for the success
of a project. Their relative importance helps us organize development activities in a
more scientific and professional way.
4. Interpretation: The evaluation of metrics results in insight into the quality of the
representation.
Software Metrics
1. Planning
2. Organizing
3. Controlling
4. Improving
1. Product Metrics: Product metrics are used to evaluate the state of the product,
tracking risks and uncovering prospective problem areas. The ability of the team
to control quality is evaluated. Examples include lines of code, cyclomatic
complexity, code coverage, defect density, and code maintainability index.
Productivity
Metrics help to determine the complexity of the code and to test the code with the
available resources. Sometimes, however, the quality of the product does not meet
expectations.
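Product metrics such as defect density and code coverage are simple ratios; the sketch below uses made-up illustrative figures:

```python
# Sketch: computing two of the product metrics mentioned above.
# The input figures are made-up illustrative values.

def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def code_coverage(lines_executed: int, lines_total: int) -> float:
    """Percentage of lines exercised by the test suite."""
    return 100.0 * lines_executed / lines_total

print(defect_density(30, 12.0))   # 2.5 defects per KLOC
print(code_coverage(850, 1000))   # 85.0 percent
```

Tracked over successive releases, ratios like these let a team see whether quality is improving or degrading.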
Program Analysis Tool is an automated tool whose input is the source code or the
executable code of a program and the output is the observation of characteristics of
the program. It gives various characteristics of the program such as its size,
complexity, adequacy of commenting, adherence to programming standards and
many other characteristics. These tools are essential to software engineering
because they help programmers comprehend, improve and maintain software
systems over the course of the whole development life cycle.
Importance of Program Analysis Tools
1. Finding Faults and Security Vulnerabilities in the Code: Automatic
program analysis tools can find and highlight possible faults, security
flaws, and bugs in the code. This lowers the possibility that bugs will get into
production by assisting developers in identifying problems early in the
process.
2. Memory Leak Detection: Certain tools are designed specifically to find
memory leaks and inefficiencies. By doing so, developers may make sure that
their software doesn’t gradually use up too much memory.
3. Vulnerability Detection: Potential vulnerabilities like buffer overflows,
injection attacks, or other security flaws can be found using program
analysis tools, particularly those that are security-focused. For the
development of reliable and secure software, this is essential.
4. Dependency analysis: By examining the dependencies among various
system components, tools can assist developers in comprehending and
controlling the connections between modules. This is necessary in order to
make well-informed decisions during refactoring.
5. Automated Testing Support: To automate testing procedures, CI/CD
pipelines frequently incorporate program analysis tools. This integration
helps identify problems early in the development cycle and ensures that only
well-tested, high-quality code is released into production.
Classification of Program Analysis Tools
1. Static Program Analysis Tools
A Static Program Analysis Tool is a program analysis tool that evaluates and
computes various characteristics of a software product without executing it.
Normally, static program analysis tools analyze some structural representation of a
program to reach a certain analytical conclusion. The structural properties that are
usually analyzed include control flow, data flow, and adherence to coding standards.
Code walkthroughs and code inspections are also considered static analysis
methods, but the term static program analysis tool is used to designate automated
analysis tools. Hence, a compiler can be considered a static program analysis tool.
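A toy static program analysis tool can be sketched with Python's ast module: it computes a crude size characteristic of each function without ever executing the source (the sample functions are hypothetical):

```python
# A toy static analysis tool: inspect source code without running it,
# reporting each function's name and number of top-level statements.
import ast

source = """
def greet(name):
    message = "Hello, " + name
    return message

def unused():
    pass
"""

tree = ast.parse(source)
sizes = {node.name: len(node.body)
         for node in ast.walk(tree)
         if isinstance(node, ast.FunctionDef)}
print(sizes)  # {'greet': 2, 'unused': 1}
```

Real static analyzers apply the same idea at scale, walking the program's structural representation to measure size, complexity, and standards compliance.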
2. Dynamic Program Analysis Tools
A Dynamic Program Analysis Tool is a type of program analysis tool that requires
the program to be executed and its actual behavior to be observed. A dynamic
program analyzer basically instruments the code: it adds additional statements to
the source code to collect traces of program execution. When the code is executed,
it allows
us to observe the behavior of the software for different test cases. Once the software
is tested and its behavior is observed, the dynamic program analysis tool performs a
post execution analysis and produces reports which describe the structural coverage
that has been achieved by the complete testing process for the program.
For example, the post-execution dynamic analysis report may provide data on the
extent of statement, branch, and path coverage achieved. The results of dynamic
program analysis tools are often presented as a histogram or a pie chart describing
the structural coverage obtained for different modules of the program. The output of
a dynamic program analysis tool can be stored and printed easily and provides
evidence of how thoroughly the program has been tested. The result of dynamic
analysis is the extent of testing performed as white box testing. If the testing result is
not satisfactory, more test cases are designed and added to the test scenario.
Dynamic analysis also helps in the elimination of redundant test cases.
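To make the idea concrete, here is a minimal dynamic-analysis sketch in Python: it uses the standard sys.settrace hook to record which lines of a function actually execute for a given test input, the same principle a coverage tool applies in its post-execution report. The function names are invented for the example.

```python
import sys

def trace_lines(func, *args):
    """Run func and record which of its lines execute (dynamic analysis)."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # store line offsets relative to the function definition
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)          # always remove the trace hook
    return result, executed

def absolute(x):       # offset 0
    if x < 0:          # offset 1
        return -x      # offset 2
    return x           # offset 3

result, covered = trace_lines(absolute, 5)
# The test input 5 never reaches "return -x", revealing an untested branch.
print(result, 2 in covered)  # 5 False
```

Unlike the static sketch earlier, this analysis needs the program to run: the uncovered "return -x" line is exactly the kind of gap a post-execution coverage report would flag, prompting an additional test case with a negative input.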
Software Requirement Specification (SRS) Format
To form a good SRS, this section presents some points that can be used and should
be considered when structuring a Software Requirements Specification (SRS).
These are mentioned below in the table of contents and are explained in the
sections that follow.
Software Requirement Specification (SRS) Format as the name suggests, is a
complete specification and description of requirements of the software that need to
be fulfilled for the successful development of the software system. These
requirements can be functional as well as non-functional depending upon the type of
requirement. Interaction between the different customers and contractors is
necessary because it is essential to fully understand the needs of customers.
Functional Requirements
In this, the possible outcomes of the software system, including the effects of
program operation, are fully explained. All functional requirements, which may
include calculations, data processing, etc., are placed in a ranked order. Functional
requirements specify the expected behavior of the system, that is, which outputs
should be produced from the given inputs. They describe the relationship between
the input and output of the system. For each functional requirement, a detailed
description of all the data inputs and their sources, the units of measure, and the
range of valid inputs must be specified.
Interface Requirements
In this, the software interfaces, which describe how the software communicates with
other programs or with users, whether in the form of a language, code, or
messages, are fully described and explained. Examples can be shared memory,
data streams, etc.
Performance Requirements
In this, how the software system performs the desired functions under specific
conditions is explained. It also covers the required time, required memory,
maximum error rate, etc.
The performance requirements part of an SRS specifies the performance constraints
on the software system. All the requirements relating to the performance
characteristics of the system must be clearly specified. There are two types of
performance requirements: static and dynamic. Static requirements are those that
do not impose constraints on the execution characteristics of the system. Dynamic
requirements specify constraints on the execution behaviour of the system.
Design Constraints
In this, constraints, which simply means limitations or restrictions, are specified and
explained for the design team. Examples may include the use of a particular
algorithm, hardware and software limitations, etc. There are a number of factors in
the client's environment that may restrict the choices of a designer, leading to
design constraints. Such factors include standards that must be followed, resource
limits, the operating environment, reliability and security requirements, and policies
that may have an impact on the design of the system. An SRS should identify and
specify all such constraints.
Non-Functional Attributes
In this, the non-functional attributes required by the software system for better
performance are explained. Examples may include Security, Portability, Reliability,
Reusability, Application compatibility, Data integrity, Scalability, Capacity, etc.
Preliminary Schedule and Budget
In this, the initial version of the project plan and its budget are explained, including
the overall time duration and the overall cost required for the development of the
project.
Appendices
Test plans are generated by the testing group based on the described external
behavior, and supporting information is included here for documentation purposes.
What is Monitoring and Control in Project Management?
Monitoring and control is one of the key processes in project management and has
great significance in making sure that business goals are achieved successfully.
These processes provide the ability to supervise, make informed decisions, and
adjust in response to changes during the project life cycle.
What is Monitoring Phase in Project Management?
1. Track Progress: Monitor the actual implementation of the project along with
indicators such as designs, timelines, budgets, and standards.
2. Identify Risks and Issues: Identify risks and possible issues at an early
stage to create immediate intervention measures as well as resolutions.
3. Ensure Resource Efficiency: Monitor how resources are being distributed
and used to improve efficiency while avoiding resource shortages.
4. Facilitate Decision-Making: Supply project managers and stakeholders with
reliable and timely information for informed decision-making.
5. Enhance Communication: Encourage honest team communication and
stakeholder engagement related to project status and challenges.
What is Control Phase in Project Management?
In project management, the control phase refers to taking corrective measures using
data collected during monitoring. It seeks to keep the project on track and in line with
its purpose by resolving issues, minimizing risks, and adopting appropriate
modifications into the project plan documents.
1. Continuous Feedback Loops
The integration of monitoring and control starts with continuous feedback loops
between the two. Monitoring provides real-time information on project
advancement, risks, and resource utilization as a foundation for control
decision-making.
2. Establishing Key Performance Indicators (KPIs)
First, identify and track KPIs that are relevant to the project goals. These
parameters act as performance measures and deviation standards, giving the
control phase a basis for making corrections.
3. Early Identification of Risks and Issues
Using continuous monitoring, problems are identified in the early stages of their
emergence. Through this integration, project teams can be proactive and implement
timely, effective corrective measures, keeping these risks from becoming major
issues.
4. Real-Time Data Analysis
During the monitoring phase, use sophisticated instruments to analyze data in real
time. Technologies such as artificial intelligence, machine learning, and data
analytics help reveal the trends, patterns, and anomalies in project dynamics for
better control.
5. Proactive Change Management
Create project plans that can be modified based on changes identified during
monitoring. Bringing control in means keeping schedules, resource allocations, and
objectives adjustable depending on conditions, so that project plans remain flexible.
8. Agile Methodologies
The use of agile methodologies enhances integration even more. Agile principles
prioritize iterative development, continual feedback, and flexible planning in
accordance with monitoring-control integration.
9. Documentation and Lessons Learned
It is vital to record insights from the monitoring and control phases. This
documentation enables future projects to use lessons learned as a resource,
fine-tune the monitoring strategy, and optimize control processes on an ongoing
basis.
Benefits of Effective Monitoring and Control
Proper monitoring and control processes play an important role in the success of
projects that are guided by project management. Here are key advantages
associated with implementing robust monitoring and control measures:
1. Timely Issue Identification and Resolution: Prompt resolution of issues is
possible if they are detected early. Effective monitoring and control catch
challenges early, preventing their escalation into serious problems likely to
affect project timelines or overall objectives.
2. Optimized Resource Utilization: Monitoring and controlling resource
allocation and use ensures optimum efficiency. Teams can detect
underutilized or overallocated resources and adjust allocations towards a
balanced workload and efficient use of resources.
3. Risk Mitigation: A continuous monitoring approach aids proactive risk
management. Identifying future risks at an early stage enables project teams
to establish mitigation plans that reduce the likelihood and severity of
adverse events on projects.
4. Adaptability to Changes: Effective monitoring highlights shifts in project
requirements, external influences, or stakeholder expectations. Control
processes enable a smooth adjustment of project plans to reflect ongoing
change, thus minimizing resistance.
5. Improved Decision-Making: As the monitoring processes provide accurate
and real-time data, decision-making improves. Stakeholders and project
managers can base their decisions on the most current information,
facilitating more strategic choices that result in better outcomes.
6. Enhanced Communication and Transparency: Frequent communication of
status, progress, and issues supports transparency. Stakeholders are kept
updated, which builds trust among team members, clients, and other
interested parties.
7. Quality Assurance: The monitoring and control processes also help in the
quality assurance of project deliverables. Through continuous tracking and
management of quality metrics, teams can find any deviations from the
standards and take timely corrective actions so that deliverables meet
stakeholders' needs.
8. Cost Control: Cost overruns can be mitigated through continuous monitoring
of project budgets and expenses, accompanied by the control processes.
Teams can spot variances early and take corrective actions to ensure that
the project stays within budget limits.
9. Efficient Stakeholder Management: Monitoring and control allow for
providing timely notice about the project's progress and any changes to
interested parties. This preemptive approach increases stakeholder
satisfaction while reducing misconceptions.
10. Continuous Improvement: Improvement continues as lessons learned
through monitoring and control activities are applied. Teams can learn from
past projects, understand what needs to improve, and implement good
practices in future initiatives establishing an atmosphere of constant
development.
11. Increased Predictability: Effective monitoring and control make project
outcomes more predictable. Accurate forecasts of timelines, costs, and risks
are attained by closely controlling project activities, which gives stakeholders
a clear understanding of what to expect from their projects.
12. Project Success and Client Satisfaction: Finally, the result of successful
monitoring and control is project success, which brings client satisfaction
and positive outcomes from the project.
Challenges and Solutions
2. Scope Creep
Challenge: Lack of sufficient control can lead to scope creep that affects
overall timelines and costs.
Solution: Implement rigid change control procedures, review the project scope
on a regular basis, and ensure that all changes are appropriately evaluated,
approved, and documented.
3. Communication Breakdowns
Challenge: Poor communication often leads to misunderstandings, delays,
and unresolved matters.
Solution: Set up proper communication channels, use collaboration tools and
have regular meetings about the project’s status to ensure productive
communication between team members and stakeholders.
4. Resource Constraints
Challenge: During the project lifecycle, new risks can surface that had not
been previously identified.
Solution: Apply a risk management approach that is responsive, reassess
risks regularly and ensure contingency plans are in place to cope with the
unexpected.
7. Resistance to Change
Challenge: Lack of proper training and skill deficiencies among the team
members pose a threat to effective use of monitoring and control mechanism.
Solution: Offer wide training opportunities, identify and resolve areas of
deficiency, and build curiosity for continuous learning with a view to
increasing the effectiveness of the project team.
10. Lack of Standardized Processes
Conclusion
In the final analysis, successful project management is based upon the incorporation
of efficient monitoring and control processes. The symbiotic relationship between
these two phases creates a dynamic framework that allows for adaptability,
transparency, and informed decision-making throughout the project life cycle.
Design means to draw or plan something to show its look, functions, and working.
Software Design is also a process to plan or convert the software requirements into
steps that need to be carried out to develop a software system. There are several
principles that are used to organize and arrange the structural components of a
software design. Software designs in which these principles are applied affect the
content and the working process of the software from the beginning.
6. Accommodate change –
The software should be designed in such a way that it accommodates the
change implying that the software should adjust to the change that is required
to be done as per the user’s need.
7. Degrade gently –
The software should be designed in such a way that it degrades gracefully
which means it should work properly even if an error occurs during the
execution.
8. Assessed for quality –
The design should be assessed or evaluated for quality, meaning that
during the evaluation, the quality of the design needs to be checked and
focused on.
The design phase of software development deals with transforming the customer
requirements as described in the SRS documents into a form implementable using a
programming language. The software design process can be divided into the
following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Elements of a System
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going
out of the system.
3. Component Interfaces.
The architectural design adds important details ignored during the interface design.
Design of the internals of the major components is ignored until the last phase of the
design.
Detailed Design
Detailed design is the specification of the internal elements of all major system
components, their properties, relationships, processing, and often their algorithms
and the data structures. The detailed design may include:
1. Decomposition of major system components into program units.
3. User interfaces.
If the system has a hierarchical architecture, the program structure can easily be
partitioned both horizontally and vertically; figure (a) represents this view.
In the given figure (a), horizontal division defines the individual branches of the
modular hierarchy for every major program function. Control modules (shown by
rectangles) are used to coordinate communication between tasks. The three
partitions are simple horizontal partitions, i.e., input, data transformation
(processing), and output.
The following benefits are provided by horizontal partitioning –
The basic behavior of the program is much less likely to change. That is why
vertically partitioned structures are less susceptible to side effects due to changes
and are thus more maintainable, which is a key quality factor.
Introduction of Software Design Process – Set 2
The following items are designed and documented during the design phase:
A concept is defined as a principal idea or invention that comes into our mind to
help understand something. The software design concept simply means the
idea or principle behind the design. It describes how you plan to solve the problem of
designing software, and the logic, or thinking behind how you will design software. It
allows the software engineer to create the model of the system software or product
that is to be developed or built. The software design concept provides a supporting
and essential structure or model for developing the right software. There are many
concepts of software design and some of them are given below:
Points to be Considered While Designing Software
2. Then break those parts into smaller parts; soon each of the parts will be
easy to handle.
Advantages:
At each step of refinement, new parts will become less complex and therefore
easier to solve.
Parts of the solution may turn out to be reusable.
Breaking problems into parts allows more than one person to solve the
problem.
Make decisions about reusable low-level utilities, then decide how these will
be put together to create higher-level constructs.
TOP DOWN APPROACH vs BOTTOM UP APPROACH:
Top down approach – Pros: easier isolation of interface errors; it benefits in
the case an error occurs towards the top of the program.
Bottom up approach – Pros: easy to create test conditions; test results are
easy to observe.
The Structured Programming Approach is supported by high-level languages such
as:
C
C++
Java
C#
etc.
On the contrary, in assembly languages such as that of the 8085 microprocessor,
the statements do not get executed in a structured manner. Jump statements like
GOTO are allowed, so the program flow might be random. A structured program
mainly consists of three types of elements:
Selection Statements
Sequence Statements
Iteration Statements
A structured program consists of well-structured and separated modules, and entry
and exit in a structured program are single events: the program uses single-entry
and single-exit elements. Therefore a structured program is a well-maintained, neat,
and clean program. This is the reason why the Structured Programming Approach is
well accepted in the programming world.
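The three element types can be shown in a few lines of Python; each construct below has a single entry and a single exit, with no jump statements (the variable names are invented for the example):

```python
numbers = [3, 1, 4, 1, 5]   # Sequence: statements run one after another
total = 0

for n in numbers:           # Iteration: a loop entered and exited at one point
    if n % 2 == 1:          # Selection: a structured decision, no GOTO needed
        total += n          # odd values are accumulated
    # control always falls through to the loop's single exit point

print(total)  # 3 + 1 + 1 + 5 = 10
```

Because control can only flow through these single-entry, single-exit constructs, the program's behavior can be read top to bottom, which is what makes structured programs easy to maintain.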
Advantages of Structured Programming Approach:
2. User Friendly
3. Easier to Maintain
Disadvantages of Structured Programming Approach:
2. The converted machine code is not the same as for assembly language.
We start with a high-level description of what the program does. Then, in each step,
we take one part of our high-level description and refine it. Refinement is actually a
process of elaboration. The process should proceed from a highly conceptual model
to lower-level details. The refinement of each module is done until we reach the
statement level of our programming language.
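A small Python sketch of the process (the problem and names are made up for illustration): each refinement step turns one conceptual phrase into more concrete code, ending at the statement level.

```python
# Step 1 (conceptual): "report the class average".
# Step 2 (refined):    read the scores -> total them -> divide by the count.
# Step 3 (statements): each refined part becomes executable code.

def read_scores():
    # stands in for real input handling in this sketch
    return [70, 80, 90]

def compute_average(scores):
    return sum(scores) / len(scores)

def report_class_average():
    scores = read_scores()
    return compute_average(scores)

print(report_class_average())  # 80.0
```

Each helper corresponds to one phrase of the high-level description, so the final program mirrors the refinement history and stays easy to modify at any level.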
What is Object Oriented Design?
Verification
Verification is the process of checking that software achieves its goal without any
bugs. It is the process of ensuring that the product being developed is being built
right. It verifies whether the developed product fulfills the requirements that we
have. Verification is simply known as Static Testing.
Static Testing
Inspections
Reviews
Walkthroughs
Desk-checking
Validation
Validation is the process of checking whether the software product is up to the mark,
or in other words, whether the product meets the high-level requirements. It is the
process of checking the validity of the product, i.e., it checks that what we are
developing is the right product. It is a comparison of the actual and expected
product. Validation is simply known as Dynamic Testing.
Dynamic Testing
3. Unit Testing
4. Integration Testing
Differences between Verification and Validation
Type of Testing: Verification is static testing, while validation is dynamic testing.
Responsibility: The quality assurance (QA) team does verification, while validation
is executed on software code with the help of the testing team.
Both coupling and cohesion are important factors in determining the maintainability,
scalability, and reliability of a software system. High coupling and low cohesion can
make a system difficult to change and test, while low coupling and high cohesion
make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design
which tells the customer what the system will do. Second is Technical Design which
allows the system builders to understand the actual hardware and software needed
to solve a customer’s problem.
The conceptual design is independent of implementation, while the technical design
describes details such as the network architecture and shows the system's
interfaces.
Cohesion is a measure of the degree to which the elements of the module are
functionally related. It is the degree to which all elements directed towards
performing a single task are contained in the component. Basically, cohesion is the
internal glue that keeps the module together. A good software design will have high
cohesion.
Advantages of High Cohesion
Better error isolation: High cohesion reduces the likelihood that a change in
one part of a module will affect other parts, making it easier to isolate and
fix errors.
Improved reliability: High cohesion leads to modules that are less prone to
errors and that function more consistently.
Disadvantages of Low Cohesion
Increased code duplication: Low cohesion can lead to the duplication of code,
as elements that belong together are split into separate modules.
Reduced functionality: Low cohesion can result in modules that lack a clear
purpose and contain elements that don’t belong together, reducing their
functionality and making them harder to maintain.
Difficulty in understanding the module: Low cohesion can make it harder for
developers to understand the purpose and behavior of a module, leading to
errors and a lack of clarity.
Conclusion
In conclusion, it’s good for software to have low coupling and high cohesion. Low
coupling means the different parts of the software don’t rely too much on each other,
which makes it safer to make changes without causing unexpected problems. High
cohesion means each part of the software has a clear purpose and sticks to it,
making the code easier to work with and reuse. Following these principles helps
make software stronger, more adaptable, and easier to grow.
What is Fourth Generation Programming Language?
The language which is used to create programs is called a programming language. It
comprises a set of instructions that are used to produce various kinds of output. A
Fourth Generation Programming Language (4GL) is designed to make coding easier
and faster for people by using more human-friendly commands, compared to older
programming languages. In this article, we are going to discuss fourth-generation
programming language in detail.
Features of Fourth Generation Programming Languages
Self-generator systems.
Form generators.
Codeless programming.
Data management.
Advantages of 4GL
Functional independence is a key to good design, and design is the key to software
quality. So we strive in most designs to make the modules independent of
one another. Not only is it easier to understand how an independent module works,
but it is also much easier to modify one. Similarly, when a system failure is traced
back through the code to the design, independent modules help to isolate and fix
the cause.
To recognize and measure the degree of module independence in a design, two
qualitative criteria are defined: cohesion and coupling. We will discuss them in the
next two sections. Much work has been done on functional independence. Parnas
and Wirth defined refinement techniques in landmark papers that improve module
independence in software design. Stevens, Myers, and Constantine elaborated this
concept further.
The process of breaking down software into multiple independent modules, where
each module is developed separately, is called Modularization.
Effective modular design can be achieved if the partitioned modules are separately
solvable, modifiable, and compilable. Here, separately compilable modules means
that after making changes in a module, there is no need to recompile the whole
software system.
Cohesion:
Cohesion is a measure of strength in relationship between various functions within a
module. It is of 7 types which are listed below in the order of high to low cohesion:
1. Functional cohesion
2. Sequential cohesion
3. Communicational cohesion
4. Procedural cohesion
5. Temporal cohesion
6. Logical cohesion
7. Co-incidental cohesion
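The contrast between the strongest and weakest levels above can be sketched in Python (both classes are invented for illustration): the first is functionally cohesive because every element serves one task, while the second shows coincidental cohesion, bundling unrelated operations in one module.

```python
# Functional cohesion (strongest): every element serves one task, interest math.
class InterestCalculator:
    def __init__(self, rate):
        self.rate = rate

    def simple_interest(self, principal, years):
        return principal * self.rate * years

# Coincidental cohesion (weakest): unrelated operations thrown together.
class MiscUtils:
    def simple_interest(self, principal, rate, years):
        return principal * rate * years

    def reverse_string(self, s):
        return s[::-1]

    def is_leap_year(self, year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

calc = InterestCalculator(0.25)
print(calc.simple_interest(1000, 2))  # 500.0
```

A change to the interest logic touches only InterestCalculator, whereas MiscUtils must be read in full to know which of its unrelated parts a change might affect.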
Coupling:
Coupling is a measure of strength in relationship between various modules within a
software. It is of 6 types which are listed below in the order of low to high coupling:
1. Data Coupling
2. Stamp Coupling
3. Control Coupling
4. External Coupling
5. Common Coupling
6. Content Coupling
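A short Python sketch of two points on this scale (the function names are invented): data coupling passes only the values a module needs, while common coupling hides a dependency on shared global state.

```python
# Data coupling (low, desirable): modules talk only through parameters.
def net_price(price, tax_rate):
    return price * (1 + tax_rate)

# Common coupling (high, undesirable): modules share mutable global state.
TAX_RATE = 0.25          # any module may read or silently overwrite this

def net_price_common(price):
    return price * (1 + TAX_RATE)   # hidden dependency on the global

print(net_price(100, 0.25))         # 125.0, dependency visible in the call
print(net_price_common(100))        # same result, dependency invisible
```

Both functions compute the same value, but only the data-coupled version can be understood, tested, and reused from its signature alone; the common-coupled one breaks silently if any other module changes TAX_RATE.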
The software needs an architectural design to represent the design of the software.
IEEE defines architectural design as “the process of defining a collection of hardware
and software components and their interfaces to establish the framework for the
development of a computer system.” The software that is built for computer-based
systems can exhibit one of these many architectural styles.
Each architectural style describes a system category that consists of a set of
components, connectors, and constraints, along with semantic models that help the
designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the
system.
1] Data-centered architecture
A data store will reside at the center of this architecture and is accessed
frequently by the other components that update, add, delete, or modify the
data present within the store.
2] Data flow architecture
This kind of architecture is used when input data is transformed into output
data through a series of computational or manipulative components.
The figure represents a pipe-and-filter architecture, since it uses both pipes
and filters: it has a set of components called filters connected by pipes.
Pipes are used to transmit data from one component to the next.
Each filter works independently and is designed to take data input of a
certain form and produce data output of a specified form for the next filter.
The filters don’t require any knowledge of the working of neighboring filters.
If the data flow degenerates into a single line of transforms, then it is termed
as batch sequential. This structure accepts the batch of data and then applies
a series of sequential components to transform it.
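The pipe-and-filter idea can be sketched in Python (the filter names are invented): each filter is an independent transformation that knows nothing about its neighbors, and the "pipe" simply feeds one filter's output to the next.

```python
def strip_blanks(lines):
    # filter 1: drop empty lines and surrounding whitespace
    return [ln.strip() for ln in lines if ln.strip()]

def to_upper(lines):
    # filter 2: normalize case; knows nothing about the other filters
    return [ln.upper() for ln in lines]

def number(lines):
    # filter 3: prefix each line with its position
    return [f"{i}: {ln}" for i, ln in enumerate(lines, 1)]

def pipeline(data, *filters):
    for f in filters:        # the "pipe": each output feeds the next filter
        data = f(data)
    return data

out = pipeline(["  hello ", "", "world"], strip_blanks, to_upper, number)
print(out)  # ['1: HELLO', '2: WORLD']
```

Because each filter only agrees on the data form (here, a list of strings), filters can be reordered, removed, or swapped without changing the others, which is the main appeal of this style.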
Disadvantages of Data Flow architecture:
Data flow architecture is not suitable for applications that require greater
user engagement.
It is not easy to coordinate two different but related streams.
3] Call and return architecture
It is used to create a program that is easy to scale and modify. Many sub-styles
exist within this category. Two of them are explained below.
Remote procedure call architecture: The components of a main program or
subprogram architecture are distributed among multiple computers on a
network.
Main program or subprogram architectures: The main program structure
decomposes into a number of subprograms or functions in a control
hierarchy. The main program invokes a number of subprograms, which can
in turn invoke other components.
4] Object Oriented architecture
The components of a system encapsulate data and the operations that must be
applied to manipulate the data. Coordination and communication between the
components are established via message passing.
Characteristics of Object Oriented architecture:
Objects protect the system's integrity.
An object is unaware of the representation of other objects.
Advantage of Object Oriented architecture:
Because an object hides its implementation details from other objects,
changes can be made without having an impact on those other objects.
5] Layered architecture
A number of different layers are defined, with each layer performing a
well-defined set of operations. Progressively, the operations of each layer
become closer to the machine instruction set.
At the outer layer, components receive the user interface operations, and at
the inner layers, components perform operating system interfacing
(communication and coordination with the OS).
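A compact Python sketch of the layered style (the three layers and all names are invented for illustration): each layer talks only to the layer directly beneath it, with the innermost layer closest to raw storage.

```python
# Inner layer: closest to the "machine" (raw storage).
class StorageLayer:
    def __init__(self):
        self._rows = {}
    def put(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows.get(key)

# Middle layer: business rules; talks only to the layer below it.
class ServiceLayer:
    def __init__(self, storage):
        self._storage = storage
    def register(self, user, email):
        if "@" not in email:          # a rule enforced at this layer only
            raise ValueError("invalid email")
        self._storage.put(user, email)
    def lookup(self, user):
        return self._storage.get(user)

# Outer layer: user interface; talks only to the service layer.
class UILayer:
    def __init__(self, service):
        self._service = service
    def handle(self, command, *args):
        if command == "register":
            self._service.register(*args)
            return "ok"
        if command == "show":
            return self._service.lookup(*args)

ui = UILayer(ServiceLayer(StorageLayer()))
ui.handle("register", "asha", "asha@example.com")
print(ui.handle("show", "asha"))  # asha@example.com
```

Because the UI layer never touches storage directly, either the storage or the business rules can be replaced without the outer layers noticing, which is the maintainability payoff of the layered style.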
Unit – IV Coding
INFORMATION HIDING:
Encapsulation:
Benefits:
Implementation techniques:
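Since this section names the concepts, here is a minimal Python sketch of information hiding through encapsulation (the class is invented for illustration): the balance can be read but changed only through a method that enforces the rules.

```python
class BankAccount:
    """Information hiding: internal state is reachable only through methods."""

    def __init__(self, opening_balance):
        self._balance = opening_balance   # leading underscore: internal detail

    def deposit(self, amount):
        if amount <= 0:                   # the invariant lives with the data
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):                    # read-only view of the hidden state
        return self._balance

acct = BankAccount(100)
acct.deposit(50)
print(acct.balance)  # 150
```

Callers depend only on deposit and balance, so the hidden representation (a plain number here) could later become, say, a list of transactions without breaking any client code, which is the benefit information hiding promises.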
Different modules specified in the design document are coded in the Coding phase
according to the module specification. The main goal of the coding phase is to code
from the design document prepared after the design phase through a high-level
language and then to unit test this code.
What is Coding Standards and Guidelines?
Good software development organizations want their programmers to adhere to
some well-defined, standard style of coding called coding standards. They usually
make their own coding standards and guidelines depending on what suits their
organization best and on the types of software they develop. It is very important for
programmers to maintain the coding standards; otherwise, the code may be
rejected during code review.
Purpose of Having Coding Standards
Modification history
Different functions supported in the module along with their input output
parameters
The names of functions should be written in camel case, starting with a
lowercase letter.
The name of a function must describe the reason for using the function
clearly and briefly.
4. Indentation: Proper indentation is very important to increase the readability of
the code. For making the code readable, programmers should use White
spaces properly. Some of the spacing conventions are given below:
All braces should start from a new line, and the code following the end
of braces should also start from a new line.
5. Error return values and exception handling conventions: All functions that
encounter an error condition should return a 0 or 1, to simplify
debugging.
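A small Python sketch of the return-value convention just mentioned (the function is invented for illustration): 0 signals success and 1 signals an error, so every caller can check the status in a uniform way.

```python
def parse_port(text):
    """Return (0, port) on success or (1, None) on error, per the convention."""
    try:
        port = int(text)
    except ValueError:
        return 1, None          # error: not a number
    if not 0 < port < 65536:
        return 1, None          # error: outside the valid TCP/UDP range
    return 0, port              # success

status, port = parse_port("8080")
print(status, port)  # 0 8080
```

Because every function in the codebase reports errors the same way, a reviewer or debugger can scan any call site and immediately see whether the error path was handled.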
Coding Guidelines in Software Engineering
Coding guidelines give some general suggestions regarding the coding style that to
be followed for the betterment of understandability and readability of the code.
Some of the coding guidelines are given below :
1. Avoid using a coding style that is too difficult to understand: Code
should be easily understandable. The complex code makes maintenance and
debugging difficult and expensive.
2. Avoid using an identifier for multiple purposes: Each variable should be
given a descriptive and meaningful name indicating the reason behind using
it. This is not possible if an identifier is used for multiple purposes and thus it
can lead to confusion to the reader. Moreover, it leads to more difficulty during
future enhancements.
3. Code should be well documented: The code should be properly
commented for understanding easily. Comments regarding the statements
increase the understandability of the code.
4. Length of functions should not be very large: Lengthy functions are very
difficult to understand. That’s why functions should be small enough to carry
out small work and lengthy functions should be broken into small ones for
completing small tasks.
5. Try not to use GOTO statement: GOTO statement makes the program
unstructured, thus it reduces the understandability of the program and also
debugging becomes difficult.
Advantages of Coding Guidelines
1. Coding guidelines increase the efficiency of the software and reduce the
development time.
2. Coding guidelines help in detecting errors in the early phases, so it helps to
reduce the extra cost incurred by the software project.
3. If coding guidelines are maintained properly, then the readability and
understandability of the code increase, which reduces the complexity of the code.
This process ensures that the application can handle all exceptional and boundary
cases, providing a robust and reliable user experience. By systematically identifying
and fixing issues, software testing helps deliver high-quality software that performs
as expected in various scenarios. Software testing aims not only at finding faults in
the existing software but also at finding measures to improve the software in terms
of efficiency, accuracy, and usability. Software Testing is a method to assess the
functionality of a software program. The process checks whether the actual
software matches the expected requirements and ensures the software is bug-free.
The purpose of software testing is to identify errors, faults, or missing
requirements in contrast to the actual requirements. It mainly aims at measuring the
specification, functionality, and performance of a software program or application.
1. Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software
that has been built is traceable to customer requirements. It means “Are we
building the right product?”.
Importance of Software Testing
Software bugs can cause potential monetary and human loss. History holds many
examples that clearly depict how much damage was incurred when the testing
phase of software development was inadequate. Below are some examples:
1985: Canada’s Therac-25 radiation therapy machine malfunctioned due to a
software bug and delivered lethal radiation overdoses to patients, leaving 3
people injured and 3 people dead.
1994: China Airlines Airbus A300 crashed due to a software bug killing 264
people.
1996: A software bug caused U.S. bank accounts of 823 customers to be
credited with 920 million US dollars.
1999: A software bug caused the failure of a $1.2 billion military satellite
launch.
2015: A software bug in the F-35 fighter plane made it unable to detect
targets correctly.
2015: The Bloomberg terminal in London crashed due to a software bug,
affecting 300,000 traders on the financial market and forcing the government
to postpone a £3 billion debt sale.
Starbucks was forced to close more than 60% of its outlets in the U.S. and
Canada due to a software failure in its POS system.
Nissan cars were forced to recall 1 million cars from the market due to a
software failure in the car’s airbag sensory detectors.
Different Types Of Software Testing
Software testing can be broadly classified into functional and non-functional testing.
Functional testing checks the software against its requirements, while non-functional
testing covers qualities such as reliability and performance; together they improve
quality assurance and user satisfaction.
Apart from the above classification, software testing can be further divided into two
more categories:
1. Manual testing : It includes testing software manually, i.e., without using any
automation tool or script. In this type, the tester takes over the role of an end-
user and tests the software to identify any unexpected behavior or bug. There
are different stages for manual testing such as unit testing, integration testing,
system testing, and user acceptance testing. Testers use test plans, test
cases, or test scenarios to test software to ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore
the software to identify errors in it.
2. Automation testing : Also known as Test Automation, this is when the tester
writes scripts and uses other software to test the product. This process
involves automating manual testing activities. Automation testing is used to
quickly and repeatedly re-run the test scenarios that were performed
manually in manual testing.
Apart from Regression testing , Automation testing is also used to test the
application from a load, performance, and stress point of view. It increases the test
coverage, improves accuracy, and saves time and money when compared to manual
testing.
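As a minimal sketch of the idea (the discount_price function and its scenarios are hypothetical), manually executed test scenarios can be captured as data and re-run automatically on every build:

```python
def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Scenarios that were once executed manually: (inputs, expected output).
scenarios = [
    ((200.0, 25), 150.0),
    ((99.99, 0), 99.99),
    ((50.0, 100), 0.0),
]

def run_suite():
    """Re-run every scenario and collect failures; cheap to repeat per build."""
    failures = []
    for args, expected in scenarios:
        actual = discount_price(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print("failures:", run_suite())  # failures: []
```

Because the scenarios are data, adding a regression case after a bug fix is a one-line change, and the whole suite runs in milliseconds instead of requiring a manual pass.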
Different Types of Software Testing Techniques
S No. | Black Box Testing | White Box Testing
1 | Internal workings of an application are not required. | Knowledge of the internal workings is a must.
Software testing ensures that software works properly, meets user needs, and is free
of problems. It helps find and fix issues early, making sure the final product is reliable
and meets quality standards. By testing regularly and involving users, software
teams can make better products that save time and money.
Principles of Software Testing
Software testing is an important aspect of software development, ensuring that
applications function correctly and meet user expectations.
In this article, we will go into the principles of software testing, exploring key
concepts and methodologies to enhance product quality. From test planning to
execution and analysis, understanding these principles is vital for delivering robust
and reliable software solutions.
Principles of Software Testing
1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context-dependent
7. Absence of errors fallacy
1. Testing Shows the Presence of Defects
The goal of software testing is to make the software fail. Software testing reduces
the presence of defects, but it talks only about the presence of defects, not about
their absence. Testing can show that defects are present, but it cannot prove that
the software is defect-free. Even multiple rounds of testing can never ensure that
software is 100% bug-free; testing can reduce the number of defects but cannot
remove all of them.
2. Exhaustive Testing is not Possible
Exhaustive testing is the process of testing the functionality of the software with all
possible inputs (valid or invalid) and pre-conditions. Exhaustive testing is
impossible: the software can never be tested with every possible test case. Only
some test cases can be executed, under the assumption that the software will also
produce correct output for the untested cases. Testing every possible case would
take impractical amounts of cost and effort.
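A quick back-of-the-envelope calculation illustrates the scale of the problem:

```python
# Even one 32-bit integer input already has 2**32 possible values.
cases_one_input = 2 ** 32

# Two independent 32-bit inputs: 2**64 input combinations.
cases_two_inputs = 2 ** 64

# At a generous one million test executions per second, covering all
# combinations for just two inputs would take hundreds of thousands of years.
tests_per_second = 1_000_000
seconds_per_year = 365 * 24 * 60 * 60
years_needed = cases_two_inputs / tests_per_second / seconds_per_year

print(f"{cases_one_input:,} cases for a single 32-bit input")
print(f"about {years_needed:,.0f} years to cover two 32-bit inputs")
```

This is why testers select representative test cases (boundary values, equivalence classes) rather than attempting full coverage of the input space.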
3. Early Testing
Test activities shall be started as early as possible to find defects in the software. A
defect detected in the early phases of the SDLC is much less expensive to fix. For
better results, software testing should start at the initial phase, i.e., at the
requirement analysis phase.
4. Defect Clustering
In a project, a small number of modules can contain most of the defects. The Pareto
Principle for software testing states that 80% of software defects come from 20% of
modules.
5. Pesticide Paradox
Repeating the same test cases again and again will not find new bugs. It is
therefore necessary to review the test cases regularly and to add or update test
cases to find new bugs.
6. Testing is Context-Dependent
The testing approach depends on the context of the software developed. Different
types of software need to perform different types of testing. For example, The testing
of the e-commerce site is different from the testing of the Android application.
7. Absence of Errors Fallacy
If a built software is 99% bug-free but does not meet the user requirements, it is
unusable. It is not enough for software to be largely bug-free; it is also mandatory
that it fulfills all the customer requirements.
Types of Software Testing
1. Unit Testing
2. Integration Testing
3. Regression Testing
4. Smoke Testing
5. System Testing
6. Alpha Testing
7. Beta Testing
8. Performance Testing
1. Unit Testing
Unit tests are typically written by developers as they write the code for a given unit.
They are usually written in the same programming language as the software and use
a testing framework or library that provides the necessary tools for creating and
running the tests. These frameworks often include assertion libraries, which allow
developers to write test cases that check the output of a given unit against expected
results. The tests are usually run automatically and continuously as part of the
software build process, and the results are typically displayed in a test runner or a
continuous integration tool.
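As a sketch of the idea, using Python's built-in unittest as the testing framework (the is_leap_year function is a hypothetical unit under test), each test case asserts the unit's output against an expected result:

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Unit under test (hypothetical): the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class IsLeapYearTests(unittest.TestCase):
    """Each test checks the unit's output against an expected result."""

    def test_regular_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_quadricentennial_is_leap(self):
        self.assertTrue(is_leap_year(2000))
```

Running `python -m unittest` discovers and executes these tests; in practice the same suite is triggered automatically as part of the build or continuous integration pipeline.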
Unit Testing has several benefits, including early bug detection and improved
code quality.
Integration testing is typically performed after unit testing and before system testing.
It is usually done by developers and test engineers, and it is usually carried out at
the module level. Integration tests are typically automated and run frequently, as part
of the software build process, to ensure that the software remains stable and free of
defects over time.
Integration Testing has several benefits, including early detection of defects in
the interactions between modules.
Regression testing is typically performed after unit testing and integration testing. It is
usually done by developers and test engineers and it is usually carried out by re-
running a suite of previously passed test cases. The test cases are chosen to cover
the areas of the software that were affected by the changes and to ensure that the
most critical functionality of the software is still working correctly. Regression testing
is typically automated and run frequently, as part of the software build process, to
ensure that the software remains stable and free of defects over time.
Regression Testing has several benefits, including:
Early detection and isolation of defects, which can save time and money by
allowing developers to fix errors before they become more costly to fix.
Smoke testing is typically performed early in the software testing process, after the
software has been built and before more extensive testing is done. It is usually done
by developers and test engineers and it is usually carried out by running a small set
of critical test cases that exercise the most important functionality of the software.
Smoke tests are usually automated and can be run as part of the software build
process.
Smoke Testing has several benefits, including:
Early identification of major issues, which can save time and money by allowing
developers to fix errors before they become more costly to fix.
Improved software quality and reliability, as smoke testing helps to ensure that
the software is stable enough to proceed with further testing.
Facilitation of continuous integration and delivery, as smoke testing helps to
ensure that new builds of the software are stable and reliable before they are
released.
System testing is typically performed after unit testing, integration testing, and
regression testing. It is usually done by test engineers and it is usually carried out by
running a set of test cases that cover all the functionality of the software. The test
cases are chosen to cover the requirements and specifications of the software and to
ensure that the software behaves correctly under different conditions and scenarios.
System testing is typically automated and run frequently, as part of the software build
process, to ensure that the software remains stable and free of defects over time.
System Testing has several benefits, including:
Early detection and isolation of defects, which can save time and money by
allowing developers to fix errors before they become more costly to fix.
It helps to ensure that the software meets all the requirements and
specifications that it was designed for, providing increased confidence in the
software to the development team and end-users.
Conclusion
Software testing is essential for ensuring applications meet user expectations and
function correctly. Understanding key principles like detecting defects early and
recognizing the impossibility of exhaustive testing is vital for delivering reliable
software.
Various types of testing, including unit, integration, regression, smoke, and system
testing, offer unique benefits like early bug detection and improved code quality. By
embracing these principles and employing diverse testing methods, developers can
enhance product quality and user satisfaction.
Levels of Software Testing
While performing software testing, the following testing principles must be applied
by every software engineer:
1. Planning of tests, i.e., how the tests will be conducted, should be done long
before the beginning of the test.
2. The Pareto principle can be applied to software testing: 80% of all errors
identified during testing will likely be traceable to 20% of all program modules.
3. Testing should begin "in the small" and progress toward testing "in the large".
4. Exhaustive testing, which simply means testing all possible combinations of
data, is not possible.
Functional Testing is a type of Software Testing in which the system is tested against
the functional requirements and specifications. Functional testing ensures that the
application properly satisfies the requirements or specifications. This type of
testing is particularly concerned with the result of processing. It focuses on
simulating actual system usage but makes no assumptions about the internal
structure of the system.
What is Functional Testing?
Functional testing is defined as a type of testing that verifies that each function of
the software application works in conformance with the requirement and
specification. This testing is not concerned with the source code of the application.
Each functionality of the software application is tested by providing appropriate test
input, expecting the output, and comparing the actual output with the expected
output. This testing focuses on checking the user interface, APIs, database,
security, client or server application, and functionality of the Application Under
Test. Functional testing can be manual or automated.
Purpose of Functional Testing
Functional testing mainly involves black box testing and can be done manually or
using automation. The purpose of functional testing is to:
Test each function of the application: Functional testing tests each function
of the application by providing the appropriate input and verifying the output
against the functional requirements of the application.
Test primary entry function: In functional testing, the tester tests each entry
function of the application to check all the entry and exit points.
Test flow of the GUI screen: In functional testing, the flow of the GUI screen
is checked so that the user can navigate throughout the application.
What to Test in Functional Testing?
The goal of functional testing is to check the functionalities of the application under
test. It concentrates on:
Basic Usability: Functional testing involves basic usability testing to check
whether the user can freely navigate through the screens without any
difficulty.
Mainline functions: This involves testing the main features and functions of
the application.
Accessibility: This involves testing the accessibility of the system for the
user.
Error Conditions: Functional testing involves checking whether the
appropriate error messages are being displayed or not in case of error
conditions.
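A minimal sketch of this black-box style of checking (validate_login and its error messages are hypothetical): each case feeds an input and compares the actual output, including error messages, against the expected output, without inspecting the implementation.

```python
def validate_login(username, password):
    """Hypothetical function under test: returns a user-facing status message."""
    if not username:
        return "Error: username is required"
    if len(password) < 8:
        return "Error: password must be at least 8 characters"
    return "Login successful"

# Functional cases: (input, expected output), including error conditions.
cases = [
    (("alice", "s3cretpass"), "Login successful"),
    (("", "s3cretpass"), "Error: username is required"),
    (("alice", "short"), "Error: password must be at least 8 characters"),
]

for args, expected in cases:
    actual = validate_login(*args)
    assert actual == expected, f"{args}: expected {expected!r}, got {actual!r}"
print("all functional cases passed")
```

Note how the error-condition cases verify that the appropriate message is displayed, exactly as described above.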
Functional Testing Process
Below are the differences between functional testing and non-functional testing:
Parameter | Functional Testing | Non-Functional Testing
Objective | The objective is to validate software actions. | The objective is to validate the performance of the software system.
Requirements | Carried out using the functional specification. | Carried out using the performance specifications.
It provides a single interface that lets the tester write test scripts in languages
like Ruby, Java, NodeJS, etc.
It provides a playback tool for authoring functional tests across most modern
web browsers.
2. QTP: The QTP tool, now known as UFT (Unified Functional Testing), is designed
to perform automated functional testing without the need to monitor the system in
intervals.
It can be used along with the Selenium WebDriver to automate tests for web
applications.
It lets testers dynamically analyze how well SOAP and REST service contracts
are covered by the functional tests.
5. Cucumber: Cucumber is an open-source testing tool written in Ruby language.
This tool allows for easy reuse of code in tests due to the style of writing the
tests.
Limitations of Functional Testing
Missed critical errors: There are chances while executing functional tests
that critical and logical errors are missed.
Redundant testing: There are high chances of performing redundant testing.
Structural testing is a type of software testing that uses the internal design of the
software for testing. In other words, testing performed by a team that knows the
internal implementation of the software is known as structural testing.
Structural testing is related to the internal design and implementation of the software,
i.e., it involves the development team members in the testing team. It tests different
aspects of the software according to its type. Structural testing is the opposite
of behavioral testing.
Types of Structural Testing
Data Flow Testing:
It uses the control flow graph to explore the unreasonable things that can happen to
data. The detection of data flow anomalies is based on the associations between
values and variables, for example variables that are used without being initialized,
or variables that are initialized but never used.
Slice Based Testing:
It was originally proposed by Weiser and Gallagher for software maintenance. It is
useful for software debugging, software maintenance, program understanding, and
quantification of functional cohesion. It divides the program into different slices and
tests the slices that can majorly affect the entire software.
Mutation Testing:
Sometimes it is expensive.
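Mutation testing deliberately introduces small changes ("mutants") into the code and checks whether the existing test suite detects them. The idea can be sketched in a few lines (a toy example; in practice a mutation tool generates the mutants automatically):

```python
def max_of(a, b):
    """Original implementation of the unit under test."""
    return a if a > b else b

def max_of_mutant(a, b):
    """Mutant: the comparison operator has been flipped from > to <."""
    return a if a < b else b

def suite_passes(impl):
    """Run the whole test suite against a given implementation."""
    return impl(3, 5) == 5 and impl(5, 3) == 5 and impl(4, 4) == 4

original_passes = suite_passes(max_of)           # expect True
mutant_killed = not suite_passes(max_of_mutant)  # expect True: suite catches the bug
print("original passes:", original_passes, "| mutant killed:", mutant_killed)
```

A mutant that survives (no test fails) reveals a gap in the test suite, which is why the technique is effective but expensive: every mutant requires a full re-run of the tests.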
Structural Testing Tools
JBehave
Cucumber
Junit
Cfix
Test Plan
What is a Test Plan? A test plan is a document that consists of all future testing-
related activities. It is prepared at the project level and, in general, it defines the
work products to be tested, how they will be tested, and how the test types are
distributed among the testers. In any company, whenever a new project is taken up,
the test manager of the team prepares a test plan before the testers get involved in
testing.
The test plan serves as the blueprint that changes according to the
progressions in the project and stays current at all times.
The following are some of the key benefits of making a test plan:
Defines Objectives: A test plan clearly outlines the testing objectives and the
scope of testing activities, ensuring that all team members understand what
needs to be achieved.
Structured Approach : It provides a systematic approach to testing, detailing
the steps and processes involved, which helps in organizing the testing effort.
Avoids Scope Creep : By defining what will and will not be tested, the test
plan helps manage the scope of testing activities, preventing unnecessary
work on irrelevant areas.
Resource Allocation : Helps in identifying the necessary resources, including
personnel, tools, and environments, ensuring they are available when
needed.
Identifies Risks : A test plan identifies potential risks and outlines mitigation
strategies, helping to address issues proactively rather than reactively.
Contingency Plans : Provides plans for dealing with unexpected events or
issues that may arise during testing.
Stakeholder Alignment : Facilitates communication among stakeholders,
including developers, testers, project managers, and clients, ensuring
everyone is aligned on the testing objectives, approach, and schedule.
Documentation : Serves as a comprehensive document that can be referred
to by all team members, aiding in knowledge sharing and transparency.
Resource Optimization : Helps in efficiently utilizing available resources,
including time and personnel, by providing a clear plan of action.
Focus on Priorities : Ensures that testing efforts are focused on high-priority
areas that are critical to the success of the project.
Objectives of the Test Plan:
There is no hard and fast rule for preparing a test plan, but there are 15 standard
attributes that companies follow:
Example:
The testing team will get proper support from the development team.
The tester will get proper knowledge transfer from the development team.
Lack of cooperation.
7. Mitigation Plan: If any risk is involved then the company must have a backup
plan, the purpose is to avoid errors. Some points to resolve/avoid risk:
Test Manager: Manages the project, takes appropriate resources, and gives
project direction.
Tester: Identify the testing technique, verify the test approach, and save
project costs.
9. Schedule: Under this, it will record the start and end date of every testing-related
activity. For Example, writing the test case date and ending the test case date.
10. Defect Tracking: It is an important process in software engineering, as many
issues arise when you develop a critical system for business. If any defect is found
while testing, that defect must be reported to the developer team. The following
methods are used in the defect tracking process:
Information Capture: In this, we take basic information to begin the process.
Prioritize: The task is prioritized based on severity and importance.
Example: The bug can be identified using bug-tracking tools such as Jira, Mantis,
and Trac.
11. Test Environments: It is the environment that the testing team will use, i.e., the
list of hardware and software used while testing the application. The things that are
to be tested are written under this section. The installation of software is also
checked under this.
Example:
Software configuration on different operating systems, such as Windows,
Linux, Mac, etc.
Hardware Configuration depends on RAM, ROM, etc.
12. Entry and Exit Criteria: The set of conditions that should be met to start any
new type of testing or to end any kind of testing.
Entry Condition:
o Test scripts.
o Test data.
o Error logs.
After the testing phase:
o Test Reports.
o Defect Report.
o Installation Report.
It contains a test plan, defect report, automation report, assumption report, tools, and
other components that have been used for developing and maintaining the testing
effort.
16. Template: This is followed by every kind of report that is going to be prepared by
the testing team. All the test engineers will only use these templates in the project to
maintain the consistency of the product.
Types of Test Plans:
Master Test Plan: This type of test plan includes multiple test strategies and
covers multiple levels of testing. It goes into great depth on the planning and
management of testing at the various test levels and thus provides a bird’s-
eye view of the important decisions made, tactics used, etc. It includes a list of
tests that must be executed, test coverage, the connections between various
test levels, etc.
Phase Test Plan: This type of test plan emphasizes one phase of testing. It
includes further information on the levels listed in the master test plan, such
as testing schedules, benchmarks, activities, templates, and other information
that is not included in the master test plan.
Specific Test Plan: This type of test plan is designed for specific types of
testing, especially non-functional testing, for example plans for conducting
performance tests or security tests.
How to Create a Test Plan:
Below are the eight steps that can be followed to write a test plan:
Scope of testing which means the components that will be tested and the
ones that will be skipped.
Type of testing which means different types of tests that will be used in the
project.
Risks and issues that will list all the possible risks that may occur during
testing.
Test logistics mentions the names of the testers and the tests that will be run
by them.
3. Define test objectives: This phase defines the objectives and expected results of
the test execution. Objectives include:
The ideal expected outcome for every aspect of the software that needs
testing.
4. Define test criteria: Two main testing criteria determine all the activities in the
testing project:
Suspension criteria: Suspension criteria define the benchmarks for
suspending all the tests.
Exit criteria: Exit criteria define the benchmarks that signify the successful
completion of the test phase or project. These are expected results and must
match before moving to the next stage of development.
5. Resource planning: This phase aims to create a detailed list of all the resources
required for project completion. For example, human effort, hardware and software
requirements, all infrastructure needed, etc.
6. Plan test environment: This phase is very important as the test environment is
where the QAs run their tests. The test environments must be real devices, installed
with real browsers and operating systems so that testers can monitor software
behavior in real user conditions.
7. Schedule and Estimation: Break down the project into smaller tasks and allocate
time and effort for each task. This helps in efficient time estimation. Create a
schedule to complete these tasks in the designated time with a specific amount of
effort.
8. Determine test deliverables: Test deliverables refer to the list of documents,
tools, and other equipment that must be created, provided, and maintained to
support testing activities in the project.
Best Practices for Creating an effective Test Plan:
A test plan is a crucial document in the software testing lifecycle that provides a
structured approach to validating and verifying the quality of a software product. It
outlines the objectives, scope, resources, and methodologies for testing, ensuring
that all aspects of the application are thoroughly assessed. By following best
practices in test plan creation, such as understanding project requirements, defining
clear objectives, and establishing a robust test environment, teams can effectively
manage testing efforts and enhance the overall quality of the software. A well-crafted
test plan not only aligns the team on testing goals but also helps in optimizing
resources, mitigating risks, and ensuring stakeholder satisfaction.
Version: Shows the version of the test case especially when the test case is
being developed in cycles or when the test case is being updated.
Author: The name of the person who has developed the test case
specification.
Date: The date on which the test case specification was prepared or updated
last.
Components of a Test Specification
Title: A short name of the test case that briefly describes it.
Objective: What is being tested and why, the purpose of the test case.
Prerequisites: Any prerequisite that is needed before running the test case,
including data precondition or environment precondition.
Test Steps: A detailed, step-by-step description of the actions to be
performed in order to execute the test case.
Test Data: The test input data that are necessary for the test, the specific
values and conditions to be used.
Expected Results: The possible results of the test such as certain outputs or
alterations of the system.
Actual Results: The results obtained in the course of testing and which are
used to define whether the test case is successful or not.
Pass/Fail Criteria: The criteria used in the evaluation of the test case in
relation to the comparison of the expected and actual outcomes.
Comments: Any other comments that can be made about the test case, for
example, problems faced during the test case or recommendations for future
test cases.
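Such a specification can also be captured as structured data, which makes it directly usable by automated tooling. The field names below mirror the components listed above; the concrete values are illustrative only:

```python
# A test case specification captured as structured data. The schema mirrors
# the components listed above; the concrete values are illustrative only.
password_reset_case = {
    "title": "Password reset via 'Forgot Password'",
    "objective": "Verify that users can reset their password",
    "prerequisites": ["A registered user account exists"],
    "test_steps": [
        "Click 'Forgot Password' on the login page",
        "Submit the registered email address",
        "Follow the emailed link and choose a new password",
    ],
    "test_data": {"email": "user@example.com"},
    "expected_results": "Password changed confirmation shown",
    "actual_results": None,  # recorded during execution
    "pass_fail": None,       # derived from expected vs actual
}

def evaluate(case, actual):
    """Record the actual result and derive the pass/fail verdict."""
    case["actual_results"] = actual
    case["pass_fail"] = "Pass" if actual == case["expected_results"] else "Fail"
    return case["pass_fail"]
```

Calling evaluate(password_reset_case, "Password changed confirmation shown") records the outcome and derives a "Pass" verdict, mirroring how the Actual Results and Pass/Fail fields are filled in during execution.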
Types of Test Case Specifications
Test case specifications can be categorized into different types based on their
purpose and scope:
Functional Test Cases: Concentrate on the functional testing and make sure
that all the functions of the software are working correctly.
Non-Functional Test Cases: Tackle the non-technical aspects of the
software like performance, security, usability, and compatibility.
Regression Test Cases: Designed to ensure that new functions do not
negatively impact the performance of existing functionalities in the software.
Integration Test Cases: The primary emphasis is placed on checking the
interaction of some of the components or modules of the software.
User Acceptance Test Cases: Ensure that the software satisfies the end-
users and their expectations towards the software.
Process of Writing Test Specifications
Objective | Verify that users can reset their password using the ‘Forgot Password’ feature.
Expected Results | The user receives a password reset email, follows the link, enters a new password, and receives confirmation that the password has been successfully changed.
Actual Results | [Leave blank for test execution]
Consistency: Test case specifications are important since they make testing
consistent whether it is manual or automated.
Efficiency: It is easier and faster to create automated test scripts when clear
specifications are provided.
Reusability: Test case specifications are useful for future automation where
developers are able to create reusable automated test scripts that can be
used in other projects or in other contexts.
Traceability: Requirements traceability is made possible by specifications as
it links the automated test scripts to the requirements to confirm that all
functionalities have been tested.
Maintenance: Automated test scripts can be maintained by using test case
specifications so that the modified test cases can be easily identified.
Reliability testing is a type of software testing that evaluates the ability of a system
to perform its intended function consistently and without failure over an extended
period.
1. Reliability testing aims to identify and address issues that can cause the
system to fail or become unavailable.
3. It ensures that the product is fault-free and is reliable for its intended purpose.
5. It can also help to identify issues that may not be immediately apparent during
functional testing, such as memory leaks or other performance issues.
Improvement uses the insights gained from modelling and measurement to enhance
the reliability of a product or system. This involves identifying weak points,
redesigning components, or changing manufacturing processes to make the product
more reliable.
Example: After finding that a particular part in a washing machine fails frequently,
engineers might redesign that part or choose a more durable material to improve its
lifespan.
Different Ways to Perform Reliability Testing
Here are the different ways to perform reliability testing:
1. Stress Testing: Stress testing involves subjecting the system to high levels of
load or usage to identify performance bottlenecks or issues that can cause the
system to fail.
2. Endurance Testing: Endurance testing involves running the system
continuously for an extended period to identify issues that may occur over
time.
3. Recovery Testing: Recovery testing tests the system’s ability to recover
from failures or crashes.
4. Environmental Testing: Conducting tests on the product or system in various
environmental settings, such as temperature shifts, humidity levels, vibration
exposure or shock exposure, helps in evaluating its dependability in real-world
circumstances.
5. Performance Testing: Assessing the system’s performance at both peak and
normal load levels makes it possible to ensure that the system consistently
satisfies the required specifications and performance criteria.
6. Regression Testing: After every update or modification, the system is tested again with the same set of test cases to find any problems introduced by the code changes.
7. Fault Tree Analysis: Understanding the elements that lead to system failures
can be achieved by identifying probable failure modes and examining the
connections between them.
It is important to note that reliability testing may require specialized tools and test
environments, and that it’s often a costly and time-consuming process.
Types of Reliability Testing
3. Load Testing
Load testing is carried out to determine whether the application supports the required load without breaking down. It checks the performance of the software under maximum workload.
4. Stress Testing
This type of testing involves subjecting the system to high levels of usage or load in
order to identify performance bottlenecks or issues that can cause the system to fail.
5. Endurance Testing
This type of testing involves running the system continuously for an extended period
of time in order to identify issues that may occur over time, such as memory leaks or
other performance issues.
Recovery testing: This type of testing involves testing the system’s ability to recover
from failures or crashes, and to return to normal operation.
6. Volume Testing
Volume Testing is a type of testing involves testing the system’s ability to handle
large amounts of data. This type of testing is similar to endurance testing, but it
focuses on the stability of the system under a normal, expected load over a long
period of time.
7. Spike Testing
This type of testing involves subjecting the system to sudden, unexpected increases
in load or usage in order to identify performance bottlenecks or issues that can
cause the system to fail.
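As a minimal illustration of the first two types, a stress or endurance run can be sketched as repeatedly invoking the unit under test and counting failures. Everything here (`process_request`, the iteration count) is a hypothetical sketch, not a real testing framework:

```python
import time

def process_request(payload):
    # Hypothetical unit under test: simply echoes the payload back.
    return {"ok": True, "payload": payload}

def stress_test(func, iterations=10_000):
    """Call func repeatedly at full speed and count failures (exceptions)."""
    failures = 0
    start = time.perf_counter()
    for i in range(iterations):
        try:
            func({"id": i})
        except Exception:
            failures += 1
    elapsed = time.perf_counter() - start
    return {"iterations": iterations, "failures": failures,
            "throughput_per_s": iterations / elapsed}

result = stress_test(process_request)
print(result["failures"])  # 0 for this trivial function
```

A real harness would additionally ramp the load up (stress), hold it for hours or days (endurance), or spike it suddenly, and would monitor memory and latency rather than only counting exceptions.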
Measurement of Reliability Testing
Measurement of reliability testing is done in terms of the following metrics:
Mean Time To Failure (MTTF): The average time the system operates before a failure occurs.
Mean Time To Repair (MTTR): The average time taken to fix a failure.
Mean Time Between Failures (MTBF): The average time between two consecutive failures; MTBF = MTTF + MTTR.
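These metrics can be computed directly from a failure log. A minimal sketch, where the failure times and repair durations are made-up illustrative data:

```python
# Made-up failure data: (failure_time_in_hours, repair_duration_in_hours)
failure_log = [(100.0, 2.0), (250.0, 4.0), (430.0, 3.0)]

uptimes = []
prev_restored = 0.0
for fail_at, repair_h in failure_log:
    uptimes.append(fail_at - prev_restored)   # operating time before this failure
    prev_restored = fail_at + repair_h        # service restored after the repair

mttf = sum(uptimes) / len(uptimes)                          # mean operating time to a failure
mttr = sum(r for _, r in failure_log) / len(failure_log)    # mean time to repair
mtbf = mttf + mttr                                          # mean time between failures

print(round(mttf, 2), round(mttr, 2), round(mtbf, 2))
```

For this data the uptimes are 100, 148, and 176 hours, giving an MTTF of about 141.33 hours, an MTTR of 3 hours, and an MTBF of about 144.33 hours.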
Reliability testing is crucial for ensuring software quality and user satisfaction. It
encompasses various techniques including stress testing, endurance testing,
and performance testing to evaluate a system’s ability to function consistently over
time. The key objectives are to identify failure patterns, assess system stability,
and improve overall product dependability.
Software Testing is an investigation carried out to find defects or errors in the software, so that they can be reduced or removed to increase the quality of the software, and to check whether it fulfills the specified requirements.
According to Glen Myers, software testing has the following objectives:
Finding a high number of errors during testing indicates that the testing was effective and that the test cases were well designed.
1. Improves software quality and reliability – Testing helps to identify and fix
defects early in the development process, reducing the risk of failure or
unexpected behavior in the final product.
2. Enhances user experience – Testing helps to identify usability issues and
improve the overall user experience.
5. Reduces costs – Finding and fixing defects early in the development process
is less expensive than fixing them later in the life cycle.
Disadvantages of software testing:
3. Limited coverage – Testing can only reveal defects that are present in the test
cases, and it is possible for defects to be missed.
5. Delays in delivery – Testing can delay the delivery of the software if testing
takes longer than expected or if significant defects are identified.
Unit – V Software Project Management
For properly building a product, there’s a very important concept that we all should
know in software project planning while developing a product. There are 4 critical
components in software project planning which are known as the 4P’s namely:
Product
Process
People
Project
These components play a very important role in your project and can help your team meet its goals and objectives. Now, let’s dive into each of them in a little more detail to get a better understanding:
People
The most important component of a product and its successful implementation is human resources. In building a proper product, a well-managed team with clear-cut roles defined for each person or sub-team will lead to the success of the product. We need a good team in order to save time, cost, and effort. Typical roles in software project planning are project manager, team leaders, stakeholders, analysts, and other IT professionals. Managing people successfully is a tricky process that a good project manager knows how to handle.
Product
As the name implies, this is the deliverable or the result of the project. The project manager should clearly define the product scope to ensure a successful result, and manage the team members as well as the technical hurdles that may be encountered while building the product. The product can be tangible or intangible, such as shifting the company to a new place or introducing new software in a company.
Process
In any plan, a clearly defined process is the key to the success of the product. It regulates how the team will go about its development in the respective time period. The process has several phases, such as the documentation phase, implementation phase, deployment phase, and interaction phase.
Project
The last and final P in software project planning is the Project. It can be considered the blueprint of the process. In this phase, the project manager plays a critical role: they are responsible for guiding the team members toward the project’s targets and objectives, helping and assisting them with issues, keeping an eye on cost and budget, and making sure that the project stays on track with the given deadlines.
Cost Estimation Models in Software Engineering
Cost estimation simply means a technique used to arrive at cost estimates. A cost estimate is the financial outlay on the effort to develop and test software in Software Engineering. Cost estimation models are mathematical algorithms or parametric equations used to estimate the cost of a product or a project. Various techniques or models are available for cost estimation, also known as Cost Estimation Models.
The main Cost Estimation Models are described below:
1. Empirical Estimation Technique – Empirical estimation is a technique or
model in which empirically derived formulas are used for predicting the data
that are a required and essential part of the software project planning step.
These techniques are usually based on the data that is collected previously
from a project and also based on some guesses, prior experience with the
development of similar types of projects, and assumptions. It uses the size of
the software to estimate the effort. In this technique, an educated guess of project parameters is made, so these models are based on common sense. However, as many activities are involved in empirical estimation, the technique has been formalized. Examples include the Delphi technique and the expert judgement technique.
2. Heuristic Technique – The word heuristic is derived from a Greek word meaning “to discover”. A heuristic technique is a model used for problem solving, learning, or discovery through practical methods aimed at achieving immediate goals. These techniques are flexible and simple, allowing quick decisions through shortcuts and good-enough calculations, most often when working with complex data. However, the decisions made using this technique are not necessarily optimal. In this technique, the relationship among different project parameters is expressed using mathematical equations. The most popular heuristic technique is the Constructive Cost Model (COCOMO). This technique is also used to speed up analysis and investment decisions.
3. Analytical Estimation Technique – Analytical estimation is a technique used to measure work. First, the task is divided into its basic component operations or elements for analysis. Second, if standard times are available from some other source, they are applied to each element or component of the work. Third, if no such times are available, the work is estimated from experience. Results are derived by making certain basic assumptions about the project, so the analytical estimation technique has some scientific basis. Halstead’s software science is based on an analytical estimation model.
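As a concrete instance of a heuristic model, the basic COCOMO equations estimate effort as a·(KLOC)^b person-months and development time as c·(Effort)^d months, where the constants depend on the project mode. A small sketch using Boehm’s published basic-COCOMO constants:

```python
def basic_cocomo(kloc, mode="organic"):
    """Basic COCOMO estimate using Boehm's published constants (a, b, c, d)."""
    constants = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = constants[mode]
    effort = a * kloc ** b      # estimated effort in person-months
    months = c * effort ** d    # estimated development time in months
    return effort, months

# Example: a 32 KLOC organic-mode project.
effort, months = basic_cocomo(32, "organic")
print(round(effort, 1), round(months, 1))
```

For 32 KLOC in organic mode this yields roughly 91 person-months of effort over about 14 months; real estimates would further adjust these figures with cost drivers (intermediate COCOMO), which this sketch omits.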
Short note on Project Scheduling
A schedule is your project’s timetable: it consists of sequenced activities and milestones that need to be delivered within a given period of time.
A project schedule is a mechanism used to communicate which tasks need to be performed, which organizational resources will be allocated to those tasks, and in what time frame the work must be done. Effective project scheduling leads to project success, reduced cost, and increased customer satisfaction.
Scheduling in project management means listing the activities, deliverables, and milestones to be delivered within a project. It contains more detail than your average weekly planner. The most common and important form of project schedule is the Gantt chart.
Process:
The manager needs to estimate the time and resources of the project while scheduling it. All activities in the project must be arranged in a coherent sequence, that is, in a logical and well-organized manner that is easy to understand. Initial estimates can be made optimistically, assuming that everything favorable will happen and no threats or problems will arise.
The total work is divided into various small activities or tasks in the project schedule. The project manager then decides the time required for each activity or task to be completed. Some activities are conducted in parallel for efficient performance. The project manager should be aware that no stage of the project is problem-free.
Problems that arise during the project development stage:
Human effort
Specialized hardware
Software technology
A project schedule ensures that everyone remains on the same page about which tasks are completed, their dependencies, and their deadlines.
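The sequencing described above can be sketched as a tiny dependency-aware schedule: given each task’s duration and prerequisites, compute its earliest finish time. The task names and durations below are invented for illustration:

```python
# Invented tasks: name -> (duration_in_days, prerequisite_tasks)
tasks = {
    "requirements": (5,  []),
    "design":       (7,  ["requirements"]),
    "coding":       (15, ["design"]),
    "docs":         (4,  ["design"]),            # runs in parallel with coding
    "testing":      (6,  ["coding", "docs"]),
}

finish = {}  # memoized earliest-finish times

def earliest_finish(name):
    """Earliest finish = duration + latest finish among prerequisites."""
    if name not in finish:
        duration, prereqs = tasks[name]
        start = max((earliest_finish(p) for p in prereqs), default=0)
        finish[name] = start + duration
    return finish[name]

project_end = max(earliest_finish(t) for t in tasks)
print(project_end)  # total schedule length in days
```

Here "docs" overlaps with "coding", so the project length is driven by the longer coding path; this is the core idea behind critical-path scheduling that a Gantt chart visualizes.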
The process of staffing consists of several interrelated activities, such as planning for human resource requirements, recruitment, selection, training and development, remuneration, and so on. Together, these activities make up the staffing process and are therefore called the elements or steps of the staffing process.
1. Manpower Planning
Manpower planning means estimating the number and type of personnel the organization requires.
2. Recruitment
After estimating manpower requirements, the second step in the process of staffing
is recruitment. Recruitment refers to a process of searching for prospective
employees and encouraging them to apply for jobs in the organization. It involves
identifying various resources of human force and attracting them to apply for the job.
The main purpose of recruitment is to create a pool of applicants comprising a large number of qualified candidates. Recruitment can be done through both internal and external sources. Internal sources may be used to a limited extent; to get fresh talent and a wider choice, external sources can be used.
3. Selection
Selection is the process of choosing and appointing the right candidates for various
job positions in the organization. It is treated as a negative process because it
involves the rejection of some candidates. There are many steps involved in the
process of employee selection. These steps include preliminary screening, filling-in
application, written test, interviews, medical examination, checking references, and
issuing a letter of appointment to the candidates. The most suitable candidates who
meet the requirements of the vacant job are selected. The process of selection serves two important purposes: firstly, it ensures that the organization gets the best among the available candidates, and secondly, it boosts the self-esteem and prestige of the selected candidates.
4. Placement and Orientation
Placement means putting the selected candidate on the job for which he or she was chosen, and orientation means introducing the new employee to the organization.
5. Training and Development
People are in search of careers and not jobs. Every individual must be given a
chance to rise to the top. The most favourable way for this to happen is to promote
employee learning. For this, organizations either provide training themselves within
the organization or through external institutions. This is beneficial for the organization
as well. If the employees are motivated enough, training will increase their competence, and they will be able to perform even better for the organization with greater efficiency and productivity. By providing such opportunities for career advancement, the organization captures the interest of its talented employees and holds on to them. Most organizations have a distinct department for this purpose, the Human Resource Department, though in small organizations the line manager has to perform all the managerial functions: planning, organizing, staffing, controlling, and directing. The process of staffing further involves three more stages.
6. Performance appraisal
After training the employees and having them on the job for some time, there should
be an evaluation done on their performance. Every organization has its means of
appraisal whether formal or informal. Appraisal refers to the evaluation of the
employees of the organization based on their past or present performance by some
pre-decided standards. The employee should be well aware of these standards, and his superior is responsible for providing feedback on his performance. The process of performance appraisal thus includes specifying the job, appraising performance, and providing feedback.
7. Promotion and Career planning
It has now become important for all organizations to deal with career-related issues
and promotional routes for employees. The managers should take care of the
activities that serve the long-term interests of the employees. They should be
encouraged from time to time, which will help the employees to grow and find their
true potential. Promotions are an essential part of any employee’s career. Promotion
refers to the transferring of employees from their current positions to a higher level
increasing their responsibilities, authority and pay.
8. Compensation
Every organization needs to set up plans for the salary and wages of the employees.
There are several ways to develop payment plans for the employees depending
upon the significance of the job. The worth of the job needs to be decided. Therefore,
all kinds of payments or rewards provided to the employees are referred to as compensation. Compensation may be in the form of direct financial payments,
such as salary, wages, bonuses, etc., or indirect payments like insurance or
vacations provided to the employee.
Direct financial payments are of two kinds, that is, performance-based and time-
based. In a time-based payment plan, the salary or wages are paid daily, weekly,
monthly, or annually, whereas, the performance-based payment plan is the payment
of salary or wages according to the set task. There are many ways in which the
compensation of the employee based on their performance can be calculated. There
are also plans, which are a combination of both time-based and performance-based.
There are a few factors that affect the payment plan, such as legal requirements, company policy, unions, and equity. Thus, staffing is the process that includes the acquisition, retention, promotion, and compensation of human capital, the most important resource of the organization. Several factors, such as the supply and demand of specific skills in the labour market, legal and political considerations, the company’s image and policy, the unemployment rate, human resource planning cost, labour market conditions, technological developments, and the general economic environment, may affect the execution of recruitment, selection, and training.
System Configuration Management (SCM)
Whenever software is built, there is always scope for improvement and those
improvements bring picture changes. Changes may be required to modify or update
any existing solution or to create a new solution for a problem. Requirements keep
on changing daily so we need to keep on upgrading our systems based on the
current requirements and needs to meet desired outputs. Changes should be
analyzed before they are made to the existing system, recorded before they are
implemented, reported to have details of before and after, and controlled in a manner
that will improve quality and reduce error. This is where the need for System Configuration Management comes in. System Configuration Management (SCM) is a set of activities that controls change by identifying the items subject to change, establishing relationships between those items, defining mechanisms for managing different versions, controlling the changes being implemented in the current system, and auditing and reporting on the changes made. It is essential to control changes because, if left unchecked, they may end up undermining well-running software. SCM is therefore a fundamental part of all project management activities.
Processes involved in SCM – Configuration management provides a disciplined
environment for smooth control of work products. It involves the following activities:
1. Identification and Establishment – Identifying the configuration items from
products that compose baselines at given points in time (a baseline is a set of
mutually consistent Configuration Items, which has been formally reviewed
and agreed upon, and serves as the basis of further development).
Establishing relationships among items, creating a mechanism to manage
multiple levels of control and procedure for the change management system.
2. Version control – Creating versions/specifications of the existing product to
build new products with the help of the SCM system. A description of the
version is given below:
Suppose after some changes, the version of the configuration object changes
from 1.0 to 1.1. Minor corrections and changes result in versions 1.1.1 and
1.1.2, which is followed by a major update that is object 1.2. The development
of object 1.0 continues through 1.3 and 1.4, but finally, a noteworthy change
to the object results in a new evolutionary path, version 2.0. Both versions are
currently supported.
3. Change control – Controlling changes to Configuration items (CI). The
change control process is explained in Figure below:
A change request (CR) is submitted and evaluated to assess technical merit,
potential side effects, the overall impact on other configuration objects and
system functions, and the projected cost of the change. The results of the
evaluation are presented as a change report, which is used by a change
control board (CCB) —a person or group who makes a final decision on the
status and priority of the change. An Engineering Change Request (ECR) is generated for each approved change; if the change is rejected, the CCB notifies the developer with the reason. The ECR describes the
change to be made, the constraints that must be respected, and the criteria
for review and audit. The object to be changed is “checked out” of the project
database, the change is made, and then the object is tested again. The object
is then “checked in” to the database and appropriate version control
mechanisms are used to create the next version of the software.
4. Configuration auditing – A software configuration audit complements the
formal technical review of the process and product. It focuses on the technical
correctness of the configuration object that has been modified. The audit
confirms the completeness, correctness, and consistency of items in the SCM
system and tracks action items from the audit to closure.
5. Reporting – Providing accurate status and current configuration data to
developers, testers, end users, customers, and stakeholders through admin
guides, user guides, FAQs, Release notes, Memos, Installation Guide,
Configuration guides, etc.
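As an illustration, the change-control flow described in activity 3 can be modelled as a small state machine. The state names below paraphrase the text and are not taken from any real CCB tool:

```python
# Allowed transitions in the change-control process described above.
TRANSITIONS = {
    "submitted":   ["evaluated"],
    "evaluated":   ["approved", "rejected"],  # CCB decision on the change report
    "approved":    ["checked_out"],           # ECR generated, object checked out
    "checked_out": ["changed"],
    "changed":     ["tested"],
    "tested":      ["checked_in"],            # new version created in the database
    "rejected":    [],                        # developer notified with the reason
}

def advance(state, next_state):
    """Move to next_state if the process allows it, else reject the move."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state

state = "submitted"
for step in ["evaluated", "approved", "checked_out", "changed", "tested", "checked_in"]:
    state = advance(state, step)
print(state)  # a change request ends either checked in or rejected
```

Encoding the workflow this way makes the rule "no change reaches the database without evaluation, approval, and testing" mechanically checkable, which is exactly what change control is meant to guarantee.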
System Configuration Management (SCM) is a software engineering practice that
focuses on managing the configuration of software systems and ensuring that
software components are properly controlled, tracked, and stored. It is a critical
aspect of software development, as it helps to ensure that changes made to a
software system are properly coordinated and that the system is always in a known
and stable state.
SCM involves a set of processes and tools that help to manage the different
components of a software system, including source code, documentation, and other
assets. It enables teams to track changes made to the software system, identify
when and why changes were made, and manage the integration of these changes
into the final product.
Importance of Software Configuration Management
1. Effective Bug Tracking: Linking code modifications to reported issues makes bug tracking more effective.
4. Support for Big Projects: SCM offers an orderly method of handling code modifications in big projects, fostering a well-organized development process.
1. Improved productivity and efficiency by reducing the time and effort required
to manage software changes.
2. Reduced risk of errors and defects by ensuring that all changes were properly
tested and validated.
3. Potential for conflicts and delays, particularly in large development teams with
multiple contributors.
Software maintenance is a continuous process that occurs throughout the entire life
cycle of the software system.
This can include fixing bugs, adding new features, improving performance, or
updating the software to work with new hardware or software systems.
Software maintenance can be costly and complex, especially for large systems, so the cost and effort of maintenance should be taken into account during the planning and development phases of a software project.
It’s also important to have a clear and well-defined maintenance plan that
includes regular maintenance activities, such as testing, backup, and bug
fixing.
Several Key Aspects of Software Maintenance
1. Bug Fixing: The process of finding and fixing errors and problems in the
software.
2. Enhancements: The process of adding new features or improving existing
features to meet the evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency,
and reliability of the software.
4. Porting and Migration: The process of adapting the software to run on new
hardware or software platforms.
5. Re-Engineering: The process of improving the design and architecture of the
software to make it more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the
documentation for the software, including user manuals, technical
specifications, and design documents.
Several Types of Software Maintenance
1. Corrective Maintenance: This involves fixing errors and bugs in the software
system.
2. Patching: It is an emergency fix implemented mainly due to pressure from
management. Patching is done for corrective maintenance but it gives rise to
unforeseen future errors due to lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt
it to changes in the environment, such as changes in hardware or software,
government policies, and business rules.
4. Perfective Maintenance: This involves improving functionality, performance,
and reliability, and restructuring the software system to improve changeability.
5. Preventive Maintenance: This involves taking measures to prevent future
problems, such as optimization, updating documentation, reviewing and
testing the system, and implementing preventive measures such as backups.
Correct faults.
Implement enhancements.
Retire software.
The typical lifespan of a software product is considered to be ten to fifteen years, but software maintenance is open-ended and may continue for decades, making it very expensive.
As technology advances, maintaining old software becomes increasingly costly.
Changes made to the software can easily harm its original structure, making subsequent changes difficult.
There is a lack of Code Comments.
Lack of documentation: Poorly documented systems can make it difficult to
understand how the system works, making it difficult to identify and fix
problems.
Legacy code: Maintaining older systems with outdated technologies can be
difficult, as it may require specialized knowledge and skills.
Complexity: Large and complex systems can be difficult to understand and
modify, making it difficult to identify and fix problems.
Changing requirements: As user requirements change over time, the
software system may need to be modified to meet these new requirements,
which can be difficult and time-consuming.
Interoperability issues: Systems that need to work with other systems or
software can be difficult to maintain, as changes to one system can affect the
other systems.
Lack of test coverage: Systems that have not been thoroughly tested can be
difficult to maintain as it can be hard to identify and fix problems without
knowing how the system behaves in different scenarios.
Lack of personnel: A lack of personnel with the necessary skills and
knowledge to maintain the system can make it difficult to keep the system up-
to-date and running smoothly.
High-Cost: The cost of maintenance can be high, especially for large and
complex systems, which can be difficult to budget for and manage.
Software Reverse Engineering is the process of recovering the design and the requirements specification of a product from an analysis of its code. Reverse engineering is becoming important because several existing software products lack proper documentation, are highly unstructured, or have a structure that has degraded through a series of maintenance efforts.
Why Reverse Engineering?
It’s important to note that reverse engineering can be a complex and time-
consuming process, and it is important to have the necessary skills, tools, and
knowledge to perform it effectively. Additionally, it is important to consider the
legal and ethical implications of reverse engineering, as it may be illegal or
restricted in some jurisdictions.
Disadvantages of Software Maintenance
Risk of introducing new bugs: The process of fixing bugs or adding new
features can introduce new bugs or problems, making it important to
thoroughly test the software after maintenance.
User resistance: Users may resist changes or updates to the software,
leading to decreased satisfaction and adoption.
Compatibility issues: Maintenance can sometimes cause compatibility
issues with other software or hardware, leading to potential integration
problems.
Lack of documentation: Poor documentation or lack of documentation can
make software maintenance more difficult and time-consuming, leading to
potential errors or delays.
Technical debt: Over time, software maintenance can lead to technical debt,
where the cost of maintaining and updating the software becomes
increasingly higher than the cost of developing a new system.
Skill gaps: Maintaining software systems may require specialized skills or
expertise that may not be available within the organization, leading to
potential outsourcing or increased costs.
Inadequate testing: Inadequate testing or incomplete testing after
maintenance can lead to errors, bugs, and potential security vulnerabilities.
End-of-life: Eventually, software systems may reach their end-of-life, making
maintenance and updates no longer feasible or cost-effective. This can lead to
the need for a complete system replacement, which can be costly and time-
consuming.