Software Engineering R18
The cost and impact of these changes are assessed to see how much of the system is
affected by the change and how much it might cost to implement. If the proposed
changes are accepted, a new release of the software system is planned. During release
planning, all the proposed changes (fault repair, adaptation, and new functionality) are
considered.
A decision is then made on which changes to implement in the next version of the
system. The process of change implementation is an iteration of the development
process where the revisions to the system are designed, implemented and tested.
a. Change in requirements with time: With the passage of time, an organization's
needs and way of working can change substantially, so in these frequently changing
times the tools (software) it uses must also change in order to maximize
performance.
c. Errors and bugs: As deployed software ages within an organization, its precision
decreases, and its ability to bear an increasingly complex workload also continually
degrades. In that case it becomes necessary to avoid the use of obsolete, aged
software. All such obsolete software needs to undergo the evolution process in order
to become robust enough for the workload and complexity of the current environment.
d. Security risks: Using outdated software within an organization may put you on
the verge of various software-based cyberattacks and could illegally expose
confidential data associated with the software in use. It therefore becomes necessary
to avoid such security breaches through regular assessment of the security
patches/modules used within the software. If the software is not robust enough to
withstand current cyberattacks, it must be changed (updated).
e. For new functionality and features: In order to increase performance, speed up
data processing, and add other functionality, an organization needs to continuously
evolve the software throughout its life cycle so that stakeholders and clients of the
product can work efficiently.
Software is the set of instructions or computer programs that, when executed, provide
desired features, functions, and performance; the data structures that enable the
programs to adequately manipulate information; and the documents that describe the
operation and use of the programs.
Characteristics of software:
Some characteristics of software are given below:
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
5. Product-line Software:
Designed to provide a specific capability for use by many different customers, product-
line software can focus on a limited and esoteric marketplace or address the mass
consumer market.
6. Web Application:
It is a client-server computer program in which the client runs in a web browser. In
their simplest form, web apps can be little more than a set of linked hypertext files
that present information using text and limited graphics. However, as e-commerce and
B2B applications grow in importance, web apps are evolving into sophisticated
computing environments that not only provide standalone features, computing
functions, and content to the end user, but also integrate with corporate databases
and business applications.
7. Artificial Intelligence Software:
Artificial intelligence software makes use of nonnumerical algorithms to solve
complex problems that are not amenable to computation or straightforward analysis.
Applications within this area include robotics, expert systems, pattern recognition,
artificial neural networks, theorem proving, and game playing.
Myth 1:
We have all the standards and procedures available for software development.
Fact:
● Software experts do not know all the requirements for the software
development in advance.
● All existing processes are incomplete, as new software development is
based on new and different problems.
Myth 2:
Adding the latest hardware will improve software development.
Fact:
● The role of the latest hardware in standard software development is not
very significant; computer-aided software engineering (CASE) tools are
more important than hardware for producing quality and productivity.
● Hence, the hardware resources are often misused.
Myth 3:
If we get behind schedule, we can add more programmers and catch up.
Fact:
● If software is late, adding more people will merely make the problem
worse. This is because the people already working on the project must
spend time educating the newcomers and are thus taken away from their
work. The newcomers are also far less productive than the existing software
engineers, so the work put into training them does not immediately yield a
corresponding reduction in the remaining work.
(ii) Customer Myths:
The customer can be the direct users of the software, the technical team, the
marketing/sales department, or another company. The customer has myths that lead to
false expectations and, in turn, dissatisfaction with the developer.
Myth 1:
Fact:
Myth 2:
(iii) Practitioner's Myths:
Myth 1:
They believe that their work has been completed with the writing of the plan.
Fact:
● It is true that 60-80% of the total effort goes into the maintenance phase,
i.e., after the software release is first delivered to customers. Effort is
still required once the product is available.
Myth 2:
Myth 3:
Fact:
Myth 4:
Software engineering will make us create voluminous and unnecessary documentation
and will invariably slow us down.
Fact:
The process framework is required for representing common process activities. Five
framework activities are described in a process framework for software engineering:
communication, planning, modeling, construction, and deployment. Each engineering
action defined by a framework activity
comprises a list of needed work outputs, project milestones, and software quality
assurance (SQA) points.
Umbrella activities are activities that take place during a software development
process for improved project management and tracking.
1. Software project tracking and control: This is an activity in which the
team can assess progress and take corrective action to maintain the
schedule. Take action to keep the project on time by comparing the
project’s progress against the plan.
2. Risk management: The risks that may affect project outcomes or quality
can be analyzed. Analyze potential risks that may have an impact on the
software product’s quality and outcome.
3. Technical reviews: Assess software engineering work products to uncover
and remove errors before they propagate to the next activity. At each level
of the process, errors are evaluated and fixed.
Also, project and product measures are used to assist the software team.
Objectives of CMMI:
Staged Representation:
Uses a pre-defined set of process areas to define an improvement path.
Process Patterns:
As the software team moves through the software process, it encounters problems, so
it would be very useful if solutions to those problems were readily available and they
could be resolved quickly. A process pattern describes a process-related problem
encountered during software engineering work, identifies the environment in which
the problem is found, and suggests one or more proven solutions to it. By solving
problems, a software team can construct a process that best meets the needs of a
project.
Type:
It is of three types:
1. Stage pattern
2. Task pattern
3. Phase pattern
Initial Context: Conditions under which the pattern applies are described by the
initial context, i.e., prior to the initiation of the pattern:
Resulting Context: Once the pattern has been successfully implemented, it describes
the resulting conditions, i.e., upon completion of the pattern:
Known Uses and Examples: Indicates the specific instances in which the pattern is
applicable. For example, communication is mandatory at the beginning of every
software project, is recommended throughout the software project, and is mandatory
once the deployment activity is underway.
Initial Context: Before going to prototyping, these basic conditions should be met:
1. Stakeholders have some idea about their requirements, i.e., what they exactly want.
Known Uses & Examples: When stakeholder requirements are unclear and
uncertain, prototyping is recommended.
Though an organisation is the assessment objective, the outcomes of a process
evaluation may vary even when the same approach is applied again. The different
results are mainly due to two reasons. First, the organization that is being
investigated must be determined: when the company is very large, it may have
different definitions of the organization, so the actual scope of appraisal may differ
in successive assessments. Second, even for the same organization, the sample of
projects selected to represent the organization may affect the scope and result.
Process maturity is important when the organisation intends to embark on a long-term
improvement strategy.
SCAMPI:
SCAMPI stands for Standard CMMI Assessment Method for Process Improvement.
To fulfil the demands of the CMMI paradigm, the Standard
CMMI Assessment Method for Process Improvement (SCAMPI) was created
(Software Engineering Institute, 2000). Moreover, it is based on the CBA IPI. The
CBA IPI and SCAMPI both have three steps.
The Personal Software Process (PSP) is the skeleton or structure that assists
engineers in finding a way to measure and improve their way of working to a great
extent. It helps them develop their skills at a personal level, along with planning
and making estimations against the plans.
Objectives of PSP :
The aim of PSP is to provide software engineers with disciplined methods for the
betterment of personal software development processes.
The PSP helps software engineers to:
● Improve their approximating and planning skills.
● Make promises that can be fulfilled.
● Manage the standards of their projects.
● Reduce the number of faults and imperfections in their work.
Time measurement:
The Personal Software Process recommends that developers structure the way they
spend their time. The developer must measure and record the time they spend on
different activities during development.
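Purely as an illustration (the class and activity names below are hypothetical, not part of the official PSP forms), a minimal time log could be sketched in Java like this:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Minimal PSP-style time log: one entry per development activity.
public class TimeLog {
    // Each entry records what was done and when it started and ended.
    record Entry(String activity, Instant start, Instant end) {
        Duration duration() { return Duration.between(start, end); }
    }

    private final List<Entry> entries = new ArrayList<>();

    public void record(String activity, Instant start, Instant end) {
        entries.add(new Entry(activity, start, end));
    }

    // Total minutes spent on one activity, e.g. "design" or "coding".
    public long minutesSpentOn(String activity) {
        return entries.stream()
                .filter(e -> e.activity().equals(activity))
                .mapToLong(e -> e.duration().toMinutes())
                .sum();
    }
}
```

Summing the recorded minutes per activity is what lets the engineer compare actual time against the plan.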
PSP Planning :
The engineers should plan the project before developing because without planning a
high effort may be wasted on unimportant activities which may lead to a poor and
unsatisfactory quality of the result.
2. PSP 1 –
3. PSP 2 –
This level introduces personal quality management, and design and code
reviews.
4. PSP 3 –
The last level of the Personal Software Process is for the Personal process
evolution.
Process models
The classical waterfall model is the basic software development life cycle model. It is
very simple but idealistic. Earlier, this model was very popular, but nowadays it is
not used much. However, it is very important because all the other software development life cycle
models are based on the classical waterfall model.
The waterfall model is useful in situations where the project requirements are well-
defined and the project goals are clear. It is often used for large-scale projects with
long timelines, where there is little room for error and the project stakeholders need to
have a high level of confidence in the outcome.
2. Document-Driven: The waterfall model relies heavily on documentation to
ensure that the project is well-defined and the project team is working
towards a clear set of goals.
3. Quality Control: The waterfall model places a strong emphasis on quality
control and testing at each phase of the project, to ensure that the final
product meets the requirements and expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a rigorous planning
process, where the project scope, timelines, and deliverables are carefully
defined and monitored throughout the project lifecycle.
Overall, the waterfall model is used in situations where there is a need for a highly
structured and systematic approach to software development. It can be effective in
ensuring that large, complex projects are completed on time and within budget, with a
high level of quality and customer satisfaction.
2. Design: Once the requirements are understood, the design phase begins.
This involves creating a detailed design document that outlines the software
architecture, user interface, and system components.
3. Implementation: Once the design is complete, the code for the software is written
based on the design specifications. This phase also includes unit testing to
ensure that each component of the software is working as expected.
4. Maintenance: The final phase is maintenance, which involves fixing any issues
that arise after the software has been deployed and ensuring that it continues
to meet the requirements over time.
The classical waterfall model divides the life cycle into a set of phases. This model
considers that one phase can be started after the completion of the previous phase.
That is the output of one phase will be the input to the next phase. Thus the
development process can be considered as a sequential flow in the waterfall. Here the
phases do not overlap with each other. The different sequential phases of the classical
waterfall model are shown in the figure below.
The main goal of this phase is to determine whether it would be financially and
technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the
various possible strategies to solve the problem. These different identified solutions
are analyzed based on their benefits and drawbacks, and the best solution is chosen;
all the other phases are carried out as per this solution strategy.
The aim of the requirement analysis and specification phase is to understand the exact
requirements of the customer and document them properly. This phase consists of two
different activities.
● Requirement gathering and analysis: Firstly all the requirements
regarding the software are gathered from the customer and then the
gathered requirements are analyzed. The goal of the analysis part is to
remove incompleteness (an incomplete requirement is one in which some
parts of the actual requirements have been omitted) and inconsistencies (an
inconsistent requirement is one in which some part of the requirement
contradicts some other part).
● Requirement specification: These analyzed requirements are documented
in a software requirement specification (SRS) document. The SRS document
serves as a contract between the development team and customers. Any
future dispute between the customers and the developers can be settled by
examining the SRS document.
3. Design
The goal of this phase is to convert the requirements acquired in the SRS into a format
that can be coded in a programming language. It includes high-level and detailed
design as well as the overall software architecture. All of this effort is documented
in a Software Design Document (SDD).
In the coding phase, the software design is translated into source code using any
suitable programming language, and each designed module is coded. The aim of the
unit testing phase is to check whether each module is working properly or not.
Integration of different modules is undertaken soon after they have been coded and
unit tested. Integration of various modules is carried out incrementally over a number
of steps. During each integration step, previously planned modules are added to the
partially integrated system and the resultant system is tested. Finally, after all the
modules have been successfully integrated and tested, the full working system is
obtained and system testing is carried out on this.
System testing consists of three different kinds of testing activities as described
below.
6. Maintenance
Maintenance is the most important phase of a software life cycle. The effort spent on
maintenance is 60% of the total effort spent to develop a full software. There are
basically three types of maintenance.
First, a simple working system implementing only a few basic features is built and
then that is delivered to the customer. Then thereafter many successive iterations/
versions are implemented and delivered to the customer until the desired system is
released.
A, B, and C are modules of Software Products that are incrementally developed and
delivered.
Once the core features are fully developed, then these are refined to increase levels of
capabilities by adding new functions in Successive versions. Each incremental version
is usually developed using an iterative waterfall model of development.
As each successive version of the software is constructed and delivered, the feedback
of the customer is taken, and it is incorporated into the next version. Each version
of the software has more features than the previous ones.
After requirements gathering and specification, the requirements are split into
several versions. Starting with version 1, in each successive increment the next
version is constructed and then deployed at the customer site. After the last version
(version n), the complete software is deployed at the client site.
● Error reduction (core modules are used by the customer from the beginning
of the phase and then these are tested thoroughly).
● Uses divide and conquer for a breakdown of tasks.
● Lowers initial delivery cost.
● Incremental resource deployment.
Advantages-
Disadvantages-
Evolutionary model:
The evolutionary model is a combination of the iterative and incremental models of
the software development life cycle. Instead of delivering the system in a single
big-bang release, it is delivered incrementally over time. Some initial requirements
and architecture envisioning need to be done. It is better for software products that
have their feature sets redefined during development because of user feedback and
other factors. The evolutionary development model divides the development cycle into
smaller, incremental waterfall models in which users are able to get access to the
product at the end of each cycle. The users provide feedback on the product for the
planning stage of the next cycle, and the development team responds, often by
changing the product, plan, or process. Therefore, the software product evolves with
time. The other models have the disadvantage that the duration from the start of the
project to the delivery of a solution is very high. The evolutionary model solves
this problem with a different approach.
The evolutionary model suggests breaking down work into smaller chunks, prioritizing
them, and then delivering those chunks to the customer one by one. The number of
chunks is large, and each chunk corresponds to a delivery made to the customer. The
main advantage is that the customer's confidence increases, as he constantly gets
quantifiable goods or services from the beginning of the project with which to verify
and validate his requirements. The model also allows for changing requirements, as
all work is broken down into maintainable work chunks.
1. It is used in large projects where you can easily find modules for
incremental implementation. Evolutionary model is commonly used when
the customer wants to start using the core features instead of waiting for the
full software.
2. The evolutionary model is also used in object-oriented software development
because the system can easily be partitioned into units in terms of objects.
● Customer needs are clear and have been explained in depth to the developer team.
● There might be small changes required in separate parts, but not a major
change.
● As it requires time, there must be some time left before market constraints
take effect.
● Risk is high, with continuous targets to achieve and report to the customer
repeatedly.
● It is used when working on a new technology that requires time to
learn.
Advantages:
Disadvantages:
● Sometimes it is hard to divide the problem into several versions that would
be acceptable to the customer which can be incrementally implemented and
delivered.
UNIT - II
Software Requirements
Functional Requirements: These are the requirements that the end user specifically
demands as basic facilities that the system should offer. It can be a calculation, data
manipulation, business process, user interaction, or any other specific functionality
which defines what function a system is likely to perform. All these functionalities
need to be necessarily incorporated into the system as a part of the contract. These are
represented or stated in the form of input to be given to the system, the operation
performed and the output expected. They are basically the requirements stated by the
user which one can see directly in the final product, unlike the non-functional
requirements. For example, in a hospital management system, a doctor should be able
to retrieve the information of his patients. Each high-level functional requirement may
involve several interactions or dialogues between the system and the outside world. In
order to accurately describe the functional requirements, all scenarios must be
enumerated. There are many ways of expressing functional requirements e.g., natural
language, a structured or formatted language with no rigorous syntax and formal
specification language with proper syntax. Functional Requirements in Software
Engineering are also called Functional Specification.
Non-functional requirements: These are basically the quality constraints that the
system must satisfy according to the project contract. Non-functional requirements
are not related to the system functionality; rather, they define how the system should
perform. The priority or extent to which these factors are implemented varies from
one project to another. They are also called non-behavioral requirements. They
basically deal with issues like:
● Portability
● Security
● Maintainability
● Reliability
● Scalability
● Performance
● Reusability
● Flexibility
● Interface constraints
● Performance constraints: response time, security, storage space,
etc.
● Operating constraints
● Life cycle constraints: maintainability, portability, etc.
● Economic constraints
They are divided into two main categories: execution qualities like security and
usability, which are observable at run time, and evolution qualities like testability,
maintainability, extensibility, and scalability, which are embodied in the static
structure of the software system.
1. User requirements :
User requirements simply mean the needs of users that should be fulfilled by the
software system. They are documented in a User Requirement Document (URD).
Overall statements are generally written in natural language, plus a description of
the services the system provides and its operational constraints. A user requirement
is good if it is clear and short, results in increased overall quality, increases
productivity, is traceable, etc.
2. System Requirements :
System requirements simply mean the needs of the system to run smoothly and
efficiently. It is a structured document that gives a detailed description of system
functions, services, and operational constraints. It requires many hardware and
software resources. If these hardware and software resources are unavailable or
insufficient, it may result in system failure or cause problems during performance.
Between client and contractor, it is written as a contract to define all requirements
that need to be implemented to increase productivity.
Interface specification:
In software engineering, an interface specification refers to a formal description of
how different software components, modules, or systems interact and communicate
with each other. It outlines the rules, protocols, data formats, and methods that govern
the exchange of information and functionality between these components. Interface
specifications play a crucial role in ensuring that diverse parts of a software system
can work together harmoniously, even if they are developed by different teams or
vendors.
Interface specifications provide a clear and unambiguous way to define how software
elements should interact. They ensure that each component knows what to expect
when communicating with other components and that potential misunderstandings or
compatibility issues are minimized.
2. Data Formats: Specifies the structure and format of the data that is exchanged
between components. This could involve defining data types, encodings, and any
necessary metadata.
3. Communication Protocols: Outlines the rules for communication, such as the order
in which messages are exchanged, how acknowledgments are handled, and any error
recovery mechanisms.
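As a hedged sketch of how these elements come together (all names below are invented for the example, not taken from any real library), an interface specification in Java might fix the method contract, the data formats, and the error-handling protocol in one place:

```java
import java.math.BigDecimal;
import java.util.Currency;

// Hypothetical interface specification between an ordering component and
// any payment component. All names are invented for this illustration.
interface PaymentGateway {
    // Data format: amount as an exact decimal, currency as an ISO 4217 code.
    // Protocol: returns a Receipt on success; throws PaymentException on
    // failure so the caller can apply its own error-recovery policy.
    Receipt charge(String accountId, BigDecimal amount, Currency currency)
            throws PaymentException;
}

record Receipt(String transactionId, BigDecimal amountCharged) {}

class PaymentException extends Exception {
    PaymentException(String message) { super(message); }
}
```

Any team that implements PaymentGateway against this contract can be swapped in without the ordering component changing.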
Software documentation:
Software documentation is a written piece of text that is often accompanied by a
software program. This makes the life of all the members associated with the project
easier. It may contain anything from API documentation, build notes or just help
content. It is a very critical process in software development.
It is primarily an integral part of any software development method. Moreover,
software practitioners are typically concerned with the value, degree of usage, and
quality of the actual documentation during development and its maintenance
throughout the whole process. Motivated by the requirements of NovAtel Inc., a
world-leading company developing software in support of global navigation satellite
systems, and based on the results of a former systematic mapping study, the aim is a
better understanding of the usage and quality of various technical documents during
software development and maintenance.
For example, before the development of any software product requirements are
documented which is called Software Requirement Specification (SRS).
Another example can be a user manual that a user refers to for installing, using, and
providing maintenance to the software application/product.
Types of Software Documentation:
Purpose of Documentation:
Due to the growing importance of software requirements, the process of determining
them needs to be effective in order to achieve the desired results. The determination
of requirements is often governed by certain regulations and guidelines that are core
to obtaining a given goal.
All this implies that software requirements are expected to change due to the
ever-changing technology in the world. The fact that software knowledge obtained
through development has to be modified as user needs and the environment transform
is inevitable. Furthermore, software requirements ensure that there is a verification
and testing process; in conjunction with prototyping and conferences, there are also
focus groups and observations.
For a software engineer, reliable documentation is a must. The presence of
documentation helps keep track of all aspects of an application, and it improves the
quality of the software product. Its main focus areas are development, maintenance,
and knowledge transfer to other developers. Successful documentation makes
information easily accessible, provides a limited number of user entry points, helps
new users learn quickly, simplifies the product, and helps cut support costs.
For a programmer, reliable documentation is always a must; its presence keeps track
of all aspects of an application and helps in keeping the software updated.
● whether the overall objectives of the organization are covered and contributed to
by the system or not.
● whether the implementation of the system can be done using current technology or not.
● whether the system can be integrated with other systems which already exist.
1. Information assessment
2. Information collection
3. Report writing
4. General information
Along with this, a feasibility study helps in identifying the risk factors involved
in developing and deploying the system; planning for risk analysis also narrows the
business alternatives and enhances the success rate by analyzing the different
parameters associated with the proposed project development.
Requirements elicitation:
Requirements elicitation is the process of gathering and defining the requirements
for a software system. The goal of requirements elicitation is to ensure that the
software development process is based on a clear and comprehensive understanding of
the customer’s needs and requirements. Requirements elicitation involves the
identification, collection, analysis, and refinement of the requirements for a software
system. It is a critical part of the software development life cycle and is typically
performed at the beginning of the project. Requirements elicitation involves
stakeholders from different areas of the organization, including business owners, end-
users, and technical experts. The output of the requirements elicitation process is a set
of clear, concise, and well-defined requirements that serve as the basis for the design
and development of the software system.
Requirements elicitation is perhaps the most difficult, most error-prone, and most
communication-intensive aspect of software development. It can be successful only
through an effective customer-developer partnership. It is needed to know what the
users really need.
There are a number of requirements elicitation methods. A few of them are listed
below:
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach
The success of an elicitation technique used depends on the maturity of the analyst,
developers, users, and the customer involved.
1. Interviews:
2. Brainstorming Sessions:
● It is a group technique
● It is intended to generate lots of new ideas hence providing a platform to
share views
● A highly trained facilitator is required to handle group bias and group
conflicts.
● Every idea is documented so that everyone can see it.
● Finally, a document is prepared which consists of the list of requirements
and their priority, if possible.
Each participant prepares his/her list; the different lists are then combined,
redundant entries are eliminated, the team is divided into smaller sub-teams to
develop mini-specifications, and finally a draft of the specifications is written
down using all the inputs from the meeting.
1. Actor –
It is an external agent that lies outside the system but interacts with it in some
way. An actor may be a person, machine, etc. It is represented as a stick figure.
Actors can be primary actors or secondary actors.
● Primary actors – Require assistance from the system to achieve a goal.
● Secondary actors – Actors from which the system needs assistance.
2. Use cases –
They describe the sequence of interactions between actors and the system.
They capture who(actors) do what(interaction) with the system. A complete
set of use cases specifies all possible ways to use the system.
development team.
and verifying the requirements with the stakeholders to ensure that they
accurately represent their needs and requirements.
implemented.
What is Traceability?
This technique involves tracing the requirements throughout the entire software
development life cycle to ensure that they are being met and that any changes are
tracked and managed.
The output of requirements validation is the list of problems and agreed-on actions of
detected problems. The lists of problems indicate the problem detected during the
process of requirement validation. The list of agreed actions states the corrective
action that should be taken to fix the detected problem. There are several techniques
that are used either individually or in conjunction with other techniques to check
the entire system or parts of it:
● Dependence on the tool: The team should be well-trained on the tool and its
features, to avoid becoming dependent on the tool rather than on the requirement.
Requirements management:
There are several key activities that are involved in requirements management,
including:
● Change management: This involves managing changes to the requirements, including
identifying the source of the change, assessing the impact of the change, and
approving or rejecting the change.
● Version control: This involves keeping track of different versions of the
requirements document and other related artifacts.
● Traceability: This involves linking the requirements to other elements of the
development process, such as design, testing, and validation.
● Communication: This involves ensuring that the requirements are
communicated effectively to all stakeholders and that any changes or issues
are addressed in a timely manner.
● Monitoring and reporting: This involves monitoring the progress of the
development process and reporting on the status of the requirements.
ADVANTAGES AND DISADVANTAGES:
Advantages:
● Helps ensure that the software being developed meets the needs and
expectations of the stakeholders
●
Disadvantages:
System models are representations that help software engineers understand, analyze,
and communicate different aspects of a software system. They provide abstractions
and visualizations that aid in the design, development, and documentation of software
projects. There are various types of system models used in software engineering:
Context Models:
Behavioral Models:
Use Case Diagrams: Use case diagrams show the interactions between users (actors)
and the system through specific use cases. They help identify the system's
functionalities from the user's perspective.
Activity Diagrams: Activity diagrams represent the flow of activities and actions
within the system. They are especially useful for modeling business processes and
workflows.
Data models focus on representing the structure and organization of data within the
software system. They help in designing databases, defining data relationships, and
ensuring data integrity. Data models include:
● Class Diagrams: Class diagrams describe the structure of classes, their attributes,
methods, and relationships in an object-oriented system.
Object Models:
Structured Methods:
1.Requirement Analysis: Understand and refine the user requirements to identify the
functionalities, constraints, and qualities that the software needs to have.
2.Architectural Design: Create a high-level structure for the software system. This
involves defining components, their relationships, and the overall organization of the
system.
7.Code Generation: Translate the design specifications into actual code using
appropriate programming languages.
8.Testing and Verification: Validate the design by testing the software against the
requirements and design specifications. This includes unit testing, integration testing,
and more.
9.Design Review: Conduct design reviews to gather feedback from stakeholders and
ensure that the design meets the intended goals.
Design quality refers to the characteristics of a software design that determine its
effectiveness, maintainability, and overall value. A well-designed software system
exhibits certain qualities that contribute to its success throughout its lifecycle. Some
important aspects of design quality include:
8. Security and Reliability: The design should address security concerns and ensure
the reliability of the software under various conditions.
Design concepts:
Design concepts in software engineering refer to fundamental principles and
guidelines that help developers create effective, maintainable, and reliable software
solutions. These concepts provide a framework for making design decisions and
ensuring that the resulting software meets user requirements and quality standards.
Here are some key design concepts in software engineering:
6. **High Cohesion:** High cohesion means that elements within a module are
closely related and contribute to a single, well-defined purpose. Modules with high
cohesion are easier to understand and maintain.
10. **Liskov Substitution Principle (LSP):** The LSP states that objects of a
derived class should be able to replace objects of the base class without affecting
the correctness of the program. It ensures that inheritance hierarchies maintain
expected behavior.
11. **Interface Segregation Principle (ISP):** The ISP recommends that clients
should not be forced to depend on interfaces they don't use. It encourages
designing small, focused interfaces instead of large, all-encompassing ones.
12. **Dependency Inversion Principle (DIP):** The DIP states that high-level
modules should not depend on low-level modules; both should depend on
abstractions. It encourages the use of interfaces or abstract classes to decouple
components.
13. **Design Patterns:** Design patterns are reusable solutions to common design
problems. They provide well-established approaches for addressing various design
challenges and promote best practices.
14. **SOLID Principles:** The SOLID principles are a set of five design
principles (Single Responsibility, Open-Closed, Liskov Substitution, Interface
Segregation, Dependency Inversion) that guide the creation of maintainable and
extensible software.
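As an illustration of one of these principles, here is a minimal sketch of the Dependency Inversion Principle in Java (all class names are hypothetical):

```java
// The abstraction both sides depend on.
interface Storage {
    void save(String name, byte[] data);
}

// Low-level detail: one possible implementation of the abstraction.
class FileStorage implements Storage {
    public void save(String name, byte[] data) {
        // write to disk (omitted in this sketch)
    }
}

// High-level module: depends only on the Storage abstraction,
// never on a concrete low-level class.
class ReportService {
    private final Storage storage;

    ReportService(Storage storage) {  // dependency injected via constructor
        this.storage = storage;
    }

    void publish(String reportName, byte[] content) {
        storage.save(reportName, content);
    }
}
```

Because ReportService depends only on the Storage interface, a database-backed or in-memory implementation can be substituted without touching the high-level module.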
Applying these design concepts helps software engineers create systems that are
flexible, maintainable, and robust while minimizing the risk of introducing errors or
making the system unnecessarily complex.
Design model:
The design model in software engineering refers to a representation or blueprint that
describes the architecture, structure, and behavior of a software system. It provides a
visual and conceptual overview of how different components, modules, and
functionalities of the system are organized and interact with each other. The design
model serves as a guide for developers during the implementation phase and helps
ensure that the software meets its requirements while adhering to good design
practices.
There are several types of design models used in software engineering, each focusing
on a different aspect of the software system:
4. **Data Flow Models:** Data flow models depict the flow of data and
information within the system. They help visualize how data is processed,
transformed, and exchanged between components. Data flow diagrams (DFDs) and
entity-relationship diagrams (ERDs) are examples of data flow models.
5. **User Interface (UI) Design Models:** UI design models focus on the visual
and interactive aspects of the software's user interface. They outline how users will
interact with the system, including layout, navigation, and visual elements.
In the design phase of the Software Development Life Cycle, the software architecture
is defined and documented. So in this section we will clearly discuss one of the
significant elements of the Software Development Life Cycle (SDLC), i.e., the
Software Architecture.
Software Architecture :
Software architecture defines the fundamental organization of a system and, more
simply, defines a structured solution. It defines how the components of a software
system are assembled, their relationships, and the communication between them. It
serves as a blueprint for the software application and a development basis for the
developer team. Software architecture defines a list of things which results in
making many things easier in the software development process.
S.O.L.I.D PRINCIPLE
1. Single Responsibility –
Software should be divided into microservices in such a way that each one has a
single responsibility and there are no redundancies.
2. Open-Closed Principle –
Software entities should be open for extension but closed for modification.
Besides all this, software architecture is also important for many other factors,
like the quality, reliability, maintainability, supportability, and performance of
the software, and so on.
From the above it is clear how important a software architecture is for the
development of a software application. So a good software architecture is also
responsible for delivering a good quality software product.
Data architecture design is a set of standards composed of certain policies, rules,
and models which manage what type of data is collected, from where it is collected,
the arrangement of the collected data, and the storing, utilizing, and securing of
the data in systems and data warehouses for further analysis.
Data is one of the essential pillars of enterprise architecture through which it succeeds
in the execution of business strategy.
Data architecture also describes the type of data structures applied to manage data,
and it provides an easy way for data preprocessing. The data architecture is formed
by dividing it into three essential models, which are then combined:
● Conceptual model –
It is a business model which uses Entity Relationship (ER) model for
relation between entities and their attributes.
● Logical model –
It is a model where problems are represented in the form of logic, such as
rows and columns of data, classes, XML tags, and other DBMS techniques.
● Physical model –
Physical models hold the database design, such as which type of database
technology will be suitable for the architecture.
A data architect is responsible for all the design, creation, and deployment of data
architecture and defines how data is to be stored and retrieved; other decisions are
made by internal bodies.
● Business requirements –
These include factors such as the expansion of business, the performance of
system access, data management, transaction management, making use of raw
data by converting it into image files and records, and then storing it in
data warehouses. Data warehouses are the main aspect of storing transactions
in business.
● Business policies –
The policies are rules that are useful for describing the way of processing
data. These policies are made by internal organizational bodies and other
government agencies.
● Technology in use –
This includes using examples of previously completed data architecture
designs, as well as existing licensed software purchases and database
technology.
● Business economics –
Economic factors such as business growth and loss, interest rates, loans,
the condition of the market, and the overall cost will also have an effect
on design architecture.
● Data processing needs –
These include factors such as mining of the data, large continuous
transactions, database management, and other data preprocessing needs.
Data Management:
Architectural Patterns
An architectural pattern shows how a solution can be used to solve a recurring
problem. In other words, it reflects how code or components interact with each
other. Moreover, the architectural pattern describes the architectural style of our
system and provides solutions for the issues in our architectural style. Personally, I
prefer to define architectural patterns as a way to implement our architectural style.
For example: how do we separate the UI from the data module in our architectural
style? How do we integrate a third-party component with our system? How many tiers
will we have in our client-server architecture? Examples of architectural patterns are
microservices, message bus, service requester/consumer, MVC, MVVM,
microkernel, n-tier, domain-driven design, and presentation-abstraction-control.
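As a hedged illustration of one such pattern, here is a minimal MVC sketch in Java (the counter example and all names are invented for illustration):

```java
// Model: holds the application state.
class CounterModel {
    private int count;
    int getCount() { return count; }
    void increment() { count++; }
}

// View: renders the model's state; knows nothing about input handling.
class CounterView {
    void render(int count) { System.out.println("Count: " + count); }
}

// Controller: translates user input into model updates, then refreshes the view.
class CounterController {
    private final CounterModel model;
    private final CounterView view;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        this.view = view;
    }

    void onIncrementClicked() {
        model.increment();
        view.render(model.getCount());
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterController c = new CounterController(new CounterModel(), new CounterView());
        c.onIncrementClicked();  // prints "Count: 1"
        c.onIncrementClicked();  // prints "Count: 2"
    }
}
```

The point of the separation is that the model knows nothing about the view, so either side can be replaced independently.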
Design Patterns
Design patterns are accumulated best practices and experiences that software
professionals have used over the years to solve, by trial and error, the general
problems they faced during software development. The Gang of Four (GoF: Erich
Gamma, Richard Helm, Ralph Johnson, and John Vlissides) wrote a book in 1994 titled
"Design Patterns: Elements of Reusable Object-Oriented Software" in which they
suggested that design patterns are based on two main principles of object-oriented
design:
Also, they presented a design pattern set that contains 23 patterns, categorized
into three main sets:
1. Creational patterns:
Provide a way to create objects while hiding the creation logic. Thus, objects are
created without instantiating them directly with the "new" keyword, which gives the
flexibility to decide which objects need to be created for a given use case. The
creational design patterns are: Factory Method, Abstract Factory, Builder, Prototype,
and Singleton. (A minimal Factory Method sketch is shown after this list.)
2. Structural patterns:
Concerned with class and object composition. The structural design patterns are:
Adapter, Bridge, Composite, Decorator, Facade, Flyweight, and Proxy.
There are two more subsets of design patterns that can be added to these three
categories:
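As promised above, here is a minimal Factory Method sketch in Java (all class names are invented for illustration):

```java
// Product interface and two concrete products.
interface Document {
    String open();
}

class PdfDocument implements Document {
    public String open() { return "opening PDF"; }
}

class WordDocument implements Document {
    public String open() { return "opening Word document"; }
}

// Factory Method: creation logic is hidden behind one method, so callers
// never instantiate a concrete class with "new" themselves.
class DocumentFactory {
    static Document create(String type) {
        switch (type) {
            case "pdf":  return new PdfDocument();
            case "word": return new WordDocument();
            default:     throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}
```

A caller writes Document d = DocumentFactory.create("pdf"); and never touches a concrete class with "new".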
The architectural style is a 10,000-foot helicopter view of the system. It shows the
system design at the highest level of abstraction. It also shows the high-level
modules of the application and how these modules interact. On the other hand,
architectural patterns have a huge impact on system implementation, horizontally and
vertically. Finally, design patterns are used to solve localized issues during the
implementation of the software. They also have a lower impact on the code than
architectural patterns, since a design pattern is more concerned with a specific
portion of code implementation, such as initializing objects and communication
between objects.
Architectural Design:
Introduction: The software needs the architectural design to represent the design of
software. IEEE defines architectural design as “the process of defining a collection of
hardware and software components and their interfaces to establish the framework for
the development of a computer system.” The software that is built for computer-based
systems can exhibit one of these many architectural styles.
Each style will describe a system category that consists of :
The use of architectural styles is to establish a structure for all the components of the
system.
● A data store will reside at the center of this architecture and is accessed
frequently by the other components that update, add, delete or modify the
data present within the store.
● The figure illustrates a typical data-centered style. The client software
accesses a central repository. A variation of this approach transforms the
repository into a "blackboard" that sends notifications to client software
when data of interest to the client changes.
● This data-centered architecture promotes integrability. This means that
existing components can be changed and new client components can be added
to the architecture without concern about other clients.
● Data can be passed among clients using the blackboard mechanism (a minimal
sketch follows below).
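A minimal sketch of this notification idea, assuming hypothetical names (an illustration of the blackboard concept, not a full implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Clients register with the central repository and are notified
// whenever data of interest changes.
interface Client {
    void dataChanged(String key, String newValue);
}

class Blackboard {
    private final Map<String, String> data = new HashMap<>();
    private final List<Client> clients = new ArrayList<>();

    void register(Client client) { clients.add(client); }

    // Updating the repository triggers a notification to every registered client.
    void put(String key, String value) {
        data.put(key, value);
        for (Client c : clients) {
            c.dataChanged(key, value);
        }
    }

    String get(String key) { return data.get(key); }
}
```

New clients can be registered without changing the repository or the other clients, which is exactly the integrability property described above.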
● This kind of architecture is used when input data is transformed into output
data through a series of computational manipulative components.
● The figure represents a pipe-and-filter architecture, since it uses both
pipes and filters; it has a set of components, called filters, connected by
pipes.
● Pipes are used to transmit data from one component to the next.
● Each filter works independently and is designed to take data input of a
certain form and produce data output of a specified form for the next
filter. The filters don't require any knowledge of the workings of
neighboring filters.
● If the data flow degenerates into a single line of transforms, it is
termed batch sequential. This structure accepts a batch of data and then
applies a series of sequential components to transform it. (See the
pipe-and-filter sketch below.)
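As a sketch only (the filters and names are invented for illustration), a pipe-and-filter chain can be modeled in Java as a sequence of independent transformation functions:

```java
import java.util.List;
import java.util.function.Function;

// Each filter transforms its input and passes the result down the pipe;
// filters know nothing about each other.
public class Pipeline {
    public static void main(String[] args) {
        List<Function<String, String>> filters = List.of(
                String::trim,                    // filter 1: remove outer whitespace
                String::toLowerCase,             // filter 2: normalize case
                s -> s.replaceAll("\\s+", " ")   // filter 3: collapse inner whitespace
        );

        String data = "   Hello    WORLD  ";
        for (Function<String, String> filter : filters) {
            data = filter.apply(data);           // the "pipe": output feeds the next filter
        }
        System.out.println(data);                // prints "hello world"
    }
}
```

Because each filter only agrees on the data form (here, a String), filters can be added, removed, or reordered without modifying the others.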
Note – Language, Model, and Unified are the important aspects of UML as described
in the map above.
1. Language:
2. Model:
● It is a representation of a subject.
● It captures a set of ideas (known as abstractions) about its subject.
3. Unified:
A Conceptual Model:
A conceptual model of the language underlines the three major elements:
• The UML's building blocks
• The rules that dictate how those building blocks may be put together
• Some common mechanisms that apply throughout the UML
Once you understand these elements, you will be able to read and recognize the
models as well as create some of them.
Building Blocks:
The vocabulary of the UML encompasses three kinds of building blocks:
Things:
Things are the abstractions that are first-class citizens in a model; relationships tie
these things together; diagrams group interesting collections of things.
There are 4 kinds of things in the UML:
1. Structural things
2. Behavioral things
3. Grouping things
4. Annotational things
These things are the basic object-oriented building blocks of the UML. You use them
to write well-formed models.
Relationships:
There are 4 kinds of relationships in the UML:
1. Dependency
2. Association
3. Generalization
4. Realization
These relationships are the basic relational building blocks of the UML.
Diagrams:
It is the graphical presentation of a set of elements. It is rendered as a connected graph
of vertices (things) and arcs (relationships).
1. Class diagram
2. Object diagram
3. Use case diagram
4. Sequence diagram
5. Collaboration diagram
6. Statechart diagram
7. Activity diagram
Rules:
The UML has a number of rules that specify what a well-formed model should look
like. A well-formed model is one that is semantically self-consistent and in harmony
with all its related models.
The UML has semantic rules for:
Common Mechanisms:
The UML is made simpler by the four common mechanisms. They are as
follows:
1. Specifications
2. Adornments
3. Common divisions
4. Extensibility mechanisms
Class Diagrams:
UML class diagrams: Class diagrams are the main building blocks of every object-
oriented method. The class diagram can be used to show the classes, relationships,
interfaces, associations, and collaborations. UML is standardized in class diagrams.
Since classes are the building blocks of an application based on OOP, the class
diagram has an appropriate structure to represent the classes, inheritance,
relationships, and everything else that OOP has in its context. It describes various
kinds of objects and the static relationships between them.
The main purpose to use class diagrams are:
● This is the only UML diagram that can appropriately depict various aspects
of the OOP concept.
● Proper design and analysis of applications can be faster and more
efficient.
● It is the base for deployment and component diagrams.
There are several software tools available, online and offline, to draw these
diagrams, like Edraw Max, Lucidchart, etc. There are several points to be kept in
focus while drawing the class diagram. These can be considered its syntax:
Below is an example of an Animal class (parent) having two child classes, Dog and
Cat, both having objects d1 and c1 inheriting the properties of the parent class.
A corresponding Java sketch follows.
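A minimal Java rendering of that diagram (a sketch for illustration):

```java
// The same structure the class diagram describes: Animal is the
// parent class; Dog and Cat inherit its properties.
class Animal {
    String name;
    void eat() { System.out.println(name + " is eating"); }
}

class Dog extends Animal {
    void bark() { System.out.println(name + " says woof"); }
}

class Cat extends Animal {
    void meow() { System.out.println(name + " says meow"); }
}

public class ClassDiagramDemo {
    public static void main(String[] args) {
        Dog d1 = new Dog();   // object d1 of child class Dog
        Cat c1 = new Cat();   // object c1 of child class Cat
        d1.name = "Rex";
        c1.name = "Whiskers";
        d1.eat();             // inherited from Animal
        c1.eat();             // inherited from Animal
    }
}
```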
Sequence Diagrams:
Sequence Diagrams – A sequence diagram simply depicts interaction between
objects in a sequential order i.e. the order in which these interactions take place. We
can also use the terms event diagrams or event scenarios to refer to a sequence
diagram. Sequence diagrams describe how and in what order the objects in a system
function. These diagrams are widely used by businessmen and software developers to
document and understand requirements for new and existing systems.
An actor in a sequence diagram interacts with the system and its objects. It is
important to note here that an actor lies outside the scope of the system. Objects
communicate by exchanging messages shown using arrows; lifelines and messages form
the core of a sequence diagram.
● Reply Message – A reply message shows the message being sent from the
receiver back to the sender. (Figure – reply message) For example, consider
the scenario where the device requests a photo from the user. Here the
message which shows the photo being sent is a reply message. (Figure – a
scenario where a reply message is used)
● Found Message – A found message is used to represent a scenario where an
unknown source sends the message. It is represented using an arrow directed
towards a lifeline from an end point. For example, consider the scenario of
a hardware failure. (Figure – found message) It can be due to multiple
reasons, and we are not certain as to what caused the hardware failure.
(Figure – a scenario where a found message is used)
● Lost Message – A lost message is used to represent a scenario where the
recipient is not known to the system. It is represented using an arrow
directed towards an end point from a lifeline. For example, consider a
scenario where a warning is generated. (Figure – lost message) The warning
might be generated for the user or for other software/objects that the
lifeline is interacting with. Since the destination is not known beforehand,
we use the lost message symbol. (Figure – a scenario where a lost message
is used)
Collaboration diagrams portray the dynamic behaviour of a particular use case and
define the role of each object. To create one, first identify the structural elements
required to carry out the functionality of an interaction, then build a model using
the relationships between those elements. Several vendors offer software for creating
and editing collaboration diagrams. The four major components of a collaboration
diagram are:
1. Objects. These are the instances of classes that participate in the
interaction, shown as rectangles with naming labels.
2. Actors. These are instances that invoke the interaction in the diagram.
Each actor has a name and a role, with one actor initiating the entire use
case.
3. Links. These connect objects with actors and are depicted using a solid
line between two elements. Each link is an instance where messages can
be sent.
4. Messages. These are depicted as labeled arrows between linked elements;
they convey information about the activity and can include the sequence
number.
The most important objects are placed in the centre of the diagram, with all other
participating objects branching off. After all objects are placed, links and messages
are added. As the number of objects and messages grows, a collaboration diagram can
become difficult to read and use.
In UML, the two types of interaction diagrams are collaboration and sequence
diagrams. While both types use similar information, they display it in separate ways.
Collaboration diagrams focus on the relationships between objects and their
interactions. On the other hand, sequence diagrams focus on the order of messages
that flow between objects. In most scenarios, a single figure is not sufficient to
describe the behavior of a system, and both figures are required.
Use Case Diagram
Use case diagrams are referred to as behavior models or diagrams. They simply
describe and display the relation or interaction between the users or customers and
the providers of the application service or the system. They describe the different
actions that a system performs, in collaboration with one or more users of the
system, to achieve something. Use case diagrams are used a lot nowadays to manage
systems.
Here, we will understand the design of the use case diagram for the library
management system. Some scenarios of the system are as follows:
Example –
Following is a component diagram for the 'Online Course Registration' system. This
diagram shows the conceptual view of the server-side components.
Advantages :
Disadvantages :
Strategies
Software testing is a critical phase in the software development lifecycle that aims to
identify defects, ensure quality, and validate that the software meets its requirements.
A strategic approach to software testing involves planning, designing, and executing
tests systematically to achieve optimal results. It ensures that testing efforts are well-
organized, efficient, and aligned with project goals.
2. Test Planning: Develop a comprehensive test plan that outlines testing goals,
scope, resources, schedules, and test objectives.
4. Test Design: Design test cases that are effective, efficient, and cover various
use cases. Prioritize tests based on risk and criticality.
5. Automation:Utilize test automation to execute repetitive and time-consuming
tests, freeing up manual testers to focus on exploratory
testing.
9. Test Data Management: Plan and manage test data to cover a wide range of
scenarios and ensure realistic testing.
Unit Testing : The unit test focuses on the internal processing logic and data
structures within the boundaries of a component. This type of testing can be
conducted in parallel for multiple components.
Unit-test considerations:-
1. The module interface is tested to ensure proper information flows (into and
out).
2. Local data structures are examined to ensure that data stored temporarily
maintains its integrity during execution.
3. All independent paths are exercised to ensure that all statements in a module have
been executed at least once.
4. Boundary conditions are tested to ensure that the module operates properly at
boundaries. Software often fails at its boundaries.
5. All error-handling paths are tested. If data do not enter and exit properly, all other
tests are controversial.
Among the potential errors that should be tested when error handling is evaluated are:
(5) Error description does not provide enough information to assist in the location of
the cause of the error
Unit-test procedures:-
The design of unit tests can occur before coding begins or after source code has been
generated. Because a component is not a stand-alone program, driver and/or stub
software must often be developed for each unit test. A driver is nothing more than a
"main program" that accepts test case data, passes such data to the component (to be
tested), and prints relevant results. Stubs serve to replace modules that are
subordinate to (invoked by) the component to be tested. A stub may do minimal data
manipulation, print verification of entry, and return control to the module
undergoing testing. Drivers and stubs represent testing "overhead": both are software
that must be written (formal design is not commonly applied) but that is not
delivered with the final software product.
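As a hedged illustration of drivers and stubs (all names below are hypothetical), consider a component whose subordinate pricing module is replaced by a stub, exercised by a simple driver:

```java
// The subordinate module the component under test normally calls.
interface PricingModule {
    double basePrice(String itemId);
}

// Stub: replaces the real subordinate module and returns canned data.
class PricingStub implements PricingModule {
    public double basePrice(String itemId) {
        System.out.println("stub entered for " + itemId);  // verification of entry
        return 100.0;                                       // minimal data manipulation
    }
}

// The component under test.
class DiscountComponent {
    private final PricingModule pricing;
    DiscountComponent(PricingModule pricing) { this.pricing = pricing; }
    double computeDiscount(String itemId, double rate) {
        return pricing.basePrice(itemId) * rate;
    }
}

// Driver: a "main program" that feeds test case data and prints results.
public class DiscountDriver {
    public static void main(String[] args) {
        DiscountComponent component = new DiscountComponent(new PricingStub());
        double result = component.computeDiscount("A-1", 0.1);
        System.out.println("expected 10.0, got " + result);
    }
}
```

Neither DiscountDriver nor PricingStub ships with the product; they exist only so the component can be tested in isolation.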
Integration Testing: Data can be lost across an interface; one component
can have an inadvertent, adverse effect on another; sub functions, when combined,
may not produce the desired major function. The objective of Integration testing is to
take unit-tested components and build a program structure that has been dictated by
design. The program is constructed and tested in small increments, where errors are
easier to isolate and correct. A number of different incremental integration strategies
are:
a) Top-down integration testing is an incremental approach to construction of the
software architecture. Modules are integrated by moving downward through the
control hierarchy. Modules subordinate to the main control module are incorporated
into the structure in either a depth-first or breadth-first manner. The integration
process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down integration strategy verifies major control or decision points early in
the test process. Stubs replace low-level modules at the beginning of top-down
testing; therefore, no significant data can flow upward in the program structure. As a
tester, you are left with three choices:
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.
b) Bottom-up integration testing begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program structure). Because components
are integrated from the bottom up, the functionality provided by components
subordinate to a given level is always available and the need for stubs is eliminated.
A bottom-up integration
strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input and
output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
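A small Python sketch of steps 1 and 2, with hypothetical low-level components combined into a parsing cluster and exercised by a driver:

def tokenize(line):
    # Low-level component 1 (hypothetical).
    return line.strip().split(",")

def to_record(tokens):
    # Low-level component 2 (hypothetical).
    return {"id": int(tokens[0]), "name": tokens[1]}

def parse_line(line):
    # The cluster: low-level components combined to perform one
    # specific software sub-function (parsing one CSV line).
    return to_record(tokenize(line))

def cluster_driver():
    # Driver: coordinates test-case input and output for the cluster.
    for case in ["1,alice", " 2,bob "]:
        print(repr(case), "->", parse_line(case))

cluster_driver()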
c) Smoke testing is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for time-critical
projects, allowing the software team to assess the project on a frequent basis. In
essence, the smoke-testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a
build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product
functions.
2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover “showstopper”
errors that have the highest likelihood of throwing the software project behind
schedule.
3. The build is integrated with other builds, and the entire product is smoke tested
daily. The integration approach may be top down or bottom up.
Smoke testing provides a number of benefits when it is applied on complex,
time-critical software projects:
● Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early, reducing the
likelihood of serious schedule impact.
● The quality of the end product is improved. Smoke testing is likely to uncover
functional errors as well as architectural and component-level design errors.
● Error diagnosis and correction are simplified. Errors uncovered during smoke
testing are likely to be associated with "new software increments", that is, the
software that has just been added to the build(s) is a probable cause of a newly
discovered error.
● Progress is easier to assess. With each passing day, more of the software has been
integrated and more has been demonstrated to work. This improves team morale and
gives managers a good indication that progress is being made.
Strategic options:
The major disadvantage of the top-down approach is the need for stubs and the
attendant testing difficulties that can be associated with them. The major disadvantage
of bottom-up integration is that "the program as an entity does not exist until the last
module is added".
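To illustrate the smoke-testing activities described above, here is a hedged sketch of a daily smoke suite in Python; the application modules and functions named are assumptions, not real APIs:

# smoke_test.py - run against each daily build; any failure flags the
# build as broken. Module and function names below are hypothetical.
import importlib

def test_build_components_load():
    # Showstopper check 1: every engineered component in the build imports.
    for name in ["app.db", "app.api", "app.reports"]:
        importlib.import_module(name)

def test_core_product_function():
    # Showstopper check 2: the product's central function works end to end.
    from app.api import create_order, fetch_order
    order_id = create_order(item="widget", qty=1)
    assert fetch_order(order_id)["item"] == "widget"

The suite is deliberately small: it covers only "showstopper" behaviour so it can be run against every build without slowing the team down.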
Software Testing:
Software Testing can be majorly classified into two categories:
1. Black Box Testing is a software testing method in which the internal structure,
design, and implementation of the item being tested are not known to the tester.
2. White Box Testing is a software testing method in which the internal structure,
design, and implementation of the item being tested are known to the tester.
Black box testing and white box testing are two different approaches to
software testing, and their differences are as
follows:
Black box testing is a testing technique in which the internal workings of the software
are not known to the tester; the tester focuses only on the inputs and outputs of the
software. White box testing, in contrast, is a testing technique in which the tester has
knowledge of the internal workings of the software and can test individual code
snippets, algorithms, and methods.
Testing objectives: Black box testing is mainly focused on testing the functionality of
the software, ensuring that it meets the requirements and specifications. White box
testing is mainly focused on ensuring that the internal code of the software is correct
and efficient.
Knowledge level: Black box testing does not require any knowledge of the internal
workings of the software, and can be performed by testers who are not familiar with
programming languages. White box testing requires knowledge of programming
languages, software architecture and design
patterns.
Scope: Black box testing is generally used for testing the software at the functional
level. White box testing is used for testing the software at the unit level, integration
level and system level.
Black box testing is easy to use, requires no programming knowledge and is effective
in detecting functional issues. However, it may miss some important internal defects
that are not related to functionality. White box testing is effective in detecting internal
defects, and ensures that the code is efficient and maintainable. However, it requires
programming knowledge and can be
time-consuming.
In conclusion, both black box testing and white box testing are important for software
testing, and the choice of approach depends on the testing
objectives, the testing stage, and the available resources.
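To make the contrast concrete, here is a minimal Python sketch (the grade function is hypothetical): the black-box test is derived only from the specification, while the white-box test is written with knowledge of the internal branches:

def grade(score):
    # Function under test: contains one two-way decision.
    if score >= 50:
        return "pass"
    return "fail"

def test_black_box():
    # Black box: derived purely from the specification ("a score of 50
    # or more passes"), with no knowledge of the implementation.
    assert grade(75) == "pass"
    assert grade(20) == "fail"

def test_white_box():
    # White box: written by reading the code, choosing inputs that
    # force each branch of the 'score >= 50' decision to execute.
    assert grade(50) == "pass"   # condition true at its boundary
    assert grade(49) == "fail"   # condition false

test_black_box()
test_white_box()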
Differences between Black Box Testing vs White Box Testing:
● Black box testing can be initiated based on the requirement specifications
document, whereas white box testing is started after a detailed design document is
available.
Verification Testing:
Verification is also known as static testing, where we ensure that "we are building the
product right." It checks that the work products of each development phase fulfil the
requirements given by the client.
Validation Testing:
Validation testing is testing in which the tester performs functional and non-functional
testing. Here, functional testing includes Unit Testing (UT), Integration Testing (IT)
and System Testing (ST), and non-functional testing includes User Acceptance
Testing (UAT).
Validation testing is also known as dynamic testing, where we ensure that "we have
built the right product." It also checks that the software meets the business needs of
the client.
Note: The verification and validation processes are carried out under the V-model of
the software development life cycle.
Differences between Verification and Validation:
● Verification evaluates the intermediate work-products (not the final product) of a
development cycle, whereas validation evaluates the software during or at the end of
the development cycle; both decide whether the software follows the requirements.
● The execution of code does not happen in verification testing; in validation testing,
the execution of code happens.
● In verification testing, we can find bugs early in the development phase of the
product; in validation testing, we find those bugs which are not caught in the
verification process.
● Verification testing is executed by the quality assurance team to make sure that the
product is developed according to the customers' requirements; validation testing is
executed by the testing team to test the application.
● Verification is done before validation testing; after verification testing, validation
testing takes place.
● In verification, we check whether the inputs lead to the expected outputs or not; in
validation, we check whether the user accepts the product or not.
System Testing:
INTRODUCTION:
System testing is testing performed on a complete,
integrated system to evaluate the compliance of the system with its specified
requirements. By contrast, the goal of integration testing is to detect any irregularity
between the units that are integrated together; system testing detects defects within
both the integrated units and the whole system. The result of system testing is the
observed behaviour of a component or a system when it is tested. System testing is
carried out on the whole system in the context of system requirement specifications,
functional requirement specifications, or both. It tests the design and behaviour of the
system against the expectations of the customer, and it is performed to test the system
beyond the bounds mentioned in the software requirements specification (SRS).
System testing is a form of black-box testing, performed after integration testing and
before acceptance testing. Tools commonly used for system testing include:
1. JMeter
2. Galen Framework
3. Selenium
Other widely used system-testing tools are:
1. HP Quality Center/ALM
2. IBM Rational Quality Manager
3. Microsoft Test Manager
4. Selenium
5. Appium
6. LoadRunner
7. Gatling
8. JMeter
9. Apache JServ
10. SoapUI
Note: The choice of tool depends on various factors like the technology
used, the size of the project, the budget, and the testing
requirements.
2. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve
bugs.
5. System Testing: This involves testing the entire software system to identify
bugs or errors.
7. Logging: This involves recording events and messages related to the software
system, which can help in identifying and resolving bugs.
It is important to note that debugging is an iterative process, and it may take multiple
attempts to identify and resolve all bugs in a software system. Additionally, it is
important to have a well-defined process in place for reporting and tracking bugs, so
that they can be effectively managed and resolved.
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing, and removing errors. This
activity begins after the software fails to execute properly and concludes by solving
the problem and successfully testing the software. It is considered to be an extremely
complex and tedious task because errors need to be resolved at all stages of
debugging.
Later, the person performing debugging may suspect a cause, design a test case to
help validate that suspicion and work toward error correction in an
iterative fashion.
Debugging Approaches/Strategies:
1. Brute Force: Study the system for a longer duration in order to understand
the system; this helps the debugger construct different representations of the
program to be debugged.
2. Backtracking: Trace the program backward from the location of the failure
message in order to identify the region of faulty code. A detailed study of the
region is conducted to find the cause of defects.
4. Using past experience: Debug the software using past experience with similar
problems.
9. Logging and Tracing: Use logging and tracing tools to identify the sequence of
events that led to the error.
10. Automated Debugging: Use automated debugging tools and techniques to
assist in the debugging process. These tools can include static and dynamic
analysis tools, as well as tools that use machine learning and artificial
intelligence to identify errors and suggest fixes.
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other programs.
A lot of public domain software like gdb and dbx are available for debugging. They
offer console-based command-line interfaces. Examples of automated debugging tools
include code-based tracers, profilers, interpreters,
etc. Some of the widely used debuggers are:
● Radare2
● WinDbg
● Valgrind
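As an example of console-based debugging in the same spirit as the tools above, Python's standard debugger pdb can be attached at a suspect location; the failing function here is a contrived illustration:

import pdb

def average(values):
    total = sum(values)
    # Breakpoint just before the suspect statement: at the (Pdb) prompt
    # one can print 'total' and 'values' and step through line by line.
    pdb.set_trace()
    return total / len(values)   # raises ZeroDivisionError for []

average([])   # reproduces the failure under the debugger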
Debugging is different from testing. Testing focuses on finding bugs and errors,
whereas debugging starts after a bug has been identified in the software. Testing is
used to ensure that the program does what it is supposed to do with a certain
minimum success rate. Testing can be manual or automated, and there are several
different types of testing: unit testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by
some automated tools available but is more of a manual process as every bug is
different and requires a different technique, unlike a pre-defined testing mechanism.
Advantages of Debugging:
● Reduced development cost: by identifying and fixing bugs early in
the development process, it can save time and resources that would
otherwise be spent on fixing bugs later in the development cycle.
● Easier maintenance: it becomes easy to identify and fix bugs that would have
been caused by changes to the software.
● Better compliance: it helps ensure that the software meets its requirements
and specifications.
Disadvantages of Debugging:
While debugging is an important aspect of software engineering, there are also some
disadvantages to consider:
1. Time-consuming: debugging can take a long time, especially
if the bug is difficult to find or reproduce. This can cause delays in the
development process and add to the overall cost of the project.
2. Requires specialized skills: debugging can be challenging, particularly
for developers who are not familiar with the tools and techniques used in
debugging.
6. Limited insight: In some cases, debugging tools can only provide limited
insight into the root cause of an issue and may require additional investigation.
Product metrics:
Software Quality
The quality of a software product is defined in terms of its fitness of purpose; that is, a
quality product does precisely what the users want it to do. For software products,
fitness of purpose is generally explained in terms of satisfaction of the requirements
laid down in the SRS document. Although "fitness of purpose" is a satisfactory
interpretation of quality for many devices such as a car, a table fan, or a grinding
machine, for software products "fitness of purpose" is not a wholly satisfactory
definition of quality.
The modern view associates the quality of a software product with several quality
factors such as the following:
Portability: A software device is said to be portable, if it can be freely made to work in
various operating system environments, in multiple machines, with other software products,
etc.
Usability: A software product has better usability if various categories of users can easily
invoke the functions of the product.
Reusability: A software product has excellent reusability if different modules of the product
can quickly be reused to develop new products.
● Project Auditing
● Review of the quality system
● It helps in the development of methods and guidelines
Evolution of Quality Management System
Quality systems have evolved over the past several years. The evolution of a
Quality Management System is a four-step process.
The main task of quality control is to detect defective devices and it also helps in
finding the cause that leads to the defect. It also helps in the correction of bugs.
Total Quality Management(TQM) checks and assures that all the procedures must be
continuously improved regularly through process measurements.
1. Structural Complexity – S(i) = fout(i)^2, the square of the fan-out of module i.
2. Data Complexity – D(i) = v(i) / [fout(i) + 1], where v(i) is the number of input and
output variables passed to and from module i.
3. System Complexity – C(i) = S(i) + D(i), the sum of structural and data complexity.
Cyclomatic complexity: V(G) = E - N + 2, where E is the number of edges and N is
the number of nodes in the program's flow graph.
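As a worked example (the function and the flow-graph counts below are for this sketch only), consider:

def classify(n):
    if n < 0:          # predicate node 1
        return "negative"
    elif n == 0:       # predicate node 2
        return "zero"
    return "positive"

One way of drawing its flow graph has N = 6 nodes and E = 7 edges, giving V(G) = 7 - 6 + 2 = 3; equivalently, with P = 2 predicate nodes, V(G) = P + 1 = 3. There are therefore three independent paths, so at least three test cases (for example n = -1, n = 0 and n = 1) are needed to execute every statement at least once.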
Metrics:
Software metrics will be useful only if they are characterized effectively and validated
so that their worth is proven. There are 4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
Metrics help in tracking the progress of the product, tracing risks and uncovering
prospective problem areas, and they improve the ability to measure and manage
productivity. Project metrics typically cover:
● Number of software developers
● Staffing patterns over the life cycle of software
● Cost and schedule
● Productivity
Software measurement follows a five-step process:
1. Formulation: The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
2. Collection: The mechanism used to accumulate the data required to derive the
formulated metrics.
3. Analysis: The computation of metrics and the application of mathematical tools.
4. Interpretation: The evaluation of metrics resulting in insight into the quality of the
representation.
5. Feedback: Recommendations derived from the interpretation of product metrics
and transmitted to the software team.
In software engineering, Software Quality Assurance (SQA) assures the quality of the
software. A set of SQA activities is continuously applied throughout the software
process. Software quality is measured based on some software quality metrics:
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
1. Code Quality – Code quality metrics measure the quality of code used for
software project development. Maintaining the software code quality by writing Bug-
free and semantically correct code is very important for good software project
development. In code quality, both quantitative metrics like the number of lines,
complexity, functions, and rate of bug generation, and qualitative metrics like
readability, code clarity, efficiency, and maintainability are measured (a small metric
sketch follows this list).
8. Security – Security metrics measure how secure the software is. In the age of
cyber terrorism, security is the most essential part of every software. Security assures
that there are no unauthorized changes, no fear of cyber attacks, etc when the software
product is in use by the end-user.
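As promised in the code-quality item above, here is a tiny, hedged sketch of the quantitative side: a Python script that computes two simple metrics (non-blank lines of code and number of function definitions) for a source file. Real code-quality tooling measures far more than this:

import ast

def quantitative_metrics(path):
    # Computes two simple quantitative code-quality metrics for a
    # Python source file: non-blank lines of code and function count.
    with open(path) as f:
        source = f.read()
    loc = sum(1 for line in source.splitlines() if line.strip())
    functions = sum(isinstance(node, ast.FunctionDef)
                    for node in ast.walk(ast.parse(source)))
    return {"loc": loc, "functions": functions}

print(quantitative_metrics(__file__))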
Risk management
1. Reactive RCA :
The main question that arises in reactive RCA is "What went wrong?". Before the
root cause of a failure or defect can be investigated or identified, the failure must
already have occurred; one can only identify the root cause and perform the analysis
once a problem or failure causing malfunctioning in the system has occurred.
Reactive RCA is a root cause analysis that is performed after the occurrence of the
failure or defect.
It is done to implement controls that reduce the impact and severity of a defect that
has occurred. It is also known as reactive risk management: it reacts quickly as soon
as a problem occurs by simply treating the symptoms. RCA is generally reactive, but
it has the potential to be proactive; RCA is reactive at first and can only become
proactive if one also addresses and identifies the small things that can cause a
problem and exposes the hidden causes of the problem.
Advantages :
Disadvantages :
2. Proactive RCA :
The main question that arises in proactive RCA is “What could go wrong?”.
RCA can also be used proactively to mitigate failure or risk. The main importance of
RCA can be seen when it is applied to events that have not occurred yet. Proactive
RCA is a root cause analysis that is performed before any occurrence of failure or
defect. It is done to implement controls that prevent the defect from occurring. As
both reactive and proactive RCA are important, one should move from reactive to
proactive RCA.
It is better to prevent issues from occurring than to correct them after they occur; in
simple words, prevention is better than correction. Here, preventive action is
considered proactive and corrective action is considered reactive. Proactive RCA is
also known as proactive risk management: it identifies the root causes of a problem in
order to eliminate it from reoccurring. With the help of proactive RCA, we can
identify the root causes that lead to the occurrence of a problem, failure, or defect;
after knowing these, we can take various measures and implement actions to prevent
those causes from occurring.
Advantages :
Disadvantages :
Risk has two characteristics:
● Uncertainty – the risk may or may not happen, which means there are no 100%
certain risks.
● Loss – if the risk occurs in reality, undesirable results or losses will occur.
● Risk Identification
● Risk analysis
● Risk Planning
● Risk Monitoring
A software project can be affected by a large variety of risks. In order to be able to
systematically identify the important risks which might affect a software project, it is
necessary to categorize risks into different classes. The project manager can then
examine which risks from each class are relevant to the project.
There are mainly 3 classes of risks that may affect a software project:
1. Project Risks:
2. Technical Risks:
3. Business Risks:
This type of risk embodies the risks of building an excellent product that nobody
wants, losing budgetary or personnel commitments, etc.
● What if the project cost escalates and overshoots what was estimated? –
Project Risk
● What if the mobile phones that are developed become too bulky in size to
conveniently carry? Business Risk
● What if call hand-off between satellites becomes too difficult to implement?
Technical Risk
2. Brainstorming – This technique provides a free and open approach that usually
encourages everyone on the project team to participate. It results in a greater sense
of ownership of project risks, and the team is generally more committed to
managing risks for the given time period of the project. It is a creative technique
for gathering risks spontaneously from team members, who identify and
determine risks in a 'no wrong answer' environment. The technique also gives
team members the opportunity to build on each other's ideas, and it is used to
determine the best possible solution to the problems and issues that arise.
Introduction:
Risk projection in software engineering involves assessing potential risks that could
impact the success of a software project in the future. It goes beyond identifying
current risks and aims to predict how those risks might evolve over time. By
anticipating the progression of risks, development teams can take proactive measures
to mitigate or manage them effectively.
1.Identify Current Risks: Start by identifying and analyzing existing risks that
could impact the project. These risks could include technical challenges, scope
changes, resource constraints, and more.
3. Determine Risk Trends: Study historical data and patterns to identify how
risks have evolved in similar past projects. This can provide insights into how risks
might progress in the current project.
4. Consider External Influences: Take into account external factors such as
market trends, technological advancements, regulatory changes, and economic shifts
that could influence the project's risks.
8. Update Risk Management Plan: Update the risk management plan based
on the projected risks and strategies. Ensure that the plan remains relevant and
adaptable to changing circumstances.
5. Enhanced Control: Teams gain better control over the project by anticipating
challenges and having strategies in place to address them.
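Projected risks are often quantified as risk exposure, RE = P x C, where P is the probability that the risk occurs and C is the cost to the project if it does. A small Python sketch with purely hypothetical figures:

def risk_exposure(probability, cost):
    # RE = P x C: the expected cost contribution of a single risk.
    return probability * cost

# Hypothetical projected risks: (description, probability, cost if it occurs)
projected_risks = [
    ("Key staff leave mid-project", 0.30, 48000.0),
    ("Third-party API changes",     0.50, 12000.0),
]
for name, p, c in projected_risks:
    print(name, "-> risk exposure =", risk_exposure(p, c))

Recomputing RE as probabilities and costs are revised over time is one simple way to track how projected risks are trending.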
Introduction:
5. Data Collection:Gather data and feedback from the project team, stakeholders,
and other relevant sources to refine the understanding of each
risk.
In some software teams, risk is documented with the help of a Risk Information Sheet
(RIS). This RIS is controlled by using a database system for easier management of
information, i.e., creation, priority ordering, searching, and other analysis. After
documentation of RMMM and start of a project, risk mitigation and monitoring steps
will start.
Risk Mitigation :
It is an activity used to avoid problems (Risk Avoidance). Steps for
mitigating the risks as follows.
Risk Monitoring :
It is an activity used for project tracking.
It has the following primary objectives as follows.
Example:
Let us understand RMMM with the help of an example of high staff turnover.
Risk Mitigation:
To mitigate this risk, project management must develop a strategy for reducing
turnover. The possible steps to be taken are:
● Meet the current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
● Mitigate those causes that are under our control before the project
starts.
● Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.
● Organize project teams so that information about each development
activity is widely dispersed.
● Define documentation standards and establish mechanisms to ensure that
documents are developed in a timely manner.
● Assign a backup staff member for every critical technologist.
Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager
monitors factors that may provide an indication of whether the risk is becoming more
or less likely. In the case of high staff turnover, the following factors can be
monitored:
Risk Management:
Risk management and contingency planning assumes that mitigation efforts have
failed and that the risk has become a reality. Continuing the example, the project is
well underway, and a number of people announce that they will be leaving. If the
mitigation strategy has been followed, backup is available, information is
documented, and knowledge has been dispersed across the team. In addition, the
project manager may temporarily refocus resources (and readjust the project
schedule) to those functions that are fully staffed, enabling newcomers who must be
added to the team to "get up to speed".
Drawbacks of RMMM:
Quality Management:
Generally, the quality of software is verified by a third-party organization, such as an
international standards organization.
● software’s portability
● software’s usability
● software’s reusability
● software’s correctness
● software’s maintainability
● software’s error control
Make a plan for how you will carry out SQA throughout the project. Think about
which set of software engineering activities is best for the project, and check the skill
level of the SQA team.
Disadvantage of SQA:
Quality assurance has a number of disadvantages, including the cost of adding more
resources and employing more workers to help maintain quality.
Software Review
Software Review is systematic inspection of a software by one or more individuals
who work together to find and resolve errors and defects in the software during the
early stages of Software Development Life Cycle (SDLC). Software review is an
essential part of Software Development Life Cycle (SDLC) that helps software
engineers in validating the quality, functionality and other vital features and
components of the software. It is a whole process that includes testing the software
product and it makes sure that it meets the requirements stated by the client.
Usually performed manually, software review is used to verify various documents like
requirements, system designs, codes, test plans and test
cases.
Peer review is the process of assessing the technical content and quality of
the product and it is usually conducted by the author of the work product
along with some other developers.
Peer review is performed in order to examine or resolve the defects in the
software, whose quality is also checked by other members of the team.
Peer Review has following types:
● (i) Code Review:
Computer source code is examined in a systematic way.
● (iii) Walkthrough:
Members of the development team are guided by the author and
other interested parties; the participants ask questions and
make comments about defects.
In addition, the purpose of FTR is to enable junior engineers to observe the analysis,
design, coding and testing approach more closely. FTR also serves to promote backup
and continuity, because a number of people become familiar with parts of the
software that they might not have otherwise seen. Actually, FTR is a class of reviews
that includes walkthroughs, inspections, round robin reviews and other small group
technical assessments of software. Each FTR is conducted as a meeting and is
considered successful only if it is properly planned, controlled and attended.
Example:
Suppose that during the development of software without FTR, the design costs 10
units, coding costs 15 units and testing costs 10 units, so the total cost so far is 35
units without maintenance. But there was a quality issue because of bad design, so to
fix it we have to redesign the software, and the final cost becomes 70 units. That is
why FTR is so helpful while developing software.
The review meeting: Each review meeting should be held considering the following
constraints- Involvement of people:
At the end of the review, all attendees of the FTR must decide what to do. Once the
decision is made, all FTR attendees complete a sign-off, indicating their participation
in the review and their agreement with the findings of the review team.
1. During the FTR, the reviewer actively records all issues that have
been raised.
2. At the end of the meeting all these issues raised are consolidated
and a review list is prepared.
3. Finally, a formal technical review summary report is prepared.
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
Review guidelines: Guidelines for conducting formal technical reviews should be
established in advance, distributed to all reviewers, agreed upon, and then followed. A
review that is uncontrolled can often be worse than no review at all. The following
represents a minimum set of guidelines for FTR.
Introduction
2. Data Collection: Gather relevant data about the software development and
testing processes, including defect counts, test results, and other
performance metrics.
4. Quality Metrics: Define and use quality metrics to quantify the quality
attributes of software products, such as reliability, performance, and
maintainability.
5. Root Cause Analysis: Use statistical analysis to identify the root causes of
defects and process inefficiencies, enabling targeted corrective actions.
1. Defect Analysis: Analyze defect data to identify trends, patterns, and critical
areas for improvement. Techniques like Pareto analysis help prioritize issues.
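A minimal sketch of Pareto analysis over hypothetical defect data: tally defects by recorded cause and rank them so that the "vital few" causes surface first:

from collections import Counter

# Hypothetical defect log: one recorded root cause per defect.
defect_causes = ["incomplete spec", "logic error", "incomplete spec",
                 "interface error", "incomplete spec", "logic error",
                 "incomplete spec", "standards violation"]

tally = Counter(defect_causes)
total = sum(tally.values())
cumulative = 0
for cause, count in tally.most_common():
    cumulative += count
    # The cumulative share shows how few causes account for most defects.
    print(f"{cause}: {count} ({100 * cumulative / total:.0f}% cumulative)")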
Software Reliability:
Software reliability means operational reliability. It is described as the ability of a
system or component to perform its required functions under stated conditions for a
specified period of time. Software reliability is also defined as the probability that a
software system fulfills its assigned task in a given environment for a predefined
number of input cases, assuming that the hardware and the inputs are free of error.
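For example, under this probabilistic definition, if a system produces correct results for 995 out of 1,000 representative input cases in the given environment, its estimated reliability for that operational profile is 995/1000 = 0.995 (the figures here are purely illustrative).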
● Document control –
All documents concerned with the development of a software
product should be properly managed and controlled.
● Planning –
Proper plans should be prepared and monitored.
● Review –
For effectiveness and correctness all important documents across all
phases should be independently checked and reviewed .
● Testing –
The product should be tested against specification.
● Organizational Aspects –
Various organizational aspects should be addressed e.g., management
reporting of the quality team.
● ISO 9000 does not give any guidelines for defining an appropriate
process and does not guarantee a high-quality process.
● No international accreditation agency exists for the ISO 9000 certification
process.