
Cpe 591 Lecture Note Engr Omosigho 2021-2022 Session Print

The document discusses faults in software engineering, defining them as errors that lead to incorrect results and outlining methods for identifying and resolving these faults, such as code reviews, testing, and debugging. It categorizes faults into types like algorithm, syntax, and hardware faults, and emphasizes the importance of fault management strategies like fault tolerance and avoidance to enhance software reliability. Additionally, it highlights the advantages and disadvantages of fault resolution, emphasizing the need for good programming practices to minimize faults and ensure system reliability.

INTRODUCTION: FAULTS IN SOFTWARE ENGINEERING


In software engineering, a fault is an error or defect in a program that causes it to produce incorrect or
unexpected results. Faults can occur at various stages of the software development process, from the
initial design to the final deployment. Common types of faults include coding errors, design flaws, and
requirements errors. The process of identifying and resolving faults is known as debugging or
troubleshooting. Preventing and detecting faults early in the development process can save time and
resources, and is an important aspect of software quality assurance.

There are several methods used to identify and resolve faults in software engineering, including:

1. Code reviews: A code review is a process in which other developers or team members review
the code written by a developer to identify potential errors or areas for improvement. This can
be done manually or with automated tools.
2. Testing: Testing is the process of evaluating a system or its components to determine
whether they satisfy the specified requirements. There are several types of testing,
such as unit testing, integration testing, and acceptance testing, which can help identify faults
in the software.
3. Debugging: Debugging is the process of identifying and resolving faults in the software by
analyzing the program’s source code, data, and execution. Debugging tools, such as
debuggers, can help developers identify the source of a fault and trace it through the code.
4. Monitoring: Monitoring is the ongoing process of tracking and analyzing the performance and
behavior of a system. Monitoring tools, such as log analyzers, can help identify and diagnose
faults in production systems.
5. Root cause analysis: Root cause analysis is a method used to identify the underlying cause of
a fault, rather than just addressing its symptoms. This can help prevent the same fault from
occurring in the future.
Beyond these methods, preventing faults in the first place is important to ensure that the software
functions correctly and meets the needs of its users. This can be achieved through good software design,
following best practices, and adhering to industry standards. Using version control systems, maintaining
documentation, and testing thoroughly are also important for preventing faults in software engineering.

Fault: A fault is an incorrect step, process, or data definition in a computer program that is
responsible for the unintended behavior of the program. Faults, or bugs, in hardware or software may
cause errors. An error is a part of the system state that can lead to the failure of the system; an
error in a program is an indication that a failure has occurred or is about to occur. If the system
has multiple components, errors in the system can lead to component failure, and because the many
components of a system interact with each other, the failure of one component may introduce one or
more faults into the system. The following cycle shows the behavior of a fault: a fault is the cause
of an error, and an error can lead to a failure.
Figure: Fault Behavior

Types of fault: Different types of fault can occur in software products. In order to remove a fault,
we first have to know what type of fault our program is facing. The following are the types
of faults:

Figure: Types of Faults

1. Algorithm Fault: This type of fault occurs when the component's algorithm or logic does not
produce the correct result for a given input because of wrong processing steps. It can often
be removed by reading through the program, i.e. desk checking.
2. Computational Fault: This type of fault occurs when the implementation of a formula is wrong
or is not capable of computing the desired result, e.g. combining integer and floating-point
variables may produce unexpected results.
3. Syntax Fault: This type of fault occurs due to the use of wrong syntax in the program. We have
to use the proper syntax of the programming language we are using.
4. Documentation Fault: The documentation of a program tells what the program actually
does. This fault occurs when the program does not match its documentation.
5. Overload Fault: We use data structures such as arrays, queues, and stacks
in our programs. When they are filled to their given capacity and we try to use them
beyond that capacity, an overload fault occurs in the program.
6. Timing Fault: When the system does not respond within the required time after a failure
occurs in the program, this is referred to as a timing fault.
7. Hardware Fault: This type of fault occurs when the specified hardware for the given
software does not work properly, usually because the continued behavior of the
hardware does not match its specification.
8. Software Fault: This fault occurs when the specified software does not work properly or does
not support the platform, i.e. the operating system, being used.
9. Omission Fault: This fault occurs when a key aspect is missing from the program, e.g. when
a variable is not initialized in the program.
10. Commission Fault: This fault occurs when a statement or expression is wrong, e.g. an integer is
initialized with a float.
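Two of these fault types can be demonstrated in a few lines of Python. This is an illustrative sketch (the function names are hypothetical): an omission fault, where a variable initialization is missing, and a computational fault, where floating-point arithmetic does not yield the exact decimal result a programmer might expect:

```python
# Omission fault: a key step (initializing 'total') is missing.
def omission_fault():
    try:
        return total + 1  # NameError: 'total' was never initialized
    except NameError:
        return "omission fault detected"

# Computational fault: floating-point arithmetic does not match the
# exact decimal result the programmer may expect.
def computational_fault():
    return 0.1 + 0.2 == 0.3  # False: binary floats are imprecise

print(omission_fault())       # omission fault detected
print(computational_fault())  # False
```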
Classification of faults:

1. Transient: The fault occurs and then disappears by itself.
2. Intermittent: The fault occurs, vanishes of its own accord, then reappears, and so on.
3. Permanent: The fault occurs and does not vanish until it is fixed manually.
Fault Avoidance: Faults in a program can be avoided by using techniques and procedures that aim to
prevent the introduction of faults during any phase of the safety lifecycle of a safety-related
system.
Fault Tolerance: The ability of a functional unit to continue to perform a required function
even in the presence of faults.

ADVANTAGES AND DISADVANTAGES:

Advantages of identifying and resolving faults in software engineering include:

1. Improved software quality: By identifying and resolving faults early in the development
process, software developers can improve the overall quality of the software and ensure that it
meets the needs of its users.
2. Reduced costs: Finding and fixing faults early in the development process can save time and
resources, and prevent costly rework or delays later in the project.
3. Enhanced customer satisfaction: Providing software that is free of faults can lead to increased
customer satisfaction and loyalty.
4. Reduced risk: By identifying and resolving faults early, developers can reduce the risk of
software failures and security vulnerabilities, which can have serious consequences.

However, there are also some disadvantages to identifying and resolving faults in software
engineering:

1. Increased development time: Finding and resolving faults can take additional time, which can
lead to delays in the project schedule and increased costs.
2. Additional resources needed: Identifying and resolving faults can require additional resources,
such as extra personnel or specialized tools, which can also increase costs.
3. Difficulty in identifying all faults: Identifying all faults in a software system can be difficult,
especially in large and complex systems. This can lead to missed faults and software failures.
4. Dependence on testing: Identifying faults depends largely on testing, and testing may not be
able to reveal all faults in the software.
Overall, identifying and resolving faults in software engineering is an important aspect of software
quality assurance, and can lead to improved software quality, reduced costs, and enhanced customer
satisfaction. However, it is also important to be aware of the potential disadvantages and to manage the
process effectively to minimize these risks.
Reliability terminology

Term                       Description
Human error or mistake     Human behavior that results in the introduction of faults into a system.
System fault               A characteristic of a software system that can lead to a system error.
System error               An erroneous system state that can lead to system behavior that is
                           unexpected by system users.
System failure             An event that occurs at some point in time when the system does not
                           deliver a service as expected by its users.
Failures are usually a result of system errors that are derived from faults in the system. However, faults
do not necessarily result in system errors if the erroneous system state is transient and can be 'corrected'
before an error arises. Errors do not necessarily lead to system failures if the error is corrected by built-in
error detection and recovery mechanisms.
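The fault–error–failure chain described above can be sketched in a few lines of Python (the function and the seeded defect are hypothetical). A fault is latent in the code; it produces an erroneous state only for certain inputs, and that error becomes a failure only when the wrong result is delivered to the user:

```python
# Fault: the discount is subtracted twice (a latent defect in the code).
def price_after_discount(price, discount):
    return price - discount - discount  # fault: discount applied twice

# With discount == 0 the fault never produces an erroneous state,
# so no error arises and no failure is observed.
ok = price_after_discount(100, 0)      # 100, correct

# With a non-zero discount the fault produces an erroneous state (error),
# which becomes a failure when the wrong price reaches the user.
wrong = price_after_discount(100, 10)  # 80 instead of the expected 90

print(ok, wrong)
```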
Fault management strategies to achieve reliability:
Fault avoidance
Development techniques are used that either minimize the possibility of mistakes or trap mistakes
before they result in the introduction of system faults.
Fault detection and removal
Verification and validation techniques that increase the probability of detecting and correcting
errors before the system goes into service are used.
Fault tolerance
Run-time techniques are used to ensure that system faults do not result in system errors and/or
that system errors do not lead to system failures.

FAULT TOLERANCE
In critical situations, software systems must be fault tolerant. Fault tolerance is required where there
are high availability requirements or where system failure costs are very high. Fault tolerance means
that the system can continue in operation in spite of software failure. Even if the system has been proved
to conform to its specification, it must also be fault tolerant as there may be specification errors or the
validation may be incorrect.
Fault-tolerant systems architectures are used in situations where fault tolerance is essential. These
architectures are generally all based on redundancy and diversity. Examples of situations where
dependable architectures are used:

• Flight control systems, where system failure could threaten the safety of passengers;
• Reactor systems where failure of a control system could lead to a chemical or nuclear
emergency;
• Telecommunication systems, where there is a need for 24/7 availability.

Protection system is a specialized system that is associated with some other control system, which can
take emergency action if a failure occurs, e.g. a system to stop a train if it passes a red light, or a system to
shut down a reactor if temperature/pressure are too high. Protection systems independently monitor the
controlled system and the environment. If a problem is detected, it issues commands to take emergency
action to shut down the system and avoid a catastrophe. Protection systems are redundant because they
include monitoring and control capabilities that replicate those in the control software. Protection systems
should be diverse and use different technology from the control software. They are simpler than the
control system so more effort can be expended in validation and dependability assurance. Aim is to ensure
that there is a low probability of failure on demand for the protection system.

Self-monitoring architecture is a multi-channel architecture where the system monitors its own
operations and takes action if inconsistencies are detected. The same computation is carried out on each
channel and the results are compared. If the results are identical and are produced at the same time, then it
is assumed that the system is operating correctly. If the results are different, then a failure is assumed and
a failure exception is raised. Hardware in each channel has to be diverse so that common mode hardware
failure will not lead to each channel producing the same results. Software in each channel must also be
diverse, otherwise the same software error would affect each channel. If high-availability is required, you
may use several self-checking systems in parallel. This is the approach used in the Airbus family of
aircraft for their flight control systems.
N-version programming involves multiple versions of a software system carrying out the same computations
at the same time. There should be an odd number of channels, typically three. The results are compared
using a voting system, and the majority result is taken to be the correct result. The approach is derived
from the notion of triple-modular redundancy, as used in hardware systems.

Hardware fault tolerance depends on triple-modular redundancy (TMR). Three replicated
identical components receive the same input, and their outputs are compared. If one output is
different, it is ignored and component failure is assumed. This approach is based on the assumptions
that most faults result from component failures rather than design faults and that there is a low
probability of simultaneous component failure.
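The majority voting used in both TMR and N-version programming can be sketched as follows. This is a minimal illustration, not any specific system's implementation; real voters must also handle timing and tolerances in the comparison:

```python
from collections import Counter

# Majority voter, as used in TMR and N-version programming: three
# channels compute the same result and the majority value wins, so
# one faulty channel is outvoted by the two correct ones.
def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: all channels disagree")

# Two correct channels outvote one faulty channel.
print(vote([42, 42, 41]))  # 42
```

With an odd number of channels (typically three) there is always a majority unless every channel produces a different answer, which the voter reports as a failure rather than guessing.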
PROGRAMMING FOR RELIABILITY
Good programming practices can be adopted that help reduce the incidence of program faults. These
programming practices support fault avoidance, detection, and tolerance.
Limit the visibility of information in a program
Program components should only be allowed access to data that they need for their
implementation. This makes accidental corruption of parts of the program state by these
components impossible. You can control visibility by using abstract data types where the data
representation is private and you only allow access to the data through predefined operations such
as get() and put().
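A minimal sketch of this idea in Python (the class is hypothetical): the representation is hidden behind the underscore naming convention, and clients interact only through the predefined put() and get() operations:

```python
# A stack as an abstract data type: the representation (_items) is
# private by convention, and clients use only put() and get().
class Stack:
    def __init__(self):
        self._items = []  # hidden representation

    def put(self, value):
        self._items.append(value)

    def get(self):
        if not self._items:
            raise IndexError("get from empty stack")
        return self._items.pop()

s = Stack()
s.put(1)
s.put(2)
print(s.get())  # 2: clients never touch _items directly
```

Because no client code depends on the list inside, the representation could later change (say, to a linked structure) without corrupting or even affecting any component that uses the stack.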
Check all inputs for validity
All programs take inputs from their environment and make assumptions about these inputs.
However, program specifications rarely define what to do if an input is not consistent with these
assumptions. Consequently, many programs behave unpredictably when presented with unusual
inputs and, sometimes, these are threats to the security of the system. You should therefore
always check inputs against the assumptions made about them before processing.
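A short sketch of such input checking (the function and the accepted range are hypothetical, chosen only for illustration): the assumptions about the input are made explicit and enforced before any processing happens:

```python
# Check an input against the assumptions made about it before processing.
def set_temperature(value):
    if not isinstance(value, (int, float)):
        raise TypeError("temperature must be a number")
    if not -50 <= value <= 150:
        raise ValueError("temperature out of expected range")
    return value

print(set_temperature(21.5))  # 21.5: a valid input passes through
```

Rejecting bad input at the boundary means the rest of the program can rely on its assumptions instead of behaving unpredictably deep inside some computation.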
Provide a handler for all exceptions
A program exception is an error or some unexpected event such as a power failure. Exception
handling constructs allow for such events to be handled without the need for continual status
checking to detect exceptions. Using normal control constructs to detect exceptions needs many
additional statements to be added to the program. This adds a significant overhead and is
potentially error-prone.
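The contrast can be sketched as follows (the function and file name are hypothetical): one handler deals with the unexpected events in a single place, instead of status checks scattered after every operation:

```python
# An exception handler deals with unexpected events in one place,
# avoiding continual status checking throughout the program.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # recover with a sensible default
    except OSError as exc:
        return f"unreadable: {exc}"  # report any other I/O problem

print(repr(read_config("no_such_file.cfg")))  # '' — missing file handled
```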
Minimize the use of error-prone constructs
Program faults are usually a consequence of human error because programmers lose track of the
relationships between the different parts of the system. This is exacerbated by error-prone
constructs in programming languages that are inherently complex or that don't check for mistakes
when they could do so. Therefore, when programming, you should try to avoid or at least minimize
the use of these error-prone constructs.
Error-prone constructs:

• Unconditional branch (goto) statements


• Floating-point numbers (inherently imprecise, which may lead to invalid comparisons)
• Pointers
• Dynamic memory allocation
• Parallelism (can result in subtle timing errors because of unforeseen interaction between
parallel processes)
• Recursion (can cause memory overflow as the program stack fills up)
• Interrupts (can cause a critical operation to be terminated and make a program difficult
to understand)
• Inheritance (code is not localized, which may result in unexpected behavior when
changes are made and problems of understanding the code)
• Aliasing (using more than 1 name to refer to the same state variable)
• Unbounded arrays (may result in buffer overflow)
• Default input processing (if the default action is to transfer control elsewhere in the
program, incorrect or deliberately malicious input can then trigger a program failure)

Provide restart capabilities


For systems that involve long transactions or user interactions, you should always provide a restart
capability that allows the system to restart after failure without users having to redo everything that
they have done.
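One common way to provide restart capability is checkpointing. The sketch below is a hypothetical minimal example (the file name and the notion of "work items" are assumptions): progress is saved after each completed item, so a restarted run resumes from the last checkpoint instead of redoing everything:

```python
import json
import os

CHECKPOINT = "progress.json"  # hypothetical checkpoint file name

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"done": 0}  # no checkpoint: start from scratch

# Process work items, saving a checkpoint after each one, so that a
# restart resumes from the last completed item rather than item zero.
state = load_checkpoint()
for item in range(state["done"], 5):
    state["done"] = item + 1
    save_checkpoint(state)

print(load_checkpoint()["done"])  # 5
os.remove(CHECKPOINT)  # clean up the demo file
```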
Check array bounds
In some programming languages, such as C, it is possible to address a memory location outside of
the range allowed for in an array declaration. This leads to the well-known buffer overflow
vulnerability, where attackers write executable code into memory by deliberately writing beyond
the top element in an array. If your language does not include bounds checking, you should
therefore always check that an array access is within the bounds of the array.
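Python, unlike C, checks bounds automatically and raises an exception, but an explicit guard still lets the program handle an out-of-bounds access deliberately instead of failing at an arbitrary point. A small sketch (the helper is hypothetical):

```python
# Guard every array access: return a default for out-of-bounds indices
# instead of letting the access fail at an arbitrary point.
def safe_get(arr, index, default=None):
    if 0 <= index < len(arr):
        return arr[index]
    return default  # out-of-bounds access handled deliberately

data = [10, 20, 30]
print(safe_get(data, 1))   # 20
print(safe_get(data, 99))  # None
```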
Include timeouts when calling external components
In a distributed system, failure of a remote computer can be 'silent' so that programs expecting a
service from that computer may never receive that service or any indication that there has been a
failure. To avoid this, you should always include timeouts on all calls to external components.
After a defined time period has elapsed without a response, your system should then assume failure
and take whatever actions are required to recover from this.
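A sketch of this with Python's standard socket library (the host address is the non-routable TEST-NET address 192.0.2.1, used here precisely because it never answers; the function name is hypothetical): the call either returns data within the timeout or the caller assumes failure and recovers:

```python
import socket

# Always set a timeout on calls to external components; assume failure
# if no response arrives within the defined period.
def fetch_banner(host, port, timeout_s=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout_s) as s:
            s.settimeout(timeout_s)
            return s.recv(64)
    except (socket.timeout, OSError):
        return None  # silent remote failure detected; let the caller recover

# TEST-NET address: the connection attempt times out, so we get None
# instead of waiting forever on a silently failed remote computer.
print(fetch_banner("192.0.2.1", 80, timeout_s=0.5))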
Name all constants that represent real-world values
Always give constants that reflect real-world values (such as tax rates) names rather than using
their numeric values, and always refer to them by name. You are less likely to make mistakes and
type the wrong value when you use a name rather than a value. It also means that when these
'constants' change (for sure, they are not really constant), you only have to make the change in
one place in your program.
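A minimal sketch (the rate and the rounding policy are hypothetical example values, not a real tax rule):

```python
# Name constants that represent real-world values instead of scattering
# numeric literals through the code.
VAT_RATE = 0.075  # if the tax rate changes, edit this one line only

def total_with_vat(amount):
    # round to 2 decimal places, as money is typically presented
    return round(amount * (1 + VAT_RATE), 2)

print(total_with_vat(1000))  # 1075.0
```

Had `0.075` been typed at every call site, a rate change would require finding every occurrence, and one mistyped `0.75` would be a commission fault waiting to surface.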

SOFTWARE FAILURE MECHANISMS


The software failure can be classified as:
Transient failure: These failures only occur with specific inputs.
Permanent failure: This failure appears on all inputs.
Recoverable failure: System can recover without operator help.
Unrecoverable failure: The system can recover only with operator help.
Non-corruption failure: Failure does not corrupt system state or data.
Corrupting failure: It damages the system state or data.

Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the specification that
the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing,
incorrect or unexpected usage of the software or other unforeseen problems.

Hardware vs. Software Reliability

• Hardware faults are mostly physical faults. Software faults are design faults, which are
tough to visualize, classify, detect, and correct.
• Hardware components generally fail due to wear and tear. Software components fail due to bugs.
• In hardware, design faults may also exist, but physical faults generally dominate. In software,
we can hardly find a strict counterpart to the hardware manufacturing process, unless the simple
act of uploading software modules into place counts. Therefore, the quality of the software does
not change once it is uploaded into storage and starts running.
• Hardware exhibits the failure behavior of the well-known bathtub curve, in which periods A, B,
and C stand for the burn-in phase, useful life phase, and end-of-life phase respectively. Software
reliability does not show the same features; a possible software failure-rate curve looks quite
different when projected on the same axes.

Figure: the hardware bathtub curve and a possible software failure-rate curve.

There are two significant differences between the hardware and software curves:
One difference is that in the last phase, software does not have an increasing failure rate as hardware
does. In this phase, the software is approaching obsolescence; there is no motivation for any upgrades or
changes to the software, so the failure rate does not change.

The second difference is that in the useful-life phase, software will experience a sharp increase in
failure rate each time an upgrade is made. The failure rate then levels off gradually, partly because the
defects created by the upgrade are found and fixed.

The upgrades in the figure above signify feature upgrades, not reliability upgrades. For feature upgrades,
the complexity of the software is likely to increase, since the functionality of the software is enhanced.
Even bug fixes may cause more software failures if a fix introduces other defects into the
software. Reliability upgrades, by contrast, are likely to produce a drop in the software failure rate,
since the objective of the upgrade is to enhance software reliability, for example through a redesign or
reimplementation of some modules using better engineering approaches such as the clean-room method.

A partial list of the distinct features of software compared to hardware is listed below:

Failure cause: Software defects are primarily design defects.
Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without warning.
Repairable system: Periodic restarts can help clear software problems.
Time dependency and life cycle: Software reliability is not a function of operational time.
Environmental factors: These do not affect software reliability, except insofar as they may affect
program inputs.
Reliability prediction: Software reliability cannot be predicted from any physical basis, since it depends
entirely on human factors in design.
Redundancy: Redundancy cannot improve software reliability if identical software elements are used.
Interfaces: Software interfaces are purely conceptual rather than visual.
Failure rate motivators: Failure rates are generally not predictable from analyses of separate statements.
Built with standard components: Well-understood and extensively tested standard elements help
improve maintainability and reliability. But in the software industry, we have not observed this trend:
code reuse has been around for some time but only to a minimal extent, and there are no standard
elements for software, except for some standardized logic structures.

SOFTWARE QUALITY ASSURANCE

What is Software Quality Assurance?


Software quality assurance (SQA) is a process that assures that all software engineering processes,
methods, activities, and work items are monitored and comply with the defined standards. These defined
standards could be one or a combination of standards such as ISO 9000, the CMMI model, ISO 15504, etc.
SQA incorporates all software development processes, from defining requirements to coding until
release. Its prime goal is to ensure quality.

What is Quality?

Quality refers to any measurable characteristic such as correctness, maintainability, portability,
testability, usability, reliability, efficiency, integrity, reusability, and interoperability.

There are two kinds of Quality:

Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The
grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance: Quality of conformance is the degree to which the design specifications are
followed during manufacturing. The greater the degree of conformance, the higher the level of quality of
conformance.

Software Quality: Software quality is defined as conformance to explicitly stated functional and
performance requirements, explicitly documented development standards, and inherent characteristics that
are expected of all professionally developed software.

A software product's quality can also be defined in terms of its fitness for purpose. That is, a quality
product does precisely what the users want it to do. For software products, fitness for purpose is generally
interpreted in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness
for purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan, or a
grinding machine, for software products it is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product. That is, it performs all tasks as specified in the
SRS document. But, has an almost unusable user interface. Even though it may be functionally right, we
cannot consider it to be a quality product.
The modern view of quality associates several quality attributes with a software product, such as the
following:
Portability: A software product is said to be portable if it can easily be made to work in various
operating system environments, on multiple machines, and with other software products.

Usability: A software product has better usability if various categories of users can easily invoke the
functions of the product.
Reusability: A software product has excellent reusability if different modules of the product can quickly
be reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS document have
been correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show
up, new tasks can be easily added to the product, and the functionalities of the product can be easily
modified, etc.

Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the
software process to ensure that each work product meets the requirements placed upon it. Quality control
includes a feedback loop to the process that created the work product.

Quality Assurance: Quality assurance is the preventive set of activities that provide greater confidence
that the project will be completed successfully. Quality assurance focuses on how the engineering and
management activities will be done.
As everyone is interested in the quality of the final product, it should be assured that we are building the
right product. This can be assured only when we inspect and review the intermediate products; if there
are any bugs, they are debugged, and in this way quality can be enhanced.

Importance of Quality

We would expect the quality to be a concern of all producers of goods and services. However, the distinctive
characteristics of software and in particular its intangibility and complexity, make special demands.

Increasing criticality of software: The final customer or user is naturally concerned about the general
quality of software, especially its reliability. This is increasingly the case as organizations become more
dependent on their computer systems and software is used more and more in safety-critical areas, for
example to control aircraft.

The intangibility of software: This makes it challenging to know that a particular task in a project has been
completed satisfactorily. The results of these tasks can be made tangible by demanding that the developers
produce 'deliverables' that can be examined for quality.

Accumulating errors during software development: As computer system development is made up of
several steps where the output from one level is input to the next, errors in the earlier deliverables are
added to those in the later stages, leading to accumulated detrimental effects. In general, the later in a
project an error is found, the more expensive it is to fix. In addition, because the number of errors
in the system is unknown, the debugging phases of a project are particularly difficult to control.
SQA Encompasses

o A quality management approach


o Effective Software engineering technology (methods and tools)
o Formal technical reviews that are applied throughout the software process
o A multitier testing strategy
o Control of software documentation and the changes made to it.
o A procedure to ensure compliances with software development standards
o Measuring and reporting mechanisms.

SQA ACTIVITIES

Software quality assurance is composed of a variety of functions associated with two different
constituencies: the software engineers who do technical work, and an SQA group that has responsibility
for quality assurance planning, record keeping, analysis, and reporting.

Following activities are performed by an independent SQA group:

1. Prepares an SQA plan for a project: The plan is developed during project planning and is
reviewed by all stakeholders. The plan governs quality assurance activities performed by the
software engineering team and the SQA group. It identifies evaluations to be performed,
audits and reviews to be conducted, standards that apply to the project, techniques for error reporting
and tracking, documents to be produced by the SQA team, and the amount of feedback provided to the
software project team.
2. Participates in the development of the project's software process description: The software team
selects a process for the work to be performed. The SQA group reviews the process description for
compliance with organizational policy, internal software standards, externally imposed standards
(e.g. ISO-9001), and other parts of the software project plan.
3. Reviews software engineering activities to verify compliance with the defined software
process: The SQA group identifies, reports, and tracks deviations from the process and verifies that
corrections have been made.
4. Audits designated software work products to verify compliance with those defined as a part of
the software process: The SQA group reviews selected work products, identifies, documents and
tracks deviations, verify that corrections have been made, and periodically reports the results of its
work to the project manager.
5. Ensures that deviations in software work and work products are documented and handled
according to a documented procedure: Deviations may be encountered in the project method,
process description, applicable standards, or technical work products.
6. Records any noncompliance and reports to senior management: Non- compliance items are
tracked until they are resolved.
This activity is a blend of two sub-activities which are explained below in detail:
(i) Product Evaluation:
This activity confirms that the software product is meeting the requirements that were discovered in the
project management plan. It ensures that the set standards for the project are followed correctly.

(ii) Process Monitoring:


This activity verifies if the correct steps were taken during software development. This is done by
matching the actually taken steps against the documented steps.

7. Controlling Change:


In this activity, we use a mix of manual procedures and automated tools to have a mechanism for change
control.

By validating the change requests, evaluating the nature of change, and controlling the change effect, it is
ensured that the software quality is maintained during the development and maintenance phases.

#8) Measure Change Impact:


If any defect is reported by the QA team, then the concerned team fixes the defect.

After this, the QA team should determine the impact of the change which is brought by this defect fix.
They need to test not only if the change has fixed the defect, but also if the change is compatible with the
whole project.

For this purpose, we use software quality metrics that allow managers and developers to observe the
activities and proposed changes from the beginning till the end of SDLC and initiate corrective action
wherever required.
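The fix-plus-regression idea above can be sketched in a few lines of Python. This is a hypothetical illustration (the `apply_discount` function and its defect are invented for the example, not taken from the lecture):

```python
# Hypothetical example: a defect fix to `apply_discount` must be verified
# both for the fix itself and for compatibility with existing behavior.

def apply_discount(price, percent):
    """Return the price after a percentage discount.

    Defect fix: percent is now clamped to the 0-100 range, so a bad
    input can no longer produce a negative price.
    """
    percent = max(0, min(100, percent))  # the fix under review
    return price * (1 - percent / 100)

def regression_suite():
    """Re-run old expectations plus a check that the defect is gone."""
    return {
        "fix_verified": apply_discount(100, 150) == 0,       # used to return -50
        "old_behavior_kept": apply_discount(100, 25) == 75,  # pre-existing case
    }

print(regression_suite())
```

In a real project the whole automated suite would be re-run, but the principle is the same: every fix is checked both for the defect it removes and for the behavior it must not disturb.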

#9) Performing SQA Audits:


The SQA audit inspects the entire SDLC process actually followed, comparing it against the established
process.

It also checks whether whatever was reported by the team in the status reports was actually performed or
not. This activity also exposes any non-compliance issues.

#10) Maintaining Records and Reports:


It is crucial to keep the necessary documentation related to SQA and share the required SQA information
with the stakeholders. The test results, audit results, review reports, change requests documentation, etc.
should be kept for future reference.

#11) Manage Good Relations:


It is very important to maintain harmony between the QA and the development teams.

We often hear that testers and developers feel superior to each other. This attitude should be avoided, as it
can affect the overall quality of the project.
QUALITY ASSURANCE V/S QUALITY CONTROL

1. Quality Assurance (QA) is the set of actions, including facilitation, training, measurement, and
analysis, needed to provide adequate confidence that processes are established and continuously
improved to produce products or services that conform to specifications and are fit for use.
Quality Control (QC) is described as the processes and methods used to compare product quality to
requirements and applicable standards, and the actions taken when a nonconformance is detected.

2. QA is an activity that establishes and evaluates the processes that produce the product; if there is no
process, there is no role for QA. QC is an activity that demonstrates whether or not the product
produced meets standards.

3. QA helps establish processes. QC relates to a particular product or service.

4. QA sets up a measurement program to evaluate processes. QC verifies whether particular attributes
exist, or do not exist, in a specific product or service.

5. QA identifies weaknesses in processes and improves them. QC identifies defects with the primary
goal of correcting errors.

6. Quality Assurance is a managerial tool. Quality Control is a corrective tool.

7. Verification is an example of QA. Validation is an example of QC.

Elements of Software Quality Assurance


There are 10 essential elements of SQA, which are listed below for your reference:
1. Software engineering Standards
2. Technical reviews and audits
3. Software Testing for quality control
4. Error collection and analysis
5. Change management
6. Educational programs
7. Vendor management
8. Security management
9. Safety
10. Risk management
SQA Techniques
There are several techniques for SQA. Auditing is the chief technique that is widely adopted. However,
we have a few other significant techniques as well.

Various SQA Techniques include:


• Auditing: Auditing involves inspection of the work products and their related information to
determine whether the set of standard processes was followed or not.
• Reviewing: A meeting in which the software product is examined by both the internal and
external stakeholders to seek their comments and approval.
• Code Inspection: It is the most formal kind of review; it uses static testing to find bugs
and avoid defect growth in the later stages. It is led by a trained moderator/peer and is based
on rules, checklists, and entry and exit criteria. The reviewer should not be the author of the code.
• Design Inspection: Design inspection is done using a checklist that inspects the below areas
of software design:
• General requirements and design
• Functional and Interface specifications
• Conventions
• Requirement traceability
• Structures and interfaces
• Logic
• Performance
• Error handling and recovery
• Testability, extensibility
• Coupling and cohesion
• Simulation: A simulation is a tool that models a real-life situation in order to virtually
examine the behavior of the system under study.
• Functional Testing: It is a QA technique that verifies what the system does without
considering how it does it. This type of black box testing mainly focuses on testing the
system specifications or features.
• Standardization: Standardization plays a crucial role in quality assurance. It decreases the
ambiguity and guesswork, thus ensuring quality.
• Static Analysis: It is a software analysis that is done by an automated tool without actually
executing the program. This technique is highly used for quality assurance in medical,
nuclear, and aviation software. Software metrics and reverse engineering are some popular
forms of static analysis.
• Walkthroughs: A software walkthrough or code walkthrough is a kind of peer review
where the developer guides the members of the development team to go through the product
and raise queries, suggest alternatives, and make comments regarding possible errors,
standard violations, or any other issues.
• Path Testing: It is a white box testing technique where the complete branch coverage is
ensured by executing each independent path at least once.
• Stress Testing: This type of testing is done to check how robust a system is by testing it
under heavy load i.e. beyond normal conditions.
• Six Sigma: Six Sigma is a quality assurance approach that aims at nearly perfect products or
services. It is widely applied in many fields, including software. The main objective of Six
Sigma is process improvement, so that the produced software is 99.99966% defect-free
(at most 3.4 defects per million opportunities).
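Among the techniques listed above, path testing is the easiest to make concrete. A minimal sketch in Python (the `classify` function is a made-up example, not from the source):

```python
def classify(n):
    """Classify an integer. The function contains two decisions, so its
    cyclomatic complexity is 3 and there are 3 independent paths."""
    if n < 0:
        return "negative"   # path 1
    if n == 0:
        return "zero"       # path 2
    return "positive"       # path 3

# Path testing: one test case per independent path, so every branch
# (and therefore every statement) is executed at least once.
path_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for value, expected in path_cases:
    assert classify(value) == expected
print("all", len(path_cases), "independent paths covered")
```

For real code, the number of independent paths is derived from the control-flow graph (cyclomatic complexity), and one test case is designed to drive execution down each path.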
Conclusion
SQA is an umbrella activity that is employed throughout the software lifecycle.

Software quality assurance is very important for your software product or service to succeed in the market
and survive up to the customer’s expectations.
There are various activities, standards, and techniques that you need to follow to assure that the
deliverable software is of high quality and aligns closely with the business needs.

PROJECT MONITORING AND CONTROL

Monitoring and Controlling are processes needed to track, review, and regulate the progress and
performance of the project. It also identifies any areas where changes to the project management method
are required and initiates the required changes.

The Monitoring & Controlling process group includes eleven processes, which are:

1. Monitor and control project work: The generic step under which all other monitoring and
controlling activities fall.
2. Perform integrated change control: The functions involved in making changes to the project plan.
When changes to the schedule, cost, or any other area of the project management plan are necessary,
the plan is changed and re-approved by the project sponsor.
3. Validate scope: The activities involved with gaining approval of the project's deliverables.
4. Control scope: Ensuring that the scope of the project does not change and that unauthorized
activities are not performed as part of the plan (scope creep).
5. Control schedule: The functions involved with ensuring the project work is performed according to
the schedule, and that project deadlines are met.
6. Control costs: The tasks involved with ensuring the project costs stay within the approved budget.
7. Control quality: Ensuring that the quality of the project's deliverables is to the standard defined in
the project management plan.
8. Control communications: Providing for the communication needs of each project stakeholder.
9. Control Risks: Safeguarding the project from unexpected events that negatively impact the project's
budget, schedule, stakeholder needs, or any other project success criteria.
10. Control procurements: Ensuring the project's subcontractors and vendors meet the project goals.
11. Control stakeholder engagement: The tasks involved with ensuring that all of the project's
stakeholders are left satisfied with the project work.

SOFTWARE QUALITY ASSURANCE PLAN


Abbreviated as SQAP, the software quality assurance plan comprises the procedures, techniques, and
tools that are employed to make sure that a product or service aligns with the requirements defined in the
SRS (Software Requirement Specification).

The plan identifies the SQA responsibilities of a team, and lists the areas that need to be reviewed and
audited. It also identifies the SQA work products.

The SQA plan document consists of the below sections:


1. Purpose section
2. Reference section
3. Software configuration management section
4. Problem reporting and corrective action section
5. Tools, technologies, and methodologies section
6. Code control section
7. Records: Collection, maintenance, and retention section
8. Testing methodology

Formal Technical Review (FTR) in Software Engineering

Software review:
A software review is an effective way of filtering errors in a software product. Reviews conducted at each
of these phases i.e., analysis, design, coding, and testing reveal areas of improvement in the product.
Reviews also indicate those areas that do not need any improvement. We can use software reviews to
achieve consistency and uniformity across products. Reviews also make the task of product creation more
manageable. Some of the most common software review techniques are:
i. Inspection
ii. Walkthrough
iii. Code review
iv. Formal Technical Reviews (FTR)
v. Pair programming

Formal Technical Review (FTR) is a software quality control activity performed by software
engineers.
Objectives of formal technical review (FTR): Some of these are:
• To uncover errors in logic, function, and implementation for any representation of the
software.
• To verify that the software meets its specified requirements.
• To ensure that the software is represented according to predefined standards.
• To help achieve software that is developed in a uniform manner.
• To make the project more manageable.
In addition, FTR enables junior engineers to observe the analysis, design, coding, and testing approach
more closely. FTR also promotes backup and continuity, because a number of people become familiar
with parts of the software that they might not otherwise have seen. In practice, FTR is a class of reviews
that includes walkthroughs, inspections, round-robin reviews, and other small-group technical
assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is
properly planned, controlled, and attended.
Example:

Suppose that, during development without FTR, the design costs 10 units and the coding costs 15 units,
so the total cost so far is 25 units (excluding testing and maintenance). If a quality issue caused by the
bad design is discovered only at this point, the software has to be redesigned and recoded, and the final
cost can rise to 50 units. Catching the design defect in an early FTR avoids this rework, which is why
FTR is so helpful while developing software.
The review meeting: Each review meeting should be held considering the following constraints.
Involvement of people:
1. Between three and five people should be involved in the review.
2. Advance preparation should occur, but it should require at most two hours of work for
each person.
3. The duration of the review meeting should be less than two hours. Given these
constraints, it should be clear that an FTR focuses on a specific (and small) part of the overall
software.
At the end of the review, all attendees of FTR must decide what to do.

1. Accept the product without any modification.
2. Reject the product due to serious errors (once the errors are corrected, another review must be
performed), or
3. Accept the product provisionally (minor errors have been encountered and must be corrected, but no
additional review is required).
Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the
review and their agreement with the findings of the review team.
Review reporting and record keeping:
1. During the FTR, a reviewer (the recorder) actively records all issues that have been raised.
2. At the end of the meeting, all the issues raised are consolidated and a review issues list is prepared.
3. Finally, a formal technical review summary report is prepared.
It answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
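The three questions of the summary report can be captured in a small record type. This is only an illustrative sketch; the field names and the decision rule below are assumptions for the example, not a prescribed report format:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSummaryReport:
    """Captures the three questions an FTR summary report answers."""
    what_was_reviewed: str
    who_reviewed_it: list
    findings_and_conclusions: list = field(default_factory=list)

    def decision(self):
        """Accept, accept provisionally, or reject, based on the findings."""
        serious = [f for f in self.findings_and_conclusions if f.get("serious")]
        if serious:
            return "reject"
        return "accept provisionally" if self.findings_and_conclusions else "accept"

report = ReviewSummaryReport(
    what_was_reviewed="login module design document v1.2",
    who_reviewed_it=["moderator", "author", "two peer reviewers"],
    findings_and_conclusions=[{"issue": "missing error-handling path", "serious": False}],
)
print(report.decision())  # a minor, correctable finding -> provisional acceptance
```

The point of structuring the report this way is that the three questions, and the team's decision, are recorded together and can be archived for future reference.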
Review guidelines: Guidelines for conducting formal technical reviews should be established in
advance. These guidelines must be distributed to all reviewers, agreed upon, and then followed. An
uncontrolled review can often be worse than no review at all. The following is a minimum set of
guidelines for FTR:
1. Review the product, not the producer.
2. Take written notes (for the record).
3. Limit the number of participants and insist upon advance preparation.
4. Develop a checklist for each product that is likely to be reviewed.
5. Allocate resources and a time schedule for FTRs in order to maintain the schedule.
6. Conduct meaningful training for all reviewers in order to make reviews effective.
7. Review earlier reviews, which serve as the basis for the current review being conducted.
8. Set an agenda and maintain it.
9. Raise the problem areas, but do not attempt to solve every problem noted.
10. Limit debate and rebuttal.

Different Phases of Formal Review

A formal review generally takes place in a piecemeal approach consisting of six essential steps. It
follows a formal process and is one of the most important techniques used in static testing.
The six steps are essential because they allow the team of developers to check and ensure software
quality, efficiency, and effectiveness. These steps are given below:

1. Planning:
For a specific review, the review process generally begins with a 'request for review' by the author
to the moderator or inspection leader. Individual participants, according to their understanding of the
document and their role, identify and determine defects, questions, and comments. The moderator
also performs entry checks and considers the exit criteria.
2. Kick-Off:
Getting everybody on the same page regarding the document under review is the main goal of this
meeting. The entry result and exit criteria are also discussed here. It is basically an optional step. It
provides the team with a better understanding of the relationship between the document under review
and the other documents. During kick-off, the document under review, the source documents, and all
other related documentation can also be distributed.

3. Preparation:
In the preparation phase, participants work individually on the document under review with the
help of the related documents, procedures, rules, and provided checklists. Spelling mistakes are
recorded on the document under review but are not mentioned during the meeting.

While reviewing the document, the reviewers identify and check for any defect, issue, or error and
offer their comments, which are later combined and recorded with the assistance of a logging form.

4. Review Meeting:
This phase generally involves three different parts, i.e., logging, discussion, and decision. The
different tasks related to the document under review are performed here.

5. Rework:
The author improves the document under review based on the defects detected and the improvements
suggested in the review meeting. The document needs to be reworked if the total number of defects
found is more than the expected level. The changes made to the document must be easy to identify
during follow-up; therefore, the author needs to indicate where changes were made.

6. Follow-Up:
Generally, after rework, the moderator must ensure that satisfactory action has been taken on all
logged defects, improvement suggestions, and change requests. The moderator makes sure that the
author has taken care of all the defects. In order to control, handle, and optimize the review process,
the moderator collects a number of measurements at every step of the process. Examples of
measurements include the total number of defects found, the number of defects found per page, and
the overall review effort.

CMMI level

CMMI level: CMMI stands for Capability Maturity Model Integration. This model originated in
software engineering. It can be employed to direct process improvement throughout a project, a
department, or an entire organization.

What is CMMI and what's the advantage of implementing it in an organization?


It is a process improvement approach that provides companies with the essential elements of an effective
process. CMMI can serve as a good guide for process improvement across a project, organization, or
division. CMMI was formed by using multiple previous CMM processes.
The following are the areas which CMMI addresses:

Systems engineering: This covers the development of total systems. System engineers concentrate on
converting customer needs into product solutions and support them throughout the product lifecycle.

Software engineering: Software engineers concentrate on the application of systematic, disciplined, and
quantifiable approaches to the development, operation, and maintenance of software.

Integrated Product and Process Development (IPPD): Integrated Product and Process Development
(IPPD) is a systematic approach that achieves a timely collaboration of relevant stakeholders throughout
the life of the product to better satisfy customer needs, expectations, and requirements. This section
mostly concentrates on the integration part of the project for different processes. For instance, it's possible
that your project is using services of some other third party component. In such situations the integration
is a big task itself, and if approached in a systematic manner, can be handled with ease.

Software acquisition: Many times, an organization has to acquire products from other organizations.
Acquisition is itself a big step for any organization, and if it is not handled in a proper manner, a disaster
is sure to happen.

What's the difference between implementation and institutionalization?


Both of these concepts are important while implementing a process in any organization. Any new process
implemented has to go through these two phases.

Implementation: This is just performing a task within a process area. A task is performed according to a
process, but the actions performed to complete the process are not yet ingrained in the organization; the
process is carried out according to the individual's point of view. When an organization starts to
implement any process, it first starts at this phase, i.e., implementation, and when the process proves
itself, it is raised to the organization level so that it can be implemented across the organization.
Institutionalization: Institutionalization is the output of implementing the process again and again. The
difference between implementation and institutionalization is that with implementation alone, if the
person who implemented the process leaves the company, the process is no longer followed; if the
process is institutionalized, it continues to be followed even after that person leaves the organization.

Can you explain the different maturity levels in a staged representation?

There are five maturity levels in a staged representation as shown in the following figure.

Maturity Level 1 (Initial): At this level everything is ad hoc. Development is completely chaotic, with
budgets and schedules often exceeded. In this scenario we can never predict quality.

Maturity Level 2 (Managed): At the managed level, basic project management is in place. However,
basic project management and practices are followed only at the project level.

Maturity Level 3 (Defined): To reach this level the organization should have already achieved level 2. In
the previous level the good practices and process were only done at the project level. But in this level all
these good practices and processes are brought to the organization level. There are set and standard
practices defined at the organization level which every project should follow. Maturity Level 3 moves
ahead with defining a strong, meaningful, organizational approach to developing products. An important
distinction between Maturity Levels 2 and 3 is that at Level 3, processes are described in more detail and
more rigorously than at Level 2 and are at an organization level.

Maturity Level 4 (Quantitatively Managed): To reach this level, the organization should have
already achieved Level 2 and Level 3. At this level, more statistics come into the picture. The organization
controls the project by statistical and other quantitative techniques. Product quality, process performance,
and service quality are understood in statistical terms and are managed throughout the life of the
processes. Maturity Level 4 concentrates on using metrics to make decisions and to truly measure whether
progress is happening and the product is becoming better. The main difference between Levels 3 and 4 are
that at Level 3, processes are qualitatively predictable. At Level 4, processes are quantitatively
predictable. Level 4 addresses causes of process variation and takes corrective action.

Maturity Level 5 (Optimizing): The organization has achieved the goals of maturity levels 2, 3, and 4. In this
level, processes are continually improved based on an understanding of common causes of variation
within the processes. This is like the final level; everyone on the team is a productive member, defects are
minimized, and products are delivered on time and within the budget boundary.
The following figure shows, in detail, all the maturity levels in a pictorial fashion.

What are the different models in CMMI?

There are two representations in CMMI. The first is "staged", in which the maturity levels organize the
process areas.

The second is "continuous", in which the capability levels organize the process areas.

5 CMMI levels and their characteristics are described in the below image:

An organization is appraised and awarded a maturity level rating (1-5) based on the type of appraisal.
SOFTWARE QUALITY ASSURANCE STANDARDS

In general, SQA may demand conformance to one or more standards.

Some of the most popular standards are discussed below:


ISO 9000: This standard is based on seven quality management principles which help the organizations to
ensure that their products or services are aligned with the customer needs.
7 principles of ISO 9000 are depicted in the below image:

ISO 9000 Certification

ISO (the International Organization for Standardization) is a consortium of national standards bodies
established to plan and foster standardization. ISO published its 9000 series of standards in 1987. It
serves as a reference for the contract between independent parties. The ISO 9000 standard determines the
guidelines for maintaining a quality system. The ISO standard mainly addresses operational methods and
organizational methods such as responsibilities, reporting, etc. ISO 9000 defines a set of guidelines for
the production process and is not directly concerned with the product itself.

Types of ISO 9000 Quality Standards

The ISO 9000 series of standards is based on the assumption that if a proper process is followed for
production, then good quality products are bound to follow automatically. The types of industries to which the various
ISO standards apply are as follows.

1. ISO 9001: This standard applies to the organizations engaged in design, development, production,
and servicing of goods. This is the standard that applies to most software development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but are only
involved in production. Examples in this category include steel and car manufacturing
industries that buy product and plant designs from external sources and are engaged only in
manufacturing those products. Therefore, ISO 9002 does not apply to software
development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and testing
of the products. For example, Gas companies.

How to get ISO 9000 Certification?

An organization that decides to obtain ISO 9000 certification applies to a registrar for registration.
The process consists of the following stages:

1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for
registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document review and Adequacy of Audit: During this stage, the registrar reviews the documents
submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied
with the suggestions it made during the review.
5. Registration: The registrar awards the ISO certification after the successful completion of all the
phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.

Reliability Models

A reliability growth model is a numerical model of software reliability, which predicts how software
reliability should improve over time as errors are discovered and repaired. These models help the manager
in deciding how much efforts should be devoted to testing. The objective of the project manager is to test
and debug the system until the required level of reliability is reached.
The following are the commonly used software reliability models:

Verification vs Validation in Software:

Software testing is a process of examining the functionality and behavior of the software through
verification and validation.

• Verification is a process of determining if the software is designed and developed as per the specified
requirements.
• Validation is the process of checking if the software (end product) has met the client’s true needs and
expectations.

Software testing is incomplete until it undergoes verification and validation processes. Verification and
validation are the main elements of software testing workflow because they:

1. Ensure that the end product meets the design requirements.


2. Reduce the chances of defects and product failure.
3. Ensure that the product meets the quality standards and expectations of all stakeholders involved.
Most people confuse verification and validation; some use them interchangeably. People often mistake
one for the other because of a lack of knowledge about the purposes they fulfill and the pain points they
address.

The software testing industry is estimated to grow from $40 billion in 2020 to $60 billion in 2027.
Considering the steady growth of the software testing industry, we put together a guide that provides an
in-depth explanation behind verification and validation and the main differences between these two
processes.

Verification

As mentioned, verification is the process of determining if the software in question is designed and
developed according to specified requirements. Specifications act as inputs for the software development
process. The code for any software application is written based on the specifications document.
Verification is done to check if the software being developed has adhered to these specifications at every
stage of the development life cycle. The verification ensures that the code logic is in line with
specifications.

Depending on the complexity and scope of the software application, the software testing team uses
different methods of verification, including inspection, code reviews, technical reviews, and
walkthroughs. Software testing teams may also use mathematical models and calculations to make
predictive statements about the software and verify its code logic.

Further, verification checks if the software team is building the product right. Verification is a continuous
process that begins well in advance of validation processes and runs until the software application is
validated and released.

The main advantages of the verification are:

1. It acts as a quality gateway at every stage of the software development process.


2. It enables software teams to develop products that meet design specifications and customer needs.
3. It saves time by detecting the defects at the early stage of software development.
4. It reduces or eliminates defects that may arise at the later stage of the software development process.

A walkthrough of verification of a mobile application

There are three phases in the verification testing of a mobile application development:

1. Requirements Verification
2. Design Verification
3. Code Verification
Requirements verification is the process of verifying and confirming that the requirements are complete,
clear, and correct. Before the mobile application goes for design, the testing team verifies business
requirements or customer requirements for their correctness and completeness.

Design verification is a process of checking if the design of the software meets the design specifications
by providing evidence. Here, the testing team checks if layouts, prototypes, navigational charts,
architectural designs, and database logical models of the mobile application meet the functional and non-
functional requirements specifications.

Code verification is a process of checking the code for its completeness, correctness, and consistency.
Here, the testing team checks if construction artifacts such as source code, user interfaces, and database
physical model of the mobile application meet the design specification.
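Code verification need not execute the program. As a toy illustration of a static check (the rule that every function must carry a docstring is just an assumed team convention, and the sample source is invented):

```python
import ast

SOURCE = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''

def functions_missing_docstrings(source):
    """A minimal static verification check: parse the source without
    executing it and report functions that lack a docstring."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None]

print(functions_missing_docstrings(SOURCE))
```

Real static analyzers check far richer properties (type consistency, unreachable code, resource leaks), but the shape is the same: inspect the construction artifact against a rule, without running it.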

Validation

Validation is often conducted after the completion of the entire software development process. It checks if
the client gets the product they are expecting. Validation focuses only on the output; it does not concern
itself about the internal processes and technical intricacies of the development process.

Validation helps to determine if the software team has built the right product. Validation is typically a
one-time process that starts only after verification is completed. Software teams often use a wide range of
validation methods, including White Box Testing (structural or design-based testing)
and Black Box Testing (functional testing).

White Box Testing is a method that validates the application using knowledge of its internal structure.
Here, testers design a predefined series of inputs that exercise the application's internal paths and check
that each path produces the output the design intends.

Black Box Testing works with three vital variables (input values, actual output values, and expected
output values). This method is used to verify whether the actual output of the software meets the
anticipated or expected output, without any reference to the internal code.
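A black-box check reduces to comparing actual against expected output for chosen inputs. A minimal sketch (the temperature converter is a stand-in system under test, not from the source):

```python
def fahrenheit_to_celsius(f):
    """System under test; its internals are irrelevant to a black-box tester."""
    return (f - 32) * 5 / 9

# Black-box validation: each case pairs an input value with an expected
# output value; the actual output is compared against the expectation.
cases = [(32, 0.0), (212, 100.0), (-40, -40.0)]
for given, expected in cases:
    actual = fahrenheit_to_celsius(given)
    assert actual == expected, f"input {given}: expected {expected}, got {actual}"
print("all black-box cases passed")
```

Note that the test knows nothing about the formula inside the function; it only holds the system to its externally specified behavior, which is exactly the black-box stance.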

The main advantages of validation processes are:

1. It ensures that the expectations of all stakeholders are fulfilled.


2. It enables software teams to take corrective action if there is a mismatch between the actual product
and the anticipated product.
3. It improves the reliability of the end-product.

A walkthrough of validation of a mobile application

Validation emphasizes checking the functionality, usability, and performance of the mobile application.

Functionality testing checks if the mobile application is working as expected. For instance, while testing
the functionality of a ticket-booking application, the testing team tries to validate it through:

1. Installing, running, and updating the application from distribution channels like Google Play and the
App Store
2. Booking tickets in the real-time environment (field testing)
3. Interruption testing (for example, handling an incoming call or notification mid-booking)

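
An automated functional check of the booking flow above might be sketched as follows. `BookingApp` and its `book` method are hypothetical stand-ins for the real application under test.

```python
class BookingApp:
    """Stand-in for the real ticket-booking application under test."""

    def __init__(self):
        self.seats = {"A1": "free", "A2": "free"}

    def book(self, seat: str) -> bool:
        # Book the seat only if it exists and is still free.
        if self.seats.get(seat) == "free":
            self.seats[seat] = "booked"
            return True
        return False

app = BookingApp()
assert app.book("A1") is True    # a free seat can be booked
assert app.book("A1") is False   # double-booking the same seat must fail
assert app.book("Z9") is False   # an unknown seat cannot be booked
```

In a real project these checks would run against the installed application on a device or emulator rather than an in-memory stub.
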
Usability testing checks if the application offers a convenient browsing experience. The user interface and
navigation are validated against criteria that include satisfaction, efficiency, and effectiveness.

Performance testing enables testers to validate the application by checking its reaction and speed under
the specific workload. Software testing teams often use techniques such as load testing, stress testing, and
volume testing to validate the performance of the mobile application.
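
A toy load test along these lines can be sketched as follows; the request handler and the 50 ms response-time budget are assumptions chosen purely for illustration.

```python
import time

def handle_request() -> None:
    # Stand-in for real request-handling work in the application.
    sum(range(1000))

N_REQUESTS = 1000  # the "specific workload" applied to the system

start = time.perf_counter()
for _ in range(N_REQUESTS):
    handle_request()
elapsed = time.perf_counter() - start

avg_ms = (elapsed / N_REQUESTS) * 1000
print(f"average response time: {avg_ms:.3f} ms")
assert avg_ms < 50  # hypothetical performance budget for this sketch
```

Real load, stress, and volume testing would use dedicated tooling and concurrent clients, but the principle is the same: apply a defined workload and check reaction time and speed against a stated budget.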

Main differences between verification and validation

Verification and validation, while similar, are not the same. There are several notable differences between
the two. The following comparison identifies them:

Definition
Verification: It is a process of checking if a product is developed as per the specifications.
Validation: It is a process of ensuring that the product meets the needs and expectations of stakeholders.

What it tests or checks for
Verification: It tests the requirements, architecture, design, and code of the software product.
Validation: It tests the usability, functionalities, and reliability of the end product.

Coding requirement
Verification: It does not require executing the code.
Validation: It emphasizes executing the code to test the usability and functionality of the end product.

Activities include
Verification: Requirements verification, design verification, and code verification.
Validation: Usability testing, performance testing, system testing, security testing, and functionality
testing.

Types of testing methods
Verification: Inspection, code review, desk-checking, and walkthroughs.
Validation: Black box testing, white box testing, integration testing, and acceptance testing.

Teams or persons involved
Verification: The quality assurance (QA) team would be engaged in the verification process.
Validation: The software testing team, along with the QA team, would be engaged in the validation process.

Target of test
Verification: It targets internal aspects such as requirements, design, software architecture, database,
and code.
Validation: It targets the end product that is ready to be deployed.

Verification and validation are an integral part of software engineering. Without rigorous verification and
validation, a software team may not be able to build a product that meets the expectations of stakeholders.
Verification and validation help reduce the chances of product failure and improve the reliability of the
end product.

Different project management and software development methods use verification and validation in
different ways. For instance, in agile development, verification and validation happen simultaneously
because the system is continuously refined based on end-user feedback.
