Software Testing Unit 1 P-2
Software quality is defined as a field of study and practice that describes the desirable attributes
of software products. There are two main approaches to software quality: defect management
and quality attributes.
A software defect can be regarded as any failure to address end-user requirements. Common
defects include missed or misunderstood requirements and errors in design, functional logic,
data relationships, process timing, validity checking, and coding errors.
The software defect management approach is based on counting and managing defects. Defects
are commonly categorized by severity, and the numbers in each category are used for planning.
More mature software development organizations use tools, such as defect leakage matrices
(for counting the numbers of defects that pass through development phases prior to detection)
and control charts, to measure and improve development process capability.
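The defect counting described above can be sketched in a few lines of Python. The defect records, phase names, and the leakage definition below are illustrative assumptions, not a standard scheme:

```python
from collections import Counter

# Hypothetical defect records: (severity, phase_introduced, phase_detected)
defects = [
    ("critical", "design", "system_test"),
    ("major", "coding", "unit_test"),
    ("major", "design", "design_review"),
    ("minor", "coding", "system_test"),
    ("minor", "requirements", "production"),
]

# Count defects per severity category for planning purposes.
by_severity = Counter(sev for sev, _, _ in defects)

def leakage(defects, phase):
    """Fraction of defects introduced in `phase` that escaped
    (were detected later rather than in the in-phase review)."""
    introduced = [d for d in defects if d[1] == phase]
    escaped = [d for d in introduced if d[2] != f"{phase}_review"]
    return len(escaped) / len(introduced) if introduced else 0.0

print(by_severity["major"])        # 2
print(leakage(defects, "design"))  # 0.5 (1 of 2 design defects escaped review)
```

A defect leakage matrix generalizes this idea to every (introduced, detected) phase pair.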
The quality attributes approach, in contrast, is best exemplified by fixed quality models, such as ISO/IEC 25010:2011. This standard describes a hierarchy of eight quality characteristics, each composed of sub-characteristics:
1. Functional suitability
2. Performance efficiency
3. Compatibility
4. Usability
5. Reliability
6. Security
7. Maintainability
8. Portability
McCall’s Quality Model
This model classifies all software requirements into 11 software quality factors. The 11 factors
are grouped into three categories – product operation, product revision, and product transition
factors.
• Product operation factors − Correctness, Reliability, Efficiency, Integrity, Usability.
• Product revision factors − Maintainability, Flexibility, Testability.
• Product transition factors − Portability, Reusability, Interoperability.
According to McCall’s model, the product operation category includes five software quality factors, which deal with the requirements that directly affect the daily operation of the software. They are as follows −
Correctness
These requirements deal with the correctness of the output of the software system. They
include −
• Output mission
• The required accuracy of output that can be negatively affected by inaccurate data or
inaccurate calculations.
• The completeness of the output information, which can be affected by incomplete data.
• The up-to-datedness of the information defined as the time between the event and the
response by the software system.
• The availability of the information.
• The standards for coding and documenting the software system.
Reliability
Reliability requirements deal with service failure. They determine the maximum allowed
failure rate of the software system, and can refer to the entire system or to one or more of its
separate functions.
Efficiency
This factor deals with the hardware resources needed to perform the different functions of the software system. These include processing capability (given in MHz), storage capacity (given in MB or GB), and data communication capability (given in Mbps or Gbps).
It also deals with the time between recharging of the system’s portable units, such as information system units located in portable computers or meteorological units placed outdoors.
Integrity
This factor deals with software system security, that is, with preventing access by unauthorized persons and with distinguishing between the group of users granted read permission and those also granted write permission.
Usability
Usability requirements deal with the staff resources needed to train a new employee and to
operate the software system.
According to McCall’s model, three software quality factors are included in the product
revision category. These factors are as follows −
Maintainability
This factor considers the efforts that will be needed by users and maintenance personnel to
identify the reasons for software failures, to correct the failures, and to verify the success of
the corrections.
Flexibility
This factor deals with the capabilities and efforts required to support adaptive maintenance
activities of the software. These include adapting the current software to additional
circumstances and customers without changing the software. This factor’s requirements also
support perfective maintenance activities, such as changes and additions to the software in
order to improve its service and to adapt it to changes in the firm’s technical or commercial
environment.
Testability
Testability requirements deal with the testing of the software system as well as with its
operation. It includes predefined intermediate results, log files, and also the automatic
diagnostics performed by the software system prior to starting the system, to find out whether
all components of the system are in working order and to obtain a report about the detected
faults. Another type of these requirements deals with automatic diagnostic checks applied by
the maintenance technicians to detect the causes of software failures.
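A minimal sketch of such automatic start-up self-diagnostics follows; the check names are hypothetical, and one check is made to fail deliberately so the fault report is visible:

```python
# Hypothetical diagnostic checks run before the system starts.
def check_config():
    return True   # stand-in: verify configuration files are readable

def check_storage():
    return True   # stand-in: verify required disk space is available

def check_network():
    return False  # stand-in: simulate a failed component for the report

DIAGNOSTICS = {"config": check_config, "storage": check_storage,
               "network": check_network}

def run_self_test():
    """Run each diagnostic prior to start-up and report detected faults."""
    faults = [name for name, check in DIAGNOSTICS.items() if not check()]
    return {"ok": not faults, "faults": faults}

report = run_self_test()
print(report)  # {'ok': False, 'faults': ['network']}
```

A real system would log this report and refuse to start when critical checks fail.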
According to McCall’s model, three software quality factors are included in the product
transition category that deals with the adaptation of software to other environments and its
interaction with other software systems. These factors are as follows −
Portability
Portability requirements deal with the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. They make it possible to continue using the same basic software in diverse situations.
Reusability
This factor deals with the use of software modules originally designed for one project in a
new software project currently being developed. They may also enable future projects to make
use of a given module or a group of modules of the currently developed software. The reuse
of software is expected to save development resources, shorten the development period, and
provide higher quality modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with
other equipment firmware. For example, the firmware of the production machinery and testing
equipment interfaces with the production control software.
Once program code is written, it must be tested to detect and subsequently handle all errors in it. A number of schemes are used for testing purposes.
Another important aspect is the fitness for purpose of a program, which ascertains whether the program serves the purpose it aims for. This fitness defines the software quality.
Web Application Testing
Web application testing is a software testing technique adopted specifically for applications hosted on the web, in which the application's interfaces and other functionalities are tested.
1. Functionality Testing - The following are some of the checks that are performed, though testing is not limited to this list:
• Verify there are no dead pages or invalid redirects.
• Check all the validations on each field.
• Provide wrong inputs to perform negative testing.
• Verify the workflow of the system.
• Verify the data integrity.
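The field-validation and negative-testing checks above can be sketched as follows; the field rules (an age range and a deliberately simplified email pattern) are illustrative assumptions, not production-grade validators:

```python
import re

def validate_age(value):
    """Field validation: age must be an integer between 1 and 120."""
    return value.isdigit() and 1 <= int(value) <= 120

def validate_email(value):
    """A deliberately simple email check, for illustration only."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

# Negative testing: wrong inputs must be rejected, valid ones accepted.
assert validate_age("30")
assert not validate_age("-5")      # negative number rejected
assert not validate_age("abc")     # non-numeric rejected
assert validate_email("user@example.com")
assert not validate_email("user@@example.com")  # malformed address rejected
```

In practice these assertions would live in an automated test suite run against every build.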
2. Usability testing - To verify how easy the application is to use.
• Test the navigation and controls.
• Content checking.
• Check for user intuition.
3. Interface testing - Performed to verify the interfaces and the data flow from one system to another.
4. Compatibility testing- Compatibility testing is performed based on the context of the
application.
• Browser compatibility
• Operating system compatibility
• Compatibility with various devices like notebooks, mobiles, etc.
5. Performance testing - Performed to verify the server response time and throughput under
various load conditions.
• Load testing - The simplest form of performance testing, conducted to understand the behaviour of the system under a specific load. Load testing measures the important business-critical transactions, and the load on the database, application server, etc. is also monitored.
• Stress testing - It is performed to find the upper-limit capacity of the system and to determine how the system performs if the current load goes well above the expected maximum.
• Soak testing - Soak testing, also known as endurance testing, is performed to determine the system parameters under a continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.
• Spike testing - Spike testing is performed by suddenly increasing the number of users by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.
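The load-testing idea above can be sketched by firing concurrent requests at a stand-in handler and measuring completion and timing; the handler, user count, and sleep duration are all made-up stand-ins for a real system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for a real transaction; sleeps to simulate server work."""
    time.sleep(0.01)
    return i

def run_load(users, requests_per_user):
    """Fire requests from `users` concurrent threads and time the run."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(handle_request,
                                range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    return len(results), elapsed

completed, elapsed = run_load(users=10, requests_per_user=5)
throughput = completed / elapsed  # requests per second under this load
print(completed)  # 50
# throughput varies with the machine, so it is printed, not asserted
print(round(throughput))
```

Dedicated tools such as JMeter or Locust apply the same principle at much larger scale, with ramp-up profiles for stress, soak, and spike scenarios.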
6. Security testing - Performed to verify whether the application is secure on the web, as data theft and unauthorized access are common issues. The following are some of the vulnerability classes checked to verify the security level of the system:
• Injection
• Broken Authentication and Session Management
• Cross-Site Scripting (XSS)
• Insecure Direct Object References
• Security Misconfiguration
• Sensitive Data Exposure
• Missing Function Level Access Control
• Cross-Site Request Forgery (CSRF)
• Using Components with Known Vulnerabilities
• Unvalidated Redirects and Forwards
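As an illustration of the first item, injection, the following sketch contrasts an unsafe concatenated SQL query with a parameterized one, using Python's built-in sqlite3 module; the table and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe_sql = "SELECT * FROM users WHERE name = '" + malicious + "'"
leaked = conn.execute(unsafe_sql).fetchall()  # returns every row!

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute("SELECT * FROM users WHERE name = ?",
                    (malicious,)).fetchall()

print(len(leaked))  # 1 (attacker retrieved rows without knowing any name)
print(len(safe))    # 0 (no user is literally named "' OR '1'='1")
```

Security testing probes each listed vulnerability class with crafted inputs like this one and verifies the application rejects them.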
GUI testing is a testing technique in which the application's user interface is tested to verify that the application behaves as expected with respect to user-interface behaviour.
GUI testing covers the application's response to keyboard and mouse movements, and how different GUI objects, such as toolbars, buttons, menu bars, dialog boxes, edit fields, and lists, behave in response to user input.
Before getting into the details of software quality assurance, you must know about software quality. Software quality is the extent to which the developed software meets the requirements specified by the customers. The more the requirements are satisfied, the better the quality of the software.
Software quality assurance (SQA) is the set of actions performed by the SQA group to ensure the quality of the software. SQA encompasses a wide range of activities that can be performed to ensure the quality of the software:
1. Standards
Organizations such as IEEE and ISO have laid down many software engineering standards, which may be imposed by the customer and should also be embraced by software engineers while developing software.
SQA team must ensure that standards established by the authorized organization are followed
by the software engineers.
3. Testing
The elemental goal of software testing is to identify bugs in the software. The SQA team has the responsibility of planning the testing systematically and conducting it efficiently. This raises the likelihood of finding any bugs present in the software.
4. Analyzing Error
Software testing reveals bugs and errors in the software, which are analysed by the SQA team in order to discover how they were introduced into the software and what methods are required to eliminate them.
5. Change Management
The customer can ask for modifications during the development of the software. Such changes are among the most distracting aspects of any software project.
If the implementation of the changes is not properly managed it will cause confusion which
will affect the quality of the software. The SQA team takes care that change management is
practised while developing the software.
6. Education
The SQA team ensures that every member of the software team is kept educated about new tools, technologies, and quality standards as they emerge.
7. Vendor Management
It is the responsibility of SQA to encourage software vendors to adopt quality practices while developing the software. SQA must also incorporate these quality requirements in contracts with software vendors.
8. Security Management
With the increase in cybercrime, governments have required software organizations to incorporate policies to protect data at all levels. It is the responsibility of SQA to verify whether the software organization is using appropriate technology to achieve software security.
9. Safety
The hidden defects of any human-rated software (e.g., aircraft applications) can lead to catastrophic events. So, it is the responsibility of SQA to evaluate the effect of software failure and introduce steps to eliminate the risk.
The SQA plan specifies what SQA approach you are going to follow and what engineering activities will be carried out, and it also ensures that you have the right talent mix in your team.
Later, based on the information gathered, the software designer can prepare the project estimation using techniques like WBS (work breakdown structure), SLOC (source lines of code), and FP (function point) estimation.
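A function point estimate of the kind mentioned above can be computed with the standard formula FP = UFP × (0.65 + 0.01 × ΣFi); the counts, weights (average complexity), and characteristic ratings below are made-up examples:

```python
# Unadjusted function point counts for the five component types.
counts = {
    "external_inputs": 10, "external_outputs": 8, "user_inquiries": 6,
    "internal_files": 4, "external_interfaces": 2,
}
# Standard weights for average complexity.
weights = {
    "external_inputs": 4, "external_outputs": 5, "user_inquiries": 4,
    "internal_files": 10, "external_interfaces": 7,
}

ufp = sum(counts[k] * weights[k] for k in counts)

# Sum of the 14 general system characteristics, each rated 0-5
# (here a hypothetical total of 42).
sum_fi = 42
fp = ufp * (0.65 + 0.01 * sum_fi)

print(ufp)           # 158
print(round(fp, 2))  # 169.06
```

The resulting FP count feeds effort and cost models in the project estimation.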
In this process, a meeting is conducted with the technical staff to discuss the actual quality requirements of the software and the design quality of the prototype. This activity helps in detecting errors in the early phases of the SDLC and reduces rework effort in the later phases.
This activity is a blend of two sub-activities which are explained below in detail:
(i) Product Evaluation:
This activity confirms that the software product is meeting the requirements that were
discovered in the project management plan. It ensures that the set standards for the project are
followed correctly.
(ii) Process Monitoring:
This activity verifies if the correct steps were taken during software development. This is done
by matching the actually taken steps against the documented steps.
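Matching the steps actually taken against the documented process can be sketched as a simple comparison; the step names below are hypothetical:

```python
# Documented process steps versus the steps actually recorded.
documented = ["requirements_review", "design_review", "code_review",
              "unit_test"]
actual = ["requirements_review", "code_review", "unit_test",
          "hotfix_patch"]

# Steps that were documented but never performed.
skipped = [s for s in documented if s not in actual]
# Steps performed that the documented process does not cover.
undocumented = [s for s in actual if s not in documented]

print(skipped)       # ['design_review']
print(undocumented)  # ['hotfix_patch']
```

Both lists are non-compliance findings that process monitoring would report.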
By validating the change requests, evaluating the nature of change and controlling the change
effect, it is ensured that the software quality is maintained during the development and
maintenance phases.
After this, the QA team should determine the impact of the change brought by the defect fix. They need to verify not only that the change has fixed the defect, but also that the change is compatible with the whole project.
For this purpose, we use software quality metrics, which allow managers and developers to observe the activities and proposed changes from the beginning to the end of the SDLC and to initiate corrective action wherever required.
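One widely used quality metric is defect density (defects per thousand lines of code, KLOC). The builds and the corrective-action threshold below are illustrative assumptions:

```python
def defect_density(defects_found, size_loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (size_loc / 1000)

# Track the metric per build: (defects found, lines of code).
builds = {"build_1": (30, 12000), "build_2": (55, 15000)}
THRESHOLD = 3.0  # hypothetical limit, defects/KLOC

# Flag builds whose density exceeds the threshold for corrective action.
flagged = {name: defect_density(d, loc)
           for name, (d, loc) in builds.items()
           if defect_density(d, loc) > THRESHOLD}

print(defect_density(30, 12000))  # 2.5
print(list(flagged))              # ['build_2'] (55/15 ≈ 3.67 > 3.0)
```

Tracking such metrics per build is what lets managers trigger corrective action early rather than at release time.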
It also checks whether the activities reported by the team in the status reports were actually performed. This activity also exposes any non-compliance issues.
We often hear that testers and developers feel superior to each other. This should be avoided, as it can affect the overall project quality.
Statistical Quality Assurance (SQA)
Traditional compliance testing techniques can sometimes provide limited pass/fail information,
which results in insufficient measurements on the batch’s quality control, identification of the
root cause of failure results and overall quality assurance (QA) in the production process.
Intertek combines legal, customer and essential safety requirements to customize a workable
QA process, called Statistical Quality Assurance (SQA). SQA is used to identify the potential
variations in the manufacturing process and predict potential defects on a parts-per-million
(PPM) basis. It provides a statistical description of the final product and addresses quality and
safety issues that arise during manufacturing.
SQA consists of three major methodologies:
1. Force Diagram - A Force Diagram describes how a product should be tested. Intertek
engineers base the creation of Force Diagrams on our knowledge of foreseeable use,
critical manufacturing process and critical components that have high potential to fail.
2. Test-to-Failure (TTF) - Unlike legally required compliance testing, TTF tells manufacturers how many defects they are likely to find in every million units of output. This information is incorporated into the process and indicates whether a product needs improvement in quality or is being over-engineered; correcting either eventually leads to cost savings.
3. Intervention - Products are separated into groups according to the total production
quantity and production lines. Each group then undergoes an intervention. The end
result is measured by Z-value, which is the indicator of quality and consistency of a
product to a specification. Intervention allows manufacturers to pinpoint a defect to a
specific lot and production line; thus saving time and money in corrective actions.
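The Z-value and its PPM prediction can be sketched as follows, assuming the measured characteristic is normally distributed with a one-sided specification limit; the numbers are hypothetical, and this is a generic statistical sketch, not Intertek's exact procedure:

```python
from statistics import NormalDist

def z_value(spec_limit, mean, stdev):
    """How many standard deviations the spec limit lies above the mean."""
    return (spec_limit - mean) / stdev

def defects_ppm(z):
    """Expected out-of-spec parts per million for a one-sided upper
    limit, assuming a normal distribution of the measurement."""
    return (1 - NormalDist().cdf(z)) * 1_000_000

# Hypothetical measurement: mean 10.0, stdev 2.0, upper spec limit 16.0.
z = z_value(spec_limit=16.0, mean=10.0, stdev=2.0)
print(z)                      # 3.0
print(round(defects_ppm(z)))  # 1350 (about 1350 defective units per million)
```

A higher Z-value therefore means a more consistent process: at z = 3 roughly 1350 PPM fall out of spec, while at z = 4 the prediction drops to about 32 PPM.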
Software Reliability
Software reliability is defined as the probability that a software system fulfils its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.
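Under the common simplifying assumption of a constant failure rate λ, this probability follows the exponential model R(t) = e^(−λt). A small sketch with made-up test data:

```python
import math

def reliability(failure_rate, hours):
    """Probability of failure-free operation for `hours`, assuming a
    constant failure rate (exponential reliability model)."""
    return math.exp(-failure_rate * hours)

# Failure rate estimated from test data: 2 failures over 10,000 hours.
lam = 2 / 10_000

print(round(reliability(lam, 100), 4))   # 0.9802
print(round(reliability(lam, 1000), 4))  # 0.8187
```

The same λ also gives the mean time between failures as MTBF = 1/λ, here 5,000 hours.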
For example, large next-generation aircraft will have over 1 million source lines of software
on-board; next-generation air traffic control systems will contain between one and two million
lines; the upcoming International Space Station will have over two million lines on-board and
over 10 million lines of ground support software; several significant life-critical defence
systems will have over 5 million source lines of software. While the complexity of software is
inversely associated with software reliability, it is directly related to other vital factors in
software quality, especially functionality, capability, etc.
Quality Standards
CMM = 'Capability Maturity Model', developed by the SEI (Software Engineering Institute). It is a model of five levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.