Software Quality Assurance and Testing

Session 1
Module 1: Essential SQA: Processes and Success Factors
Contents
• Definition
• Quality Assurance Vs Quality Control
• Business Models
• Cost of Quality
• Quality culture
• Success Factors
Software Development Life Cycle – Definition
Software life cycle is a process that contains the activities of requirements analysis, design,
coding, integration, testing, installation, and support for acceptance of software products. ISO
12207 [ISO 17]
Software Quality Definition
• Conformance to established software requirements; the capability of a software product
to satisfy stated and implied needs when used under specified conditions (ISO 25010
[ISO 11i]).
• The degree to which a software product meets established requirements; however, quality
depends upon the degree to which those established requirements accurately represent
stakeholder needs, wants, and expectations (IEEE 730) [IEE 14].
(Needs, Wants, Expectations can include - Functionalities, performance, efficiency, accurate
results, reliability, usability, costs, deadlines, etc.)
(True needs, Expressed needs, Specified needs, Achieved needs)
Software Quality Assurance
• The systematic application of scientific and technological knowledge, methods, and
experience for the design, implementation, testing, and documentation of software.
ISO 24765 [ISO 17a]
• A set of activities that define and assess the adequacy of software processes to
provide evidence that establishes confidence that the software processes are
appropriate for and produce software products of suitable quality for their intended
purposes. A key attribute of SQA is the objectivity of the SQA function with respect
to the project. The SQA function may also be organizationally independent of the
project; that is, free from technical, managerial, and financial pressures from the
project. (IEEE 730)

Quality Assurance Vs Quality Control
Quality Assurance (QA) refers to process management activities which are aimed at reducing
defects and errors for the end customer. QA involves looking at how the processes are performed
and making sure that the quality requirements are being fulfilled.
It is a proactive, process-based approach which aims to manage the quality of a product
before and during the production process.
Quality control (QC) is concerned with the end product and seeks to ensure that it is not
defective or damaged before reaching the customer.
QC is a reactive process which is employed after the product has been created to verify
its quality.

Quality Assurance Vs Testing
Software Quality Assurance – It is the process of ensuring that the software meets the set
standards of quality.
Software Testing – It is the process of verifying that software applications or programs will meet the user's requirements, and of identifying defects where they do not.
Both processes are essential for delivering a high-quality product

Quality Assurance vs Testing:
• Definition – Quality Assurance: a way to prevent mistakes and defects in the manufactured products. Testing: the process of executing a system with the intent of finding defects.
• Objective – Quality Assurance: to improve development and test processes so that defects do not arise while the product is being developed. Testing: to identify bugs and defects in the software.
• Focus – Quality Assurance: focuses on improving and optimizing the processes involved in the software development lifecycle. Testing: centers on verifying that the system meets specified requirements and finding bugs and issues.
• Scope – Quality Assurance: covers the entire software development process. Testing: limited to the testing phase of the software development lifecycle.
• Activities involved – Quality Assurance: involves process management, setting quality objectives, and process optimization. Testing: involves test case execution, bug reporting, and ensuring that the software behaves as expected.
• Responsibility – Quality Assurance: QA teams should ensure that processes are effective and efficient as per quality standards. Testing: software testers are responsible for executing test cases and reporting defects.
• Outcome – Quality Assurance: ensures that best practices and standards are followed throughout the software development lifecycle. Testing: ensures that the software is bug-free and works as expected.
• Tools used – Quality Assurance: tools for process monitoring, project management, and process documentation. Testing: tools designed for test case management, bug tracking, and test automation.
• Timing – Quality Assurance: ongoing throughout the software development lifecycle. Testing: performed after the software is developed and before it is released.

Why Quality Assurance is important
• Reduces Cost and Saves time
• Maintains Quality of the Product
• Ensures that product is Secure
• Improves accessibility and usability
• Improves Performance
• Increases customer satisfaction
• Protects company’s Reputation
Risks of neglecting Quality Assurance
• Software Development Risks
• Undetected code errors and Bugs
• System misbehaviors
• Poor system security
• Unstable performance
• Business Risk
• Missed deadlines
• Financial losses
• Reputational damage
• Unsatisfied customer
Software Components
Overall software quality can be achieved only if the quality of every component of the software is achieved.
Thus it is important to know the components of software –
• Programs – The instructions that have been translated into source code
• Procedures – User procedures and other processes
• Rules – Business Rules
• Documentation – Design Docs, Test Cases, User Manual, etc
• Data – Information that is inventoried, modeled, created.
Causes of Software Defect
Popular error-cause categories:
1. Problems with defining requirements
2. Lack of effective communication between client and developer
3. Deviations from specifications
4. Architecture and design errors
5. Coding errors (including test code)
6. Non-compliance with current processes/procedures
7. Inadequate reviews and tests
8. Documentation errors
Caused by –
• Clients, Analysts, Designers, Software engineers, Testers, or Users.
1. Problems with defining requirements.

Good requirements are - Correct, complete, clear for each stakeholder group (e.g., the
client, the system architect, testers, etc), unambiguous, concise (simple, precise),
consistent, feasible (realistic, possible), necessary, independent of the design, independent
of the implementation, verifiable and testable, can be traced back to a business need,
unique.
2. Lack of effective communication between client and developer
Poor documentation, ineffective change management process, etc.
3. Deviations from specifications
Reuse of existing code, trimming of partial requirements, etc.
4. Architecture and design errors
Incomplete overview of S/W to be developed, incorrect business or technical process
sequence, poor design of business or process rule criteria.
5. Coding errors (including test code)
a. Inappropriate choice of programming language and conventions
b. Poor understanding/interpretation of design documents
c. Incoherent abstractions, Boundary condition errors.
6. Non-compliance with current processes/procedures
Non-compliance with the organization's internal methodology, including processes, procedures,
steps, deliverables, templates, and standards (e.g., coding standards).
7. Inadequate reviews and tests
Design review, code review, Test plan and Test case review.
8. Documentation errors
Incomplete, obsolete, outdated documentation.
Software Delivery Issues
Issues faced by S/W developers (organizations):
1. Incorrect results
2. Exceeding budget
3. Penalties for late delivery
4. Legal Proceedings - Not delivering what the client asked for.
5. Missing Market Opportunity
6. Bad Reviews in Press
7. High level of Support calls
8. Tarnishing the reputation of the organization.

Software Delivery Assumptions and Practices


Assumptions and facts for S/W Delivery -
• Delivery on schedule and within budget is crucial
• Reliable, correct software is crucial
• Requirements must be known and detailed from the project outset
• Projects are typically large scale with many communication channels
• It is necessary to show that what was promised has indeed been delivered
• Plans must be developed, and regular progress reports prepared (which are sent to project
management and the client).

Practices to be Followed -
• Lots of documentation
• Practices followed in Estimating and Management
• Waterfall development Cycle
• Project Audits
Business Models in Software
• Custom systems written on contract
• Accenture, TATA, and Infosys.
• Custom software written in-house : To improve organizational efficiency
• Internal IT organization.
• Commercial software: B2B
• e.g., Oracle and SAP.
• Mass-market software: B2C
• e.g., Microsoft and Adobe.
• Commercial and mass-market firmware: Embedded hardware and systems
• e.g., digital cameras, automobile braking systems, and airplane engines.
Factors in each Business Model
1. Criticality
2. Uncertainty of users’ wants and needs
3. Range of environments
4. Cost of fixing errors
5. Regulations
6. Project size
7. Communication
a. Concurrent developer–developer communication:
b. Developer–maintainer communication
c. Communication between managers and developers
8. Organization’s culture –
a. Control culture,
b. Skill culture,
c. Collaborative culture,
d. Thriving culture
Open-Source Software: Distributed with its source code and the authorization to modify and
distribute it freely under the condition that it is also provided as open-source software once
modified.
Concerns
• Undemonstrated quality
• Lack of support
• Delays in providing fixes

Software Cost
The cost of a project can be grouped into the following buckets:
• Implementation costs

• Prevention Costs
• Appraisal Costs (Testing)
• Costs associated with failures or anomalies.
If the implementation were 100% correct, the cost of the project would be the implementation cost alone, and there would be no need to incur any cost for quality.

(Figure: Software Quality Cost)


Cost of Software Quality
1. Prevention Costs
a. Training to Employees in implementation, review, etc.
b. Better processes. Setting quality goals, standards, thresholds.
c. Cost of better tools and technology used for implementation
2. Appraisal Costs
a. Testing cost (Verification and validation)
b. Cost for Testing tools and technologies
3. Failures Cost
a. Internal – During development - Rework, Redesign.
b. External – At Client’s premises.
i. Replacement cost.
ii. Managing Disputes.
4. Warranty claims, Loss of reputation.
Costs ↓ / Quality → Low High
Prevention Costs Low High
Appraisal Costs Low High
Failures Cost High Low
Warranty claims, Loss of reputation High Low
Need to settle at an optimal cost and optimal quality; the sketch below illustrates the trade-off.
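
A minimal sketch in Python of this trade-off (all cost figures are illustrative assumptions, not from the source): conformance costs (prevention and appraisal) grow steeply as the quality level approaches 100%, non-conformance costs (failures, warranty, reputation) fall, and the total cost is minimized at an intermediate quality level.

    # Hypothetical cost-of-quality model; every figure here is an illustrative assumption.
    def cost_of_quality(q):
        """Total quality cost at quality level q, where 0 <= q < 1."""
        conformance = 50 * q / (1.05 - q)    # prevention + appraisal: grows steeply near 100%
        nonconformance = 400 * (1 - q)       # internal/external failures, warranty, reputation
        return conformance + nonconformance

    levels = [i / 100 for i in range(100)]
    optimal = min(levels, key=cost_of_quality)
    print(f"optimal quality level ~ {optimal:.2f}, total cost ~ {cost_of_quality(optimal):.0f}")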

(Figure: Cost of Fixing a Defect)
(Figure: Cost of Propagating an Error)
(Figure: Cost of Software Quality)


(Figure: Relationship Between the Process Maturity Characteristic and Rework [KRA 98])
Quality Culture
Cultural Principles in Software Engineering
1. Do not compromise quality due to cost or deadlines. Quality is the number one priority.
2. Quality Work is appreciated.
3. Continuing education and learning.
4. Participation of the client in S/W development process.
5. Share the vision of the final product with the client.
6. Continuous improvement in your software development process.
7. Ensure that it is a peer, not a client, who finds the defect.
8. Repeatedly go through all development steps except coding; coding should only be done once.
9. Controlling error reports and change requests is essential to quality and maintenance.
10. If you measure what you do, you can learn to do it better.
11. Not everything can be changed; identify and work on the changes that will reap the most benefits.
The Eight Principles of the IEEE’s Software Engineering Code of Ethics [IEE 99]

1. The Public – Software engineers shall act consistently with the public interest.
2. Client and Employer – Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest.
3. Product – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. Judgment – Software engineers shall maintain integrity and independence in their professional judgment.
5. Management – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. Profession – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. Colleagues – Software engineers shall be fair to and supportive of their colleagues.
8. Self – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Factors that Foster Software Quality


1. Good team spirit.
2. The skills of the members in the organization – Competency of Team
3. Managers who set a good example.
4. Effective communication between colleagues, managers, and the client.
5. Recognizing and valuing initiatives to improve quality.
6. Highlighting the notion of organizational culture of guaranteeing quality.
7. Including the notion of culture in an organization’s strategy.
8. Common goals and perception about quality between managers, software engineer and
other members of the organization.
9. Client involvement throughout the project. Early clarity of requirements and removal of
ambiguities.
10. Clearly defined roles and responsibilities.
Factors that adversely affect Software Quality
1. Give employees the responsibility but not the authority to take the actions necessary to
ensure the project’s success.
2. When we “shoot the messenger.”
3. When managers hide their heads in the sand rather than solve problems.
4. Lack of knowledge in quality assurance.
5. Unrealistic time frames.
6. A lack of common working methodology between team members.
7. A manager who says yes to everyone
8. Not taking quality requirements into account.
9. Not taking software criticality into account.
10. Making excuses to not be concerned with quality.
11. A lack of cohesion between SQA techniques and environmental factors in organization.
12. Confusing terminology used to describe software problems.
13. A lack of understanding or interest for collecting information on software error sources.
14. Poor understanding of software quality fundamentals.
15. Ignorance of or non-adherence to published SQA techniques.
Session 2
Module 2: Quality Models and Management

Contents
• Software Quality Models
• McCall
• IEEE 1061
• ISO 25000 Series
• Quality Requirements
• Frameworks
• ISO/IEC/IEEE 12207 - Software Life Cycle Processes
• CMMI-Development
• ITIL Framework

Quality Perspectives
Five quality perspectives as described by Garvin (1984) [GAR 84]
1. Transcendental approach to quality: Although I can’t define quality, I know it when I
see it.
2. User-based approach: As expected from the user’s perspective. Can change with the
expectations of each user
3. Manufacturing-based approach: A “process-based” approach; compliance with the process while defining the requirements and throughout the life cycle.
4. Product-based approach: Quality of internal properties of the software components, for
example, the quality of its architecture, quality of code, etc
5. Value-based approach: Focuses on the elimination of all activities that do not add value.

Software Quality Model


Quality Model
• A defined set of characteristics, and of relationships between them, which provides a
framework for specifying quality requirements and evaluating quality. ISO 25000
[ISO 14a]
A software quality model can be used to
• Define software quality characteristics that can be evaluated;
• Define quality characteristics that will serve as the non-functional requirements;
• Set a measure and its objectives for each of the quality requirements.

Quality Models – McCall [MCC 77]
• Proposed by McCall [MCC 77], developed in the 1970s for the United States Air Force
and was designed for use by software designers and engineers
• Uses Product based approach.
• More on Internal Quality
• Proposes three perspectives for the user
• Each perspective is broken down into quality factors
• Each quality factor is broken down into measurable quality criteria

Internal Quality Vs External Quality
A change request
 External view –
o How soon it can be implemented, and whether the requirement is fulfilled or not.
 Internal View –
o Effort to identify the change – Location and size.
o Implementing the change
o Testing the change
o Documenting the change
o Releasing the updated version
Quality Models – IEEE 1061
• 1061-1998 - IEEE Standard for a Software Quality Metrics Methodology
• Presents examples of measures without formally prescribing specific measures.

IEEE 1061 Stakeholders
The model provides defined steps for –

• Software program acquisition – Measurements when adapting and releasing the software.
• Software development – For designers and developers.
• Quality assurance/quality control/audit – For an outside team to evaluate.
• Maintenance – While making changes or upgrades to the software.
• Client/user – Allows the user to state quality characteristics and evaluate them during user acceptance testing.
IEEE 1061 – Steps to be followed
Steps to be followed
• List the non-functional quality requirements and set their priority.
• Meet every stakeholder.
• List and resolve conflicts.
• Quantify each quality factor, with a measure and threshold for each (see the sketch after this list).
• Have thresholds approved.
• Perform a cost–benefit study – Take into account the additional costs to enter information, automate calculations, interpret and present the results, and modify support software, as well as the costs of software assessment specialists, of specialized software to measure the application, and of the training required to apply the measurement plan.
• Have a clear measurement method – The process of measurement, the data collection method, the areas of the software that need to be measured, etc.
• Analyze Results – Analyze differences between measurements taken at different times
and different environments.
• Validate the measures - Identify measures that can predict the value of quality factors,
which are numeric representations of quality requirements.
o Model recommends techniques to Validate Measures.
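
As a concrete illustration of quantifying a quality factor, here is a minimal Python sketch; the measure, numbers, and threshold are illustrative assumptions, since IEEE 1061 presents example measures without prescribing specific ones.

    # Hypothetical direct measure for a quality factor, with an approved threshold.
    defects_found = 42
    size_kloc = 12.5          # size in thousands of lines of code
    threshold = 4.0           # assumed approved threshold, in defects/KLOC

    defect_density = defects_found / size_kloc
    print(f"defect density = {defect_density:.2f} defects/KLOC, "
          f"meets threshold: {defect_density <= threshold}")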
IEEE 1061 Adoption
Adoption hesitancy
• Seen as being too expensive
• Some did not see its usefulness
• The industry was not ready to use it
• Suppliers did not want to be measured in this way by their clients.
Thus adoption was low, and the standard was not popular outside of the military.

ISO 25000 Series
Introduction
• The international standardization of a software quality model, the ISO 9126 standard
[ISO 01], was published for the first time in 1991
• Lots of terminology used from McCall [MCC 77] and IEEE 1061 [IEE 98b]
• International Organization for Standardization
• Replaced by the ISO/IEC 25000 standard [ISO 14a].
• The ISO 25000 [ISO 14a] standard allows for the evaluation of the quality of the final software product, as it
1. Assesses the quality of the development process, and
2. Assesses the quality of the final product.
 The ISO 25000’s series of standards recommends the following four steps [ISO 14a]:
o Set quality requirements
o Establish a quality model
o Define quality measures
o Conduct evaluations

 The ISO/IEC 25000 series of standards is also known as SQuaRE (System and Software Quality Requirements and Evaluation)
 A framework for the evaluation of software product quality

• ISO/IEC 2500n – Quality Management Division
• ISO/IEC 25000 - Guide to SQuaRE
• ISO/IEC 25001 - Planning and Management
• ISO/IEC 2501n – Quality Model Division
• ISO/IEC 25010 - System and software quality models
• ISO/IEC 25012 - Data Quality model
• ISO/IEC 2502n – Quality Measurement Division
• ISO/IEC 25020 - Measurement reference model and guide
• ISO/IEC 25021 - Quality measure elements
• ISO/IEC 25022 - Measurement of quality in use
• ISO/IEC 25023 - Measurement of system and software product quality
• ISO/IEC 25024 - Measurement of data quality
• ISO/IEC 2503n – Quality Requirements Division
• ISO/IEC 25030 - Quality requirements
• ISO/IEC 2504n – Quality Evaluation Division
• ISO/IEC 25040 - Evaluation reference model and guide
• ISO/IEC 25041 - Evaluation guide for developers, acquirers and independent
evaluators
• ISO/IEC 25042 - Evaluation modules
• ISO/IEC 25045 - Evaluation module for recoverability
(Figure: ISO 25010 quality model)

Requirement – Definition
Functional Requirement
• A requirement that specifies a function that a system or system component must be able
to perform. ISO 24765 [ISO 17a]
Non-Functional Requirement
• A software requirement that describes not what the software will do but how the software
will do it. Synonym: design constraint. ISO 24765 [ISO 17a]
Performance Requirement
• The measurable criterion that identifies a quality attribute of a function or how well a
functional requirement must be accomplished. A performance requirement is always an
attribute of a functional requirement. IEEE 730 [IEE 14]

Software Quality Requirements


In general, the requirements document forms the basis of the design and development of any product.
Steps for defining Requirements
• Gather : collect all wishes, expectations, and needs of the stakeholders;
• Prioritize : Prioritize based on relative importance of requirements (essential, desirable);
• Analyze : Check for consistency and completeness of requirements
• Describe : Can be easily understood by users and developers;
• Specify : Transform the business requirements into software specifications
Category of Requirements –
• Functional Requirements: Includes business and functional requirements for the user.
• Non-Functional (Quality) Requirements: quality characteristics and sub-characteristics
such as security, confidentiality, integrity, availability, performance, and accessibility.
• Constraints: Limitations such as infrastructure on which the system must run or the
programming language that must be used to implement the system.
Quality requirements (non-functional requirements) should be well documented with details such as the following (a documented example follows this list) –
• Quality characteristic, Quality sub-characteristic
• Measure (i.e., formula)
• Objectives (i.e., target)
• Importance - “Indispensable”, “Desirable” or “Non-applicable.”
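
A minimal sketch in Python of one such documented quality requirement (all field values are illustrative assumptions):

    # Hypothetical documented quality requirement; all values are illustrative.
    quality_requirement = {
        "characteristic": "Performance efficiency",
        "sub_characteristic": "Time behaviour",
        "measure": "95th-percentile response time (ms)",   # the formula/measure
        "objective": 200,                                  # the target
        "importance": "Indispensable",                     # or "Desirable" / "Non-applicable"
    }
    print(quality_requirement)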
Steps suggested for defining non-functional requirements –
Identify stakeholders → Develop Questionnaire → Conduct Interviews → Consolidate and prioritize results → Obtain consensus on quality factors
REQUIREMENT TRACEABILITY – Requirements should be traceable through documents such as the following (a traceability-matrix sketch follows this list) –
• Specifications document
• Architecture and design document
• Code, and
• User Manuals
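
A minimal sketch in Python (requirement IDs and artifact names are hypothetical) of a traceability matrix linking each requirement to its specification, design, code, and test artifacts, and flagging missing links:

    # Hypothetical traceability matrix; IDs and artifact names are illustrative.
    traceability = {
        "REQ-001": {"spec": "SRS 3.1", "design": "DD 2.4",
                    "code": "transfer.py", "tests": ["TC-11", "TC-12"]},
        "REQ-002": {"spec": "SRS 3.2", "design": None,
                    "code": None, "tests": []},
    }

    for req_id, links in traceability.items():
        missing = [artifact for artifact, value in links.items() if not value]
        if missing:
            print(f"{req_id}: missing trace links -> {', '.join(missing)}")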

Quality of Requirements
Good requirements should have following characteristics
• Necessary : Prioritized list as per the objective of what needs to be achieved.
• Unambiguous: Clear enough to be interpreted in only one way.
• Concise: Precise, brief, and easy to read.
• Coherent: Must not contradict the other requirements. Consistent terminology
• Complete: Stated fully in one location and convey completeness, not requiring any other
references.
• Achievable: They must be realistic regarding their implementation in terms of available
finances, available resources, and within the time available.
• Verifiable: Should be verifiable whether they are met or not based on four possible
methods - inspection, analysis, demonstration, or tests.
• ISO 29148 and IEEE 830 contain complete sections on requirements engineering:
o https://www.iso.org/obp/ui/#iso:std:iso-iec-ieee:29148:ed-2:v1:en
o https://standards.ieee.org/ieee/830/1222/

ISO/IEC/IEEE 12207 Software Life Cycle Processes
• Establishes a common framework for software life cycle processes.
Each process has –
• Title
• Purpose
• Outcome
• Activities
• Tasks

ISO/IEC/IEEE 12207
Quality Assurance Process in Detail
• Title – The Quality Assurance Process
• Purpose
o To help ensure the effective application of the organization’s quality management
process to the project.
o QA focuses on providing confidence that quality requirements will be fulfilled.
Proactive analysis of the project life cycle processes and outputs is performed to assure that the product being produced will be of the desired quality and that
organization and project policies and procedures are followed.
• Outcome
o Project Quality Assurance procedures are defined and implemented.
o Criteria and methods for QA evaluations are defined.
o Evaluations of the project’s products, services, and processes are performed,
consistent with quality management policies, procedures, and requirements.
o Results of evaluations are provided to relevant stakeholders.
o Incidents are resolved.
o Prioritized problems are treated.
Activities & Task
• Prepare for Quality Assurance.
o Define a Quality Assurance strategy.
o Establish independence of QA from other life cycle processes.
• Perform product or service evaluations.
o Evaluate products and services for conformance to established criteria, contracts,
standards, and regulations.
o Monitor that verification and validation of the outputs of the life cycle processes
are performed to determine conformance to specified requirements.
• Perform process evaluations.
o Evaluate project life cycle processes for conformance.
o Evaluate tools and environments that support or automate the process for
conformance.
o Evaluate supplier processes for conformance to process requirements.
• Manage QA records and reports.
o Create records and reports related to QA activities.
o Maintain, store, and distribute records and reports.
o Identify incidents and problems associated with product, service, and process
evaluations.
• Treat incidents and problems. This activity consists of the following tasks:
o Record, analyze, and classify incidents.
o Identify selected incidents to associate with known errors or problems.
o Record, analyze and classify problems.
o Identify root causes and treatment of problems where feasible.
o Prioritize treatment of problems (problem resolution) and track corrective actions.
o Analyze trends in incidents and problems.
o Identify improvements in processes and products that may prevent future
incidents and problems.
o Inform designated stakeholders of the status of incidents and problems.
o Track incidents and problems to closure.

IEEE 730 for SQA Processes
Establishes the requirements for the planning and implementation of SQA activities for a
software project (https://www.yegor256.com/pdf/ieee-730-2014.pdf)
The IEEE 730 is structured as follows :
• Clause 1 - Describes the scope, purpose and an introduction.
• Clause 2 - Identifies normative references used by the IEEE 730.
• Clause 3 - Defines the terms, abbreviations and acronyms.
• Clause 4 - Describes the context for the SQA processes and SQA activities, and covers
expectations for how this standard will be applied.
• Clause 5 - Specifies the processes, activities and SQA tasks. Sixteen activities grouped
into three categories are described: implementation of SQA process, product assurance,
and process assurance.
• Twelve informative annexes A–L, where annex C provides guidelines for creating a
SQAP.

Session 3
Module 2: Quality Models and Management: CMMI, ITIL
Contents
• Frameworks
• CMMI-Development
• ITIL Framework
• Other Frameworks
• ISO/IEC 20000 Standard
• CobiT Process - Best practices for IT governance.
• ISO/IEC 27000 Family of Standards for Information Security
• ISO/IEC 29110 Standards and Guides for Very Small Entities
CMM
Capability Maturity Model (CMM®).
• Developed at the request of the U.S. Department of Defense (DoD) by the Software Engineering Institute (SEI) at Carnegie Mellon University.
• Primarily focused on software engineering practices
• Provides a road map of engineering practices to improve the performance of the
development, maintenance and service provisioning processes.
Capability Maturity Model Integration (CMMI):
CMMI is an evolved version of CMM and is a framework that encompasses not only software
engineering but also various other domains, including systems engineering, project management,
and service delivery.
CMMI – Key Aspects
Key aspects of CMMI include:
1. Continuous and Staged Representations: CMMI supports two representations –
• Continuous – Allows organizations to select and implement individual process areas based on their specific needs.
• Staged - Requires organizations to follow a predefined sequence of maturity
levels.
2. Process Areas: Defines specific process areas at each maturity level
• Examples include Requirements Management, Project Planning, and
Configuration Management.
3. Appraisals: CMMI appraisal methods such as SCAMPI (Standard CMMI Appraisal Method for Process Improvement) assess:
• Process maturity and
• Adherence to CMMI practices.
4. Evolution: CMMI has evolved beyond its initial focus on software engineering to cover
various disciplines.
• CMMI for Development,
• CMMI for Services, and
• CMMI for Acquisition

CMMI – Appraisal
CMMI Institute authorizes and oversees organizations to provide official CMMI training and
appraisal services.
The CMMI Institute is a subsidiary of ISACA (originally the Information Systems Audit and Control Association), an international professional association focused on IT governance.
Appraisal Sponsor Organizations (ASOs): Organizations authorized by the CMMI Institute.
They have certified lead appraisers and are responsible for ensuring the quality and integrity of
the appraisal process.
CMMI Institute Partner Organizations: These organizations collaborate with the CMMI
Institute to deliver official CMMI training and appraisal services.
Certified Lead Appraisers (CLAs): Individuals who have undergone CMMI Institute-approved
training and certification processes to become certified lead appraisers.

CMMI
The five maturity levels of CMM were:
Level 1 - Initial:
 Processes are ad-hoc and chaotic.
 Processes are unpredictable, poorly controlled, and reactive.
 There is no standardization, and success depends on individual heroics and efforts.
 Organizations at this level often experience high variation in process performance
 Frequent crisis management.
Level 2 - Repeatable: Basic project management processes are established.
 Processes are defined at the project level, and there is consistency in project execution.
 Processes are documented and communicated.
 Project performance is monitored, and corrective actions are taken.
Level 3 - Defined: Processes are well-defined, documented, and standardized across the
organization
 Standardized and documented processes at the organizational level.
 Emphasis on proactive management.
 Process metrics are collected and used for process improvement.
 Implement a culture of continuous improvement.
Level 4 - Managed: Detailed measurements of processes and their effectiveness are
collected.
 Quantitative objectives are set for process performance.
 Process performance is measured and controlled quantitatively.
 Variability in process performance is reduced.
 Identify and address the root causes of process variation.
Level 5 - Optimizing: Continuous process improvement is institutionalized.
 Focus on innovation, optimization, and process improvement at both the organizational
and project levels.
 Continuous improvement is part of the organizational culture.
 Process improvements are based on quantitative feedback.
 Best practices are identified and shared across the organization.

CMMI for Development

CMMI-Dev Model L-1


Maturity Level 1: Initial
 Processes are usually ad hoc and chaotic.
 Success in these organizations depends on the competence and heroics of the people
 Organizations abandon their processes in a time of crisis and are unable to repeat their successes.

CMMI-Dev Model L-2


Maturity Level 2: Managed
 Processes are planned and executed in accordance with policy.
 Able to produce controlled outputs. Practices are retained during times of stress.
 Process areas:
1. Requirements management: manage requirements of the project’s products and
product components.
2. Project planning: Establish and maintain plans that define project activities.
3. Project monitoring and control: Provide an understanding of the project’s
progress.
4. Supplier agreement management: Manage the acquisition of products and
services from suppliers.
5. Measurement and analysis: Develop and sustain a measurement capability used
to support management information needs.
6. Process and product quality assurance: Provide staff and management with
objective insight into processes and associated work products. Identify non-compliance.
7. Configuration management: Maintaining Baselines and controlling changes.

CMMI-Dev Model L-3
Maturity Level 3: Defined
 Processes are well characterized and understood, and are described in standards,
procedures, tools, and methods.
 Processes are improved over time.
 Processes are used to establish consistency across the organization.
 Process areas include:
1. Requirements development: Elicit, analyze, and establish customer, product,
and product component requirements.
2. Technical solution: Select, design, and implement solutions to requirements.
3. Product integration: Assemble the product from the product components.
4. Verification: Ensure that selected work products meet their specified
requirements
5. Validation: Demonstrate that a product or product component fulfills its intended
use.
6. Organizational process focus: Plan, implement, and deploy organizational
process improvements.
7. Organizational process definition: Establish and maintain organizational
process, standards, rules and guidelines for teams.
8. Organizational training: Develop skills and knowledge of people.
9. Integrated project management: Manage Project, Include stakeholders.
10. Risk management: Identify potential problems before they occur.
11. Decision analysis and resolution: Analyze possible decisions using a formal
evaluation process.
CMMI-Dev Model L-4 & 5
Maturity Level 4: Quantitatively managed
 The organization and projects establish quantitative objectives for quality and process
performance and use them as criteria in managing projects.
 Quality and process performance is understood in statistical terms and is managed
throughout the life of projects.
 Process areas include:
1. Organizational process performance - Establish and maintain a quantitative
understanding of the performance of selected processes.
2. Quantitative project management - Quantitatively manage the project to achieve
the project’s established quality and process performance objectives.
Maturity Level 5: Optimizing
 An organization continually improves its processes based on a quantitative understanding
of its business objectives and performance needs. Process areas -
1. Organizational performance management - Proactively manage the organization’s
performance to meet its business objectives.
2. Causal analysis and resolution - Identify causes of selected outcomes and take
action to improve process performance.

ITIL Framework
 ITIL stands for Information Technology Infrastructure Library
 A set of best practices for IT service management (ITSM)
 Created in Great Britain based on good management practices for computer services.
 For IT services - Backup copies, Recovery, Computer Administration,
Telecommunications, Production data, etc.
 Service Delivery processes focus more on longer-term planning and management than Service Support processes.
 The main objective is to ensure that the IT infrastructure meets the business requirements
of the organization.
ITIL – Key components
Key Components (Phases)
 Service Strategy
1. Focuses on understanding and aligning IT services with the business strategy.
2. Defines the overall vision, mission, and goals for IT services.
 Service Design
1. Design of new or changed services, including processes, architectures, and
documentation.
2. Ensures that IT services are designed to meet business requirements and
expectations.
 Service Transition
1. Focuses on transitioning new or changed services into the live environment.
2. Involves change management, release and deployment management, and
knowledge management.
 Service Operation
1. Concerned with the day-to-day operation of IT services.
2. Includes incident management, problem management, event management, request
fulfillment, access management.
 Continual Service Improvement (CSI)
1. A core principle throughout the ITIL lifecycle.
2. Involves measuring, monitoring, and improving IT services and processes over time.
ITIL Processes
ITIL describes the Service Desk function and the following five Service Support processes:
1. Incident management: Ensures all IT issues (“incidents”) are logged, and tracked to
resolution without being lost, ignored, or forgotten about.
2. Problem management: To reduce the likelihood and impact of incidents by identifying
actual and potential causes of incidents and managing workarounds and known errors.
3. Configuration management: Manage and control assets that make up an IT service.
4. Change management: To ensure that changes are recorded, evaluated, authorized,
prioritized, planned, tested, implemented, documented, and reviewed in a controlled
manner.
5. Commissioning/Release management: Plan, schedule, and control releases from end to
end.

ITIL describes the following five processes for Service Delivery:
1. Service level management: Maintaining the service catalogue; agreeing and maintaining Service Level Agreements (SLAs).
2. Financial management of IT services: Accounting, budgeting, and charging services so
that the organization covers costs and generates profits for those services.
3. Capacity management: Planning capacity (Software, Hardware, Human Resources) to
provide optimum and cost-effective provision of IT Services based on current and future
demand.
4. IT service continuity management: Defines and plans all measures and processes for
unpredicted events of disaster based on regular analysis of vulnerabilities, threats and
risks.
5. Availability management: Sustained availability of the IT infrastructure as per agreed
SLA.

ITIL History and Evolution
1. ITIL – 1980’s
2. ITIL V2 – 2000/2001
3. ITIL v3 – 2007
4. ITIL V4 – 2019

ITIL Certifications
 ITIL Foundation:
– Entry-level certification providing an understanding of ITIL concepts and
terminology.
 ITIL Practitioner:
– Focuses on applying ITIL principles in a practical context.
 ITIL Intermediate:
– Offers two paths (Service Lifecycle and Service Capability) with multiple modules
covering specific areas of ITIL.
 ITIL Expert:
– Achieved by accumulating credits from both Foundation and Intermediate levels.
 ITIL Master:
– The highest level of ITIL certification, demonstrating the ability to apply ITIL
principles in real-world situations.

Other Standards
 ISO/IEC 20000 Standard
o The ISO/IEC 20000 standard is the first ISO standard dedicated to managing IT
services and is based on ITIL Framework
 CobiT Process (Control Objectives for Information and Related Technology)
o Repository of best practices for IT governance established by ISACA (IT
auditors)
o Harmonized with the ITIL reference, the PMBOK® Guide from the Project
Management Institute [PMI 13] as well as the ISO 27001 and ISO 27002
standards
 ISO/IEC 27000 Family of Standards for Information Security
o Preservation of confidentiality, integrity and availability of information
 ISO/IEC 29110 Standards and Guides for Very Small Entities
o For VSEs, namely enterprises, organizations, departments or projects with up to
25 people.

SQA – Standards and Resources


 ISO/IEC/IEEE 12207:2017
ISO/IEC – International Organization for Standardization / International Electrotechnical Commission
Software life cycle processes standard
https://www.iso.org/standard/63712.html
Earlier version: ISO/IEC 12207:2008

 IEEE 730-2014 (Institute of Electrical and Electronics Engineers (IEEE))


IEEE Standard for Software Quality Assurance Processes
https://standards.ieee.org/ieee/730/5284/

 CMMI Development (Capability Maturity Model Integration)


https://cmmiinstitute.com/cmmi/dev

 ISO Guide to the Software Engineering Body of Knowledge (SWEBOK)


https://www.computer.org/education/bodies-of-knowledge/software-engineering

Session 4
Module 3: Fundamentals of SQA: Software Quality Attributes
Contents
• Software Quality Factors
• Quality Attributes
1. Functional Suitability
2. Performance Efficiency
3. Compatibility
4. Usability
5. Reliability
6. Security
7. Maintainability
8. Portability
Quality Models – McCall [MCC 77]
• McCall [MCC 77] Quality Model
• Perspective → Quality Factor → Quality criteria (measurable)

ISO 25010

1. Functional Suitability
Functional Suitability: Capability of a product to provide functions that meet stated and implied
needs of intended users when it is used under specified conditions.
 Functional completeness: Capability of a product to provide a set of functions that
covers all the specified tasks and intended users’ objectives.
• All functionality implemented as per requirements. No requirements should be
left out.
• To verify that each functionality is present in the system, every requirement
should be traceable through Testing.
• Data should not be missing. Example in reports, etc.
 Functional Correctness: Capability of a product to provide accurate results when used
by intended users.
• No inaccuracies in calculations; no rounding-off errors (see the sketch at the end of this section).
• Data should be up to date – no stale data; the duration of data should be accurate.
• Standards for coding and documenting the software system.
 Functional Appropriateness: Capability of a product to provide functions that facilitate
the accomplishment of specified tasks and objectives.
• A user is only presented with the necessary steps to complete a task, excluding
any unnecessary steps.
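
A minimal Python sketch of the rounding-off concern noted under functional correctness: binary floating point introduces small inaccuracies in decimal calculations, which decimal arithmetic avoids (a common remedy for monetary computations; the example values are illustrative).

    from decimal import Decimal

    print(0.1 + 0.2)                        # 0.30000000000000004 – a rounding error
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3 – exact decimal arithmetic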

2. Performance Efficiency
Performance Efficiency: Capability of a product to perform its functions within specified time
and throughput parameters and be efficient in the use of resources under specified conditions.
– Resources can be CPU, memory, storage, and network devices.
– Resources can include other software products, the software and hardware
configuration of the system, energy, and materials (e.g. print paper, storage media).
 Time Behaviour: Capability of a product to perform its specified function under
specified conditions so that the response time and throughput rates meet the
requirements.
• Time it takes to perform an operation (a timing sketch follows this section).
• Refresh rate of screen.
• Time taken for background operations.
 Resource Utilization: Capability of a product to use no more than the specified amount
of resources to perform its function under specified conditions.
• Memory usage
• Disk usage.
• Network Bandwidth usage.
• In a multi-user environment, one single application should not consume all
the resources
 Capacity: Capability of a product to meet requirements for the maximum limits of a
product parameter
• Parameters can include the number of items that can be stored, the number
of concurrent users, the communication bandwidth, the throughput of
transactions, and the size of a database.
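
A minimal Python sketch of checking a time-behaviour requirement: measure an operation's response time and compare it against a specified budget (the 200 ms budget and the stand-in operation are illustrative assumptions):

    import time

    RESPONSE_TIME_BUDGET_S = 0.200   # assumed requirement: respond within 200 ms

    def operation():
        time.sleep(0.05)             # stand-in for the function under test

    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    print(f"elapsed = {elapsed:.3f} s, within budget: {elapsed <= RESPONSE_TIME_BUDGET_S}")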

3. Compatibility
Compatibility - Capability of a product to exchange information with other products, and/or to
perform its required functions while sharing the same common environment and resources.
 Co-existence - Capability of a product to perform its required functions efficiently while
sharing a common environment and resources with other products, without detrimental
impact on any other product.
• Multiple apps working simultaneously on the same operating system
• Sharing same Memory, Disk, etc
 Interoperability - Capability of a product to exchange information with other products
and mutually use the information that has been exchanged
• Information is meaningful data; and information exchange includes
transformation of data for exchange.
• There can be protocols for data conversion.

4. Usability
Usability - Degree to which a product or system can be used by specified users to achieve
specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
 Appropriateness recognizability - Degree to which users can recognize whether a
product or system is appropriate for their needs.
• Time needed to understand the software functionality.
• Finding an Excel formula and understanding its usage for a particular task.
 Learnability - Degree to which a product or system can be used by specified users to
achieve specified goals of learning to use the product or system with effectiveness,
efficiency, freedom from risk and satisfaction in a specified context of use.
• Time needed to learn to use
• Similar operation under same menu.
 Operability - Attributes that make it easy to operate and control.
• Number of steps needed to achieve a certain functionality.
 User error protection - Protects users against making errors.
• Error messages and warnings for wrong usage (Warning message before deleting
any file)
• Completely stop users from performing certain functions that could send the system into an unusable state (e.g., system files cannot be deleted, only viewed).
 User interface aesthetics - User interface enables pleasing and satisfying interaction for
the user.
• Configurable Color themes
• Rounded button Vs Square buttons Vs Square buttons with rounded corners
 Accessibility - Degree to which a product or system can be used by people with the
widest range of characteristics and capabilities to achieve a specified goal in a specified
context of use.
• System can be used by, “Specially Abled” people
• Big font size for old age people
• Voice conversion for written text.

5. Reliability
Reliability: Degree to which a system, product or component performs specified functions under
specified conditions for a specified period of time.
• Deals with failure to provide service.
• Failure rate could refer to a component or system as a whole.
 Maturity: Degree to which a system meets needs for reliability under normal operation.
• MS Teams meeting - Voice clarity under normal circumstance.
 Availability: Degree to which a system, product or component is operational and accessible when required for use (a worked availability calculation follows this section).
• Controlling downtime caused by maintenance, breakdowns, etc.
 Fault tolerance: Degree to which a system, product, or component operates as intended
despite the presence of hardware or software Faults.
• RAID Configurations: Read and write across multiple disks. Mirroring data sets
across multiple disks, which makes RAID systems more fault tolerant by creating
built-in data redundancy.
 Recoverability: Degree to which, in the event of an interruption or a failure, a product or
system can recover the data directly affected and re-establish the desired state of the
system.
• Power goes off during a meeting and is back in a minute; the meeting is restarted automatically, and people do not have to join the meeting again.
• A slight disruption in Wi-Fi; the status of the meeting does not change.
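
A small worked example (Python) of the standard availability calculation, Availability = MTBF / (MTBF + MTTR), where MTBF is mean time between failures and MTTR is mean time to repair; the hour figures are illustrative assumptions:

    mtbf_hours = 720.0   # assumed mean time between failures (~30 days)
    mttr_hours = 4.0     # assumed mean time to repair

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"availability = {availability:.4%}")   # ~99.45%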

6. Security
Security - Degree to which a product or system protects information and data so that persons or
other products or systems have the degree of data access appropriate to their types and levels of
authorization.
 Confidentiality: Degree to which a product or system ensures that data are accessible
only to those authorized to have access.
• Only authorized users can access the data
• Login/Pwd for users.
• E.g. People not in this course cannot access any material related to this course
 Integrity: Degree to which a system, product or component prevents unauthorized access
to, or modification of, computer programs or data.
• Only authorized users can perform certain operations on the data.
• E.g. Only instructor can add/delete in Files section
 Non-repudiation - Degree to which actions or events can be proven to have taken place
so that the events or actions cannot be repudiated later.
• E.g., if a file gets deleted in the system, it can be proven that the file did get deleted.
• File versioning maintains all the changes made to the file.
 Accountability - Degree to which the actions of an entity can be traced uniquely to the
entity.
• Logs and traces in the system to track as to who performed any action
 Authenticity - Degree to which the identity of a subject or resource can be proved to be
the one claimed.
• Can be done using certificates and encryption.
• No change can be made to the document without detection, and it can be proven who made it (see the digest sketch below).
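
A minimal Python sketch (standard library only) of an integrity check: a document's SHA-256 digest is recorded at approval time, and any later modification is detected by recomputing and comparing it. The document contents here are illustrative.

    import hashlib

    document = b"approved design document v1"
    recorded_digest = hashlib.sha256(document).hexdigest()   # stored when approved

    tampered = b"approved design document v1 (edited)"
    intact = hashlib.sha256(tampered).hexdigest() == recorded_digest
    print(f"integrity intact: {intact}")   # False: the modification is detected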

7. Maintainability
Maintainability: This characteristic represents the degree of effectiveness and efficiency with
which a product or system can be modified to improve it, correct it or adapt it to changes in
environment, and in requirements.
 Modularity: Degree to which a system or computer program is composed of discrete
components such that a change to one component has minimal impact on other
components.
• Easy to do change management.
• Object-oriented programming.
 Reusability: Degree to which an asset can be used in more than one system, or in
building other assets.
• Libraries, DLLs
 Analyzability: Degree of effectiveness and efficiency with which it is possible to assess
the impact on a product or system of an intended change to one or more of its parts, or to
diagnose a product for deficiencies or causes of failures, or to identify parts to be
modified.
• Helps in bug fixing, new feature development
 Modifiability: Degree to which a product or system can be effectively and efficiently
modified without introducing defects or degrading existing product quality.
• Able to get confidence in quality with minimal regression testing
 Testability: Degree of effectiveness and efficiency with which test criteria can be
established for a system, product or component and tests can be performed to determine
whether those criteria have been met.
• Design test cases corresponding to requirements and able to test the system using
those Test Cases
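
A minimal Python sketch of testability in practice: a test case derived directly from a stated requirement, so the pass/fail criterion is explicit (the requirement ID and function are hypothetical):

    # Hypothetical requirement REQ-042: a discount must never yield a negative price.
    def apply_discount(price, percent):
        return max(0.0, price * (1 - percent / 100))

    def test_req_042_no_negative_price():
        # Verifiable criterion taken directly from the requirement.
        assert apply_discount(100.0, 150.0) == 0.0

    test_req_042_no_negative_price()
    print("REQ-042 test passed")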
8. Portability
Portability - Degree of effectiveness and efficiency with which a system, product or component
can be transferred from one hardware, software or other operational or usage environment to
another. This characteristic is composed of the following sub-characteristics:
 Adaptability: Degree to which a product or system can effectively and efficiently be
adapted for different or evolving hardware, software or other operational or usage
environments.
• New Driver for newer Printer, Audio device, etc
• Newer version of Apps corresponding to newer operating system
 Install-ability: Degree of effectiveness and efficiency with which a product or system
can be successfully installed and/or uninstalled in a specified environment.
• New software
• Patches, New Features
• Service Packs
• Newer versions.
 Replaceability: Degree to which a product can replace another specified software
product for the same purpose in the same environment.
• IE, Chrome, Firefox
• Mail clients
Session 5
Module 4: Deep Diving SQA: Software Testing Techniques
Software Testing Fundamentals, Verification and Validation
Contents
 Software Testing Fundamentals
 Software Verification and Validation
 Types of Testing
 Black Box Testing
 White Box Testing
 Test Levels and Types
 Unit Testing
 Integration Testing
 System Testing
Software Testing Fundamentals
Definition –
• Testing is the process of executing a program with the intention of finding errors [Myers]
• Testing is the process of exercising or evaluating a system or system component by
manual or automated means to verify that it satisfies specified requirements.
[IEEE 83a]
[IEEE Std 610.12 (IEEE, 1990)] -
• The process of operating a system or component under specified conditions, observing or
recording the results, and making an evaluation of some aspect of the system or
component.
• The process of analyzing a software item to detect the differences between existing and
required conditions (that is, bugs) and to evaluate the features of the software item.
Software testing is a formal process carried out by a specialized testing team in which a
software unit, several integrated software units or an entire software package are examined by
running the programs on a computer. All the associated tests are performed according to
approved test procedures on approved test cases.
Software Testing – Why
The Technical Case:
 Competent developers are not infallible.
 The implications of requirements are not always foreseeable.
 The behavior of a system is not necessarily predictable from its components.
 Languages, databases, user interfaces, and operating systems have bugs that can cause
application failures.
 Reusable classes and objects may have underlying issues.
The Business Case:
 If you don’t find bugs your customers or users will.
 Post-release debugging is the most expensive form of development.
 Buggy software hurts operations, sales, and reputation.
 Buggy software can be hazardous to life and property.
 Cost of fixing is higher if bugs are found later in the cycle.
Software Testing – Who
Who should be doing the Testing?
 Development Team.
 Product Management Team
 Project Management Team
 Dedicated Test Team.
 Leadership (Alpha Testing, Beta Testing)
 Customers
Effectively, testing is a team effort and an ongoing process throughout the lifecycle of product development.
Software Testing – How much
How much testing should be done?
 Studies show that software testing constitutes:
• about 40% of the overall effort and
• 25% of the overall software budget.
 Testing is never 100% exhaustive.
 A bug-free product is a myth.
 Aim to reach an acceptable level of quality.
 There is no single standard way to measure the testing process.
• Metrics can be computed at the organizational, process, project, and product
levels.
• Each set of measurements has its value in monitoring, planning, and control.
Testing is a trade-off between budget, time, and quality.
Software Testing – Error, Defect, Failure
Error: It is a human mistake which can be caused in any phase of SDLC (Requirements, Design,
Coding, Testing)
Defect: The manifestation of an error in a work product is called a defect. Defects are also
called faults or bugs. A defect is a variance between the expected and the actual result.
Failure: A failure occurs when the code containing the defect is executed and the system
behaves incorrectly.
Error: Its main causes are:
 Negligence: Carelessness often leads to errors
 Miscommunication: An unclear feature specification or its improper interpretation.
 Inexperience: Inexperienced developers often miss out on essential details.
 Complexity: Intricate algorithms can cause mistakes in coding logic.
Defect: The main reasons defects escape detection are:
 Missing test cases
 Test cases not executed by the tester
 Improper execution of test cases
 Code changed after testing
Example – Transfer of funds from one account to another
 Error: The developer codes the transfer of funds without checking the minimum balance.
 Bug/Defect: The code allows a transfer even when the account lacks the required minimum balance.
 Failure: At runtime, funds are actually transferred from an account that is below the minimum balance.
Software Testing – Test Case
Test Case – A test case is a set of actions performed on a system to determine if it satisfies
software requirements and functions correctly.
• Test Case ID
• Purpose
• Preconditions
• Inputs
• Expected Outputs
• Postconditions
• Execution History
• Date
• Result
• Version
• Run By
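As an illustration, the funds-transfer example above can be captured as an automated test case. A minimal sketch in Python, where the transfer_funds function, its minimum-balance rule, and the concrete values are all hypothetical:

```python
import unittest

# Hypothetical system under test: rejects transfers that would drop the
# account below a minimum balance (assumed here to be 500).
def transfer_funds(balance, amount, minimum_balance=500):
    if balance - amount < minimum_balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TestTransferFunds(unittest.TestCase):
    # Test Case ID: TC-001 | Purpose: verify the minimum-balance rule
    # Precondition: account balance is 1000; minimum balance is 500
    def test_transfer_rejected_below_minimum_balance(self):
        # Input: transfer 600 from a balance of 1000
        # Expected output: the transfer is rejected
        with self.assertRaises(ValueError):
            transfer_funds(balance=1000, amount=600)

    def test_transfer_accepted_within_balance(self):
        # Input: transfer 400 from a balance of 1000
        # Expected output: new balance 600 (postcondition: balance >= minimum)
        self.assertEqual(transfer_funds(balance=1000, amount=400), 600)

if __name__ == "__main__":
    unittest.main()
```

Fields such as Execution History, Date, Result, and Run By would normally be recorded in the test management tool rather than in the script itself.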
Software Testing – Terms and Definitions
 Test Suite – A collection of test scripts or test cases.
 Test Script - The step-by-step instructions that describe how a test case is to be executed.
 Testware: Includes all testing documentation. For example, test specifications, test scripts,
test cases, test data, the environment specification.
 Test log: A chronological record of all relevant details about the execution of a test.
 Test report: A document describing the conduct and results of testing carried out for a
system.
Principles of Testing
1. Testing time and resources are limited: Avoid redundant tests.
2. Exhaustive testing is impossible. Find a middle path. Make a decision.
3. Use effective resources to test: use the most suitable tools, procedures, and individuals to
conduct the tests.
4. Use a separate testing team; avoid relying solely on the programmers/development team.
5. Begin “in small” and progress toward testing “in large.”
6. All customer requirements should have Test Cases and Test Scenarios.
7. Prepare test reports including test cases and test results to summarize the results of
testing.
8. Advance test planning is a must and should be updated in a timely manner. Start as early
as the requirement gathering phase.
Software Testing Verification Vs Validation
As per IEEE definition(s):
Software Verification:
• “It is the process of evaluating a system or component to determine whether the products
of a given development phase satisfy the conditions imposed at the start of that phase.”
OR
• “It is the process of evaluating, reviewing, inspecting and doing desk checks of work
products such as requirement specifications, design specifications and code.” OR
• “It is a human testing activity as it involves looking at the documents on paper.”
Software Validation:
• “It is defined as the process of evaluating a system or component during or at the end of
the development process to determine whether it satisfies the specified requirements. It
involves executing the actual software. It is a computer-based testing process.”
Software Testing QA Vs QC
Software Testing Verification and Validation (V&V) Techniques
Together, verification and validation form the complete testing process.
Software Testing Verification and Validation (V&V) Planning
V&V Plan or simply Test Plan. IEEE83b documents the guidelines for the contents of system,
software, build, and module test plans
1. Identification of Goals
2. Selection of V&V Techniques –
• Requirement Phase: Technical Reviews, Prototyping, and Simulations
• Specifications Phase: Technical reviews, requirements tracing, prototyping, and
simulations
• Design Phase: Technical reviews, Requirements tracing, Prototyping, Simulation,
HLD Review, LLD Reviews.
• Implementation Phase: Technical reviews, requirements tracing, testing, and
proof of correctness.
• Maintenance Phase: As in Design Phase, Regression Testing
• Technical Reviews Include: Walk Throughs and Inspections.
3. Organizational Roles and Responsibilities: Dev, Test, SQA, Contractors.
4. Integrating V&V Approaches
5. Problem/Bug Tracking/Management
6. Test Execution Tracking: Test Runs, Resources Used, Time Taken, Bugs Found, etc
7. Assess: quality of the product, processes, and techniques.
Software Technical Reviews
Review Methodologies include –
1. Walkthroughs
2. Inspections
3. Audit – Review process by external authority.
Inspection
Inspection –
• Planning: The moderator plans the inspection. The inspection team is given relevant
materials, and after that, the team schedules the inspection meeting and works together.
• Overview meeting: Brief summary of the project and its code.
• Preparation: Each inspector examines the materials individually, guided by inspection checklists.
• Inspection meeting: Inspectors point out flaws section by section.
• Rework: Fixing of issues found.
• Follow-up: Meeting with the team to discuss the reviewed and fixed code.
STANDARD FOR SOFTWARE TEST DOCUMENTATION (IEEE829)
1. Test Plan
• Test-plan Identifier
• Introduction - Features to be tested, Reference to other documents
• Test Items
• Features to be Tested
• Approach
• Item Pass/Fail Criteria
• Suspension Criteria and Resumption
• Test Deliverables
• Testing Tasks
• Environmental Needs
• Responsibilities
• Staffing and Training Needs
• Schedule
• Risks and Contingencies
• Approvals: Specifies the persons who must approve this plan
2. Test Case Specification
• Test-case Specification Identifier
• Test Items
• Input Specifications
• Output Specifications
• Environmental Needs
3. Test-Incident Report (Bug Report)
• Bug-Report Identifier
• Summary
• Bug Description
• Impact: Priority, Severity (urgent, high, medium, low).
4. Test-Summary Report
5. Inspection Checklist for Test Plans
6. Inspection Checklist for Test Cases
Software Testing White-Box Testing and Black-Box
Software Test Classification
1. White box (structural) Testing: Examines internal calculation paths in order to identify
bugs. Also called structural, clear-box, glass-box, or open-box testing.
IEEE definition
• Testing that takes into account the internal mechanism of a system or
component

2. Black box (functionality) Testing: Identifies bugs only through the software's erroneous
outputs, as revealed by its malfunctioning; it does not consider the internal paths of
calculation and processing.
IEEE definitions
• (1) Testing that ignores the internal mechanism of a system or component and
focuses solely on the outputs generated in response to selected inputs and
execution conditions.
• (2) Testing conducted to evaluate the compliance of a system or component with
specified functional requirements.
Software Testing Strategies
Software Testing Strategies
 Positive Testing: Test with legal Test Data to observe Normal behavior
 Negative Testing: Test with illegal or abnormal data and observe the behavior.
 Big Bang Testing (Entire System)
 Unit testing
 Integration Testing
 System Testing
 Top-Down ; Bottom-Up
 Stub Testing: a stub stands in for an unavailable lower-level module (top-down methodology)
 Driver Testing: a driver stands in for an unavailable upper-level module (bottom-up
methodology); see the sketch after this list
 Static Testing
o Code Reviews, Code Inspections, Walkthroughs, and Software Technical Reviews
(STRs)
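A minimal Python sketch of both ideas, assuming a hypothetical two-layer design in which generate_report (upper module) depends on fetch_records (lower module):

```python
# Hypothetical layered design: generate_report (upper) calls fetch_records (lower).

def fetch_records_stub(account_id):
    # Stub: canned replacement for the unavailable lower-level module,
    # so the upper module can be tested top-down.
    return [{"account": account_id, "amount": 100}]

def generate_report(account_id, fetch=fetch_records_stub):
    # Upper-level module under test; 'fetch' defaults to the stub.
    total = sum(r["amount"] for r in fetch(account_id))
    return f"Account {account_id}: total {total}"

def fetch_records(account_id):
    # Real lower-level module (simplified), tested bottom-up via a driver.
    return [{"account": account_id, "amount": 250}]

def driver():
    # Driver: minimal stand-in for the missing upper-level caller.
    assert fetch_records("ACC-1")[0]["amount"] == 250

if __name__ == "__main__":
    print(generate_report("ACC-1"))  # exercises the upper module via the stub
    driver()                         # exercises the lower module via the driver
```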
Unit, Integration, System Testing
 Unit Testing
o Individual module is tested in isolation from the Rest of the Software.
 Integration Testing
o One or more modules are tested together
o Interactions between the modules is tested
o Top-Down, Bottom-Up, Sandwich (Bi-Directional Integration), Big-Bang.
 System Testing
o Testing on complete integrated product including H/W and S/W
o Includes functional and Non-functional tests.
Session 6
Module 5: Mastering SQA: Test Execution and Automated Testing
Black Box Testing Methodologies
Contents
 Test Design Techniques
 Combinatorial Testing
 Boundary Value Analysis
 Equivalence Class Partitioning
Need of Combinatorial Testing
Possible test cases for 10 binary input parameters:
2^10 = 1,024

Model the input


 Fault Model (Interaction Faults)
– A t-way interaction fault is a fault that is triggered by a certain combination of t
input values.
– A simple fault is a t-way fault where t = 1
– A pairwise fault is a t-way fault where t = 2.
– In practice, a majority of software faults consist of simple and pairwise faults.
 Unique combinations
– Latin Squares
– Pairwise Testing
– Orthogonal Array
Model the input
• P1 = (A,B)
• P2 = (1,2,3)
• P3 = (X, Y)
• Total Test Cases – 12
• Simple Fault - Each parameter value must be covered in at least one test case
Model the input
• P1 = (A,B)
• P2 = (1,2,3)
• P3 = (X, Y)
• Total Test Cases – 12
• 2-Way Fault - Every combination of values of two parameters is covered in at least one
test case.
• Also called pairwise coverage; a coverage-checking sketch follows.
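A small Python sketch for the P1/P2/P3 example above: it enumerates every required 2-way combination and checks that a candidate set of 6 tests (half of the exhaustive 12) covers all of them. The particular 6-test set is an assumption chosen by hand:

```python
from itertools import combinations, product

params = {"P1": ["A", "B"], "P2": ["1", "2", "3"], "P3": ["X", "Y"]}
names = list(params)

# Candidate test set: 6 tests instead of the full 2*3*2 = 12.
tests = [
    ("A", "1", "X"), ("A", "2", "Y"), ("A", "3", "X"),
    ("B", "1", "Y"), ("B", "2", "X"), ("B", "3", "Y"),
]

def pairs_covered(test):
    # All (parameter, value) pairs exercised together by one test.
    return {frozenset([(names[i], test[i]), (names[j], test[j])])
            for i, j in combinations(range(len(names)), 2)}

# Every 2-way combination that must be covered: 6 + 4 + 6 = 16 pairs.
required = set()
for i, j in combinations(range(len(names)), 2):
    for vi, vj in product(params[names[i]], params[names[j]]):
        required.add(frozenset([(names[i], vi), (names[j], vj)]))

covered = set().union(*(pairs_covered(t) for t in tests))
print(f"{len(covered & required)}/{len(required)} pairs covered")  # 16/16 here
```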

Model the input


• P1 = (A,B)
• P2 = (1,2,3)
• P3 = (X, Y)
• Total Test Cases – 12
Base choice coverage –
• For each parameter, one of the possible values is designated as a base choice of the
parameter
• A base test is formed by using the base choice for each parameter
• Subsequent tests are chosen by holding all base choices constant, except for one, which is
replaced using a non-base choice of the corresponding parameter (a sketch follows).
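A minimal Python sketch of base choice coverage for the same P1/P2/P3 example; the designated base choices (A, 1, X) are an assumption:

```python
params = {"P1": ["A", "B"], "P2": ["1", "2", "3"], "P3": ["X", "Y"]}
base = {"P1": "A", "P2": "1", "P3": "X"}  # designated base choices

tests = [tuple(base[p] for p in params)]   # the base test
for p in params:                           # vary one parameter at a time
    for v in params[p]:
        if v != base[p]:
            tests.append(tuple(v if q == p else base[q] for q in params))

for t in tests:
    print(t)
# 1 base test + (1 + 2 + 1) single-parameter variations = 5 tests in total
```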
3-Way coverage
Need of Combinatorial
OS - XP, OS X, RHL (3)
Browser - IE, Firefox (2)
Protocol - IPV4, IPV6 (2)
CPU - Intel, AMD (2)
DBMS - Sybase, Oracle, MySQL (3)
Total possible test cases: 3 × 2 × 2 × 2 × 3 = 72 (enumerated in the sketch below)
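A short Python sketch that enumerates these 72 configurations; a pairwise tool would then typically cover all 2-way interactions with a much smaller set (on the order of ten tests for this example):

```python
from itertools import product

factors = {
    "OS":       ["XP", "OS X", "RHL"],
    "Browser":  ["IE", "Firefox"],
    "Protocol": ["IPV4", "IPV6"],
    "CPU":      ["Intel", "AMD"],
    "DBMS":     ["Sybase", "Oracle", "MySQL"],
}

# Exhaustive testing: Cartesian product of all factor values.
all_configs = list(product(*factors.values()))
print(len(all_configs))  # 3*2*2*2*3 = 72 configurations
```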
Pairwise Testing
Pairwise Testing is a software testing technique in which, for each pair of input parameters,
every possible discrete combination of values for that pair is covered by at least one test
case. Exhaustive testing of all parameter combinations is often impractical; covering all
pairs keeps the test set small while still exercising the value interactions that trigger
most faults.
Pairwise Testing (Example)
Pairwise Testing

Advantages of Pairwise Testing


 Reduces the number of execution of test cases.
 Increases interaction coverage to nearly one hundred percent.
 Increases the defect detection ratio.
 Takes less time to complete the execution of the test suite
 Reduces the overall testing budget for a project.

Disadvantages of Pairwise Testing


 Not beneficial if the values of the variables are inappropriate.
 It is possible to miss the highly probable combination while selecting the test data.
 Defect yield ratio may be reduced if a combination is missed.
 Not useful if combinations of variables are not understood correctly.

Coverage
• Studies such as “Estimating t-Way Fault Profile Evolution During Testing” and “Practical Combinatorial
Testing” (presented by the National Institute of Standards and Technology in 2017 and 2010 respectively)
indicate that the vast majority of defects (67%-93%) related to input values are due to a problem in either
one parameter value (single-value fault) or a combination of two parameter values (2-way interaction fault).

• NIST research showed that most software bugs and failures are caused by one or two parameters, with
progressively fewer by three or more, which means that combinatorial testing can provide more efficient
fault detection than conventional methods. Multiple studies have shown fault detection equal to exhaustive
testing with a 20X to 700X reduction in test set size. New algorithms compressing combinations into a
small number of tests have made this method practical for industrial use, providing better testing at lower
cost. (NIST).
Black Box Testing: BVA (Boundary Value Analysis)
 Boundary Value Analysis
 Software is treated as a black box.
 Combinations of inputs around the boundaries of the input domain are chosen.
o Actual output is matched with the expected output.
 Test cases use the values {min, min+, nom, max–, max} for each variable.

Boundary Value Analysis (BVA) – Explore the types


Example
A program takes 2 inputs x1 and x2
 a <= x1 <= b
 c <= x2 <= d
We have the intervals
 x1 ∈ [a, b]
 x2 ∈ [c, d]

 BVA test cases for a function of two variables – single fault assumption
 4n + 1 test cases (n = number of variables; here n = 2, giving 9 test cases, as in the sketch below)
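A small Python sketch that generates the 4n + 1 single-fault BVA test cases; the example ranges for x1 and x2 are assumptions:

```python
def bva_tests(ranges):
    # Single-fault assumption: vary one variable at a time through its
    # boundary values {min, min+, max-, max}, holding the others at their
    # nominal (mid-range) values -> 4n + 1 test cases.
    noms = [(lo + hi) // 2 for lo, hi in ranges]
    tests = {tuple(noms)}                      # the all-nominal test
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            t = list(noms)
            t[i] = v
            tests.add(tuple(t))
    return sorted(tests)

# x1 in [1, 100], x2 in [1, 31]  ->  4*2 + 1 = 9 test cases
for t in bva_tests([(1, 100), (1, 31)]):
    print(t)
```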

Boundary Value Analysis (BVA) – Worst Case Analysis
Worst-case analysis drops the single-fault assumption and takes all combinations of the five
boundary values of every variable, giving 5^n test cases (25 test cases for two variables).
Black Box Testing - BVA (Boundary Value Analysis)
Equivalence Class Testing


 The input and output domains are divided into a finite number of equivalence classes.
 Select one representative of each class and test the program against it (a sketch follows
the list below).
 Four types of equivalence class testing are discussed:
1. Weak normal equivalence class testing – one value from each valid class, so every valid
class is covered at least once
2. Strong normal equivalence class testing – one test for every combination of valid
classes (Cartesian product)
3. Weak robust equivalence class testing – weak normal plus tests with one invalid value
at a time
4. Strong robust equivalence class testing – every combination of valid and invalid classes
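A minimal Python sketch contrasting weak normal and strong normal selection; the two valid sub-interval classes per input are assumptions:

```python
from itertools import product

# Hypothetical valid equivalence classes for two inputs.
x1_classes = [range(1, 50), range(50, 101)]   # two valid sub-intervals for x1
x2_classes = [range(1, 16), range(16, 32)]    # two valid sub-intervals for x2

def representative(cls):
    # Pick a mid-interval value from a class expressed as a range.
    return (cls.start + cls.stop - 1) // 2

x1_reps = [representative(c) for c in x1_classes]
x2_reps = [representative(c) for c in x2_classes]

# Weak normal: cover every class at least once (single fault assumption).
weak = list(zip(x1_reps, x2_reps))
# Strong normal: Cartesian product of class representatives.
strong = list(product(x1_reps, x2_reps))

print(weak)    # 2 tests, e.g. [(25, 8), (75, 23)]
print(strong)  # 4 tests, one per class combination
```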
EC – Weak Normal

EC – Strong Normal

EC – Weak Robust

EC – Strong Robust

Black Box Testing – Equivalence Class Testing
Equivalence Class (EC) & Boundary Value Analysis (BVA)
Session 7
Module 5: Mastering SQA: Test Execution and Automated Testing
Contents
 Test Methodology – Decision Table Testing
 Test Execution Process
 Test Case Design
 Automated Testing
 Alpha and Beta Site Testing Programs
 Regression Testing Strategies

Decision Tables

Black Box Testing Decision Table Based Testing


• A column in the entry portion is a rule.
• Rules indicate which actions are taken for the conditional circumstances indicated in the
condition portion of the rule.
• Decision tables in which all conditions are binary are called limited entry decision tables.
• If conditions are allowed to have several values, the resulting tables are called extended
entry decision tables.
To identify test cases with decision tables, we follow certain steps:
Step 1. For a module identify input conditions (causes) and action (effect).
Step 2. Develop a cause-effect graph.
Step 3. Transform this cause-effect graph, so obtained in step 2 to a decision table.
Step 4. Convert decision table rules to test cases. Each column of the decision table represents a
test case. That is,
Number of Test Cases = Number of Rules
If n binary conditions exist, there are 2^n rules.
DT - Example
Component Specification: Input of 2 characters such that
1. The 1st character must be A or B.
2. The 2nd character must be a digit.
3. If the 1st character is A or B and the 2nd character is a digit the file is updated.
4. If the 1st character is incorrect, message X12 is displayed.
5. If the 2nd character is not a digit, message X13 is displayed.
Develop test cases using Decision table technique

DT Example Solution
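A possible reconstruction of the solution, expressed as an executable Python sketch of the limited-entry decision table; the assumption that the combined-failure rule displays both X12 and X13 follows from reading rules 4 and 5 of the specification independently:

```python
# Conditions: C1 = first character in {A, B}; C2 = second character is a digit.
# Limited-entry decision table: 2 binary conditions -> 2^2 = 4 rules.
RULES = {
    (True,  True):  ["update file"],                  # both conditions hold
    (True,  False): ["display X13"],                  # 2nd char not a digit
    (False, True):  ["display X12"],                  # 1st char incorrect
    (False, False): ["display X12", "display X13"],   # both incorrect (assumed)
}

def actions(s):
    c1 = s[0] in ("A", "B")
    c2 = s[1].isdigit()
    return RULES[(c1, c2)]

# One test case per rule: 4 rules -> 4 test cases.
for tc in ["A5", "A#", "X5", "X#"]:
    print(tc, "->", actions(tc))
```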

Black Box Testing Decision Table Based Testing


Next Date is a function of three variables: month, day, and year. It reads the current date as
input and returns the date of the next day as output.
M1 = {month: month has 30 days}
M2 = {month: month has 31 days except December}
M3 = {month: month is December}
M4 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}
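A Python sketch that exercises one representative from several of these class combinations against the standard library's calendar arithmetic; the chosen representatives are assumptions:

```python
from datetime import date, timedelta

def next_date(month, day, year):
    # Reference implementation via the standard library.
    return date(year, month, day) + timedelta(days=1)

# One representative per interesting (month, day, year) class combination.
cases = [
    (4, 30, 2023),   # M1 x D4: 30-day month, last day  -> May 1
    (1, 31, 2023),   # M2 x D5: 31-day month (not Dec)  -> Feb 1
    (12, 31, 2023),  # M3 x D5: December 31             -> Jan 1 next year
    (2, 28, 2024),   # M4 x D2 x Y1: Feb 28, leap year  -> Feb 29
    (2, 28, 2023),   # M4 x D2 x Y2: Feb 28, common year -> Mar 1
]
for m, d, y in cases:
    print((m, d, y), "->", next_date(m, d, y))
```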
Decision Table – Triangle problem
Software Test Execution
Goal:
1. Design test cases that cover a vast area of the software thoroughly and accurately
2. Keep the number of test cases small
3. Improve quality
Testing Phases:
1. Determine Test Methodology.
2. Planning Tests
3. Designing Tests
4. Performing the Tests

Software Test Strategy


Decision to be made in Test Strategy -
1. Approach – Top-Down, Bottom-Up, Sandwich, Big-Bang
2. Which parts should be tested with White-Box Model
3. Which parts should be tested with Black-Box Model
4. How much and which area to be covered by Automated Tests
Planning Tests
1. Unit Tests
2. Integrations Tests
3. System Tests

 What to Test.
• Avoid duplication in case of reused software.
• Criticality of the area.
• Newer versions of stable software. Added modules/feature.
 Who performs the Tests
• Unit Test – Development Team
• System Tests – Independent Test Team (Internal or External).
 Where to Perform Tests
• Unit Test or Integrations Tests – Development Site
• System tests – Development Site or Consultant Site or Customer Site.
 How much to Test
• Stop when all the tests in the test plan have been executed and no issues are found.
• Or when a certain minimum error discovery rate has been achieved.
• Or when resources are exhausted – time limit, budget.
Software Test Design
Test Design
 Scope of the Test
• Software Package, module to be Tested.
 Testing Environment
• Testing Site
• Hardware and firmware configuration
 Test Details
• Test Case ID
• Objective
• Design Doc reference or requirement doc, if any
• Test level – Unit, integration, System
• Pre-requisites
• Data to be recorded.
• Steps for execution

Test Prioritization
 Test cases can be prioritized.
 Priority setting can be based on
• Type of product
• Organization size/culture
• Resources available
 Sample Prioritization
• P0 – Tests that need to pass before full testing is started
• P1 - This test must be executed before final delivery.
• P2 - If time permits, execute this test.
• P3 - This test can wait after the delivery date.

Software Test Execution


• Test execution process
• Execute test cases → Find defects → Fix defects → Re-test
• The re-test is called regression testing
• To confirm that the defect has been properly fixed
• To confirm that no new defects were introduced while fixing the old defect
• Regression testing can be the execution of the full test suite or a subset of it.
• Results of every test run are recorded in the Software Test Report (STR)

Software Test Report


• Test Identification
• Tested software Identification – Version, Build, etc
• Initiation Time, Concluding time, etc
• Test Site
• Test Environment – H/W, firmware configuration
• Test Results
• Test case ID
• Test result – Pass/Failed
• Summary Results
• Total Test case executed.
• Total Pass/Fail
• Comparison with previous results
Automated Testing
 Why Automated Testing
• Cost savings
• Shortened test duration
• Thoroughness
• Test accuracy
• Automated result reporting
• Statistical processing and subsequent reporting of Results
 Process of Automated Testing
• Just like Manual Testing, it requires –
• Test planning, Test design, Test case preparation
• Test Execution, Regression testing
• Final Test log and report preparation including comparison reports
Types of Automated Testing
Types of Automated Test
 Code Auditing
• Check compliance with respect to coding standards.
• Code complexity metrics (McCabe’s cyclomatic complexity)
• Levels of loop nesting
• Levels of subroutine nesting
• Prohibited constructs, such as GOTO
• Check for Naming conventions for variables, files, etc.
• Unreachable code lines of program or entire subroutines.
• Format and size of comments
 Coverage monitoring
• Line coverage achieved when implementing a given test case file.
• Path Coverage.
• Achieved while the Tests are executed
 Functional Tests
• Test scripts are prepared which emulate the manual tests (a data-driven sketch follows
this list)
• Test automation tools can be used to execute tests
• Input data is picked up from test DB files.
• Executing the same test case with different inputs is easy.
• Executing regression testing is easy.
 Load Tests
• Creating load test scenarios manually is impractical or impossible.
• Can be done through simulations
• Load is varied and response plotted in the form of a graph to see system
behavior
• System response Vs Environment configuration (Hardware and
communication)
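A minimal data-driven Python sketch in the spirit of the Functional Tests item above: one scripted test runs against many input rows. The hard-coded list stands in for a test data file, and the PIN-validation function is hypothetical:

```python
import unittest

def is_valid_pin(pin):
    # Hypothetical function under test: a PIN must be exactly 4 digits.
    return pin.isdigit() and len(pin) == 4

class TestPinValidation(unittest.TestCase):
    # Rows as they might be loaded from a test data file or DB.
    DATA = [("1234", True), ("0000", True), ("12a4", False),
            ("123", False), ("", False)]

    def test_data_driven(self):
        # The same scripted steps run once per data row.
        for pin, expected in self.DATA:
            with self.subTest(pin=pin):
                self.assertEqual(is_valid_pin(pin), expected)

if __name__ == "__main__":
    unittest.main()
```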
Test Automation Tools
Automated Test Management
Features of Automated Test Management
1. Test plans, Test results and correction follow-up.
2. Preparation of lists, tables and visual presentations of test plans
3. Listing of test case
4. Execution of automated software tests
5. Listing of test results
6. Listing of detected errors
7. Listing of correction schedule
8. Listing of uncompleted corrections for follow-up
9. Correction and Regression tests
10. Summary reports of testing and error correction follow-up
11. Summary reports for maintenance correction schedule

Automated Testing Advantages Vs Disadvantages

Advantages:
1. Accuracy and completeness of test execution
2. Accuracy of results log and summary reports
3. Comprehensive information
4. Few manpower resources required for test execution
5. Shorter testing periods
6. Performance of complete regression tests
7. Performance of test classes beyond the scope of manual testing
Disadvantages:
1. High investment required in package purchasing and training
2. High package development investment costs
3. High manpower resources required for test preparation
4. Considerable testing areas left uncovered

Automated Testing
Automation testing is not a 100% replacement for manual testing:
1. It does not work well when the product itself is still changing.
2. It works well in regression testing.
3. Test automation takes a lot of effort.
4. The maintenance cost of automated test cases is very high.
5. Resource cost is high for test automation engineers.
6. Trained resources are needed for automated test design, execution, and maintenance.
Alpha and Beta Testing
Alpha and beta testing are not the same as the customer's acceptance testing.
Alpha Sites - The software is put into real usage by some of the teams within the organization.
 E.g., Office being deployed on the development team's and HR team's machines.
Beta Sites –
 Bugs found in formal testing and at alpha sites have already been fixed.
 The product is close to the final product, with very few known bugs.
 The product is ready to be used by customers.
 Before releasing to all the customers, the product is given to a select few customers, called
beta sites.
Alpha and Beta Testing Advantages and Disadvantages
S. No. | Advantages | Disadvantages
1 | Identification of unexpected errors | A lack of systematic testing
2 | A wider population in search of errors | Low-quality error reports; reproduction steps not always available
3 | Low costs | Difficult to reproduce the test environment
4 | Real-life data and scenarios can be tested | Much effort is required to examine reports
5 | Users' perspective known | Loss of real data on users' machines
Defect Prioritization
• Defects can be prioritized based on severity and probability of occurrence
• Fixing of defects by the development team is based on defect priority.
• Defects that block testing are given high priority and fixed early.
Example Priority and Severity
• High Severity, Low priority
• Incorrect Logo.
• High priority, Low Severity
• Slow response to transaction.
• No functionality break but an irritant to Test team
• Hampers the productivity of Test team
• High priority, High Severity
• Data corruption issues
• Low priority, Low Severity
• Incorrect Message on Pop-Up
Regression Testing
• Regression testing is done to ensure that everything which was previously working still
works correctly.
• Regression testing is done after enhancements or defect fixes.
S. No. | Normal testing | Regression testing
1 | It is usually done during the fourth phase of the SDLC. | It is done during the maintenance phase.
2 | It is basically the software's verification and validation. | It is also called program revalidation.
3 | New test suites are used to test the code. | Both old and new test cases can be used.
4 | It is done on the original software. | It is done on modified software.
5 | It is cheaper. | It is a costlier test plan.

Types of Regression Testing


• Corrective regression testing – No change in specification. Same test suite.
• Progressive regression testing – Change in specification. Modified test suite.
• Retest-all regression testing – Full Re-Test
• Selective regression testing – Test only the impacted and dependent areas.
• Regression Testing can be done at
• Unit level, Integration level, System Level
• Criteria to decide when to stop regression testing.
• Time constraints
• Test criticality
• Customer requirements
Source Code Analysers
• Source code analysis tools, also known as Static Code Analyzers.
• No execution of program. Just static code analysis.
• To ensure code is compliant with a predefined set of rules or best practices.
• Static analysis is generally good at finding coding issues such as the following (a small illustration follows the list):
• Programming errors
• Coding standard violations
• Undefined values
• Syntax violations
• Security vulnerabilities
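For instance, a static analyzer flags both issues in this small Python fragment without ever running it (the function itself is hypothetical):

```python
def apply_discount(price, rate):
    """Apply a discount; contains two issues a static analyzer flags."""
    if rate > 1:
        rate = rate / 100
    total = price * (1 - rate)
    return totl          # undefined name 'totl' (typo) -- caught statically
    print("done")        # unreachable code after 'return' -- caught statically
```

Running a linter such as pyflakes or pylint over this file reports the undefined name and the unreachable statement without executing any code.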
Source Code Analysers Advantages and Disadvantages
• Advantages
• Speed
• Depth
• Accuracy
• Disadvantages
• False Positives
Software Composition Tools
• Software composition analysis (SCA) is an automated process that identifies the open
source software in a codebase. This analysis is performed to evaluate security, license
compliance, and code quality.
• It identifies all open source packages in an application and all the known vulnerabilities
of those packages
• Create an accurate Bill of Materials (BOM) - components included, the version of the
components used, and the license types for each.
• Discover and track all open source - Uncover all open source used in source code,
binaries, containers, build dependencies, subcomponents.
• Set and enforce policies – Ensure OSS license compliance.
• Enable proactive and continuous monitoring – e.g., alerts when new vulnerabilities are disclosed for components already in use.