
Introduction to Software Quality,
Quality Management, and Testing
Lecture # 1, 2
Topics Include

 Software Quality
 Software Quality Factors
 Achieving Software Quality
 The drumbeat for improved software quality
began in earnest as software became
increasingly integrated into every facet of our lives.
By the 1990s, major corporations recognized that
billions of dollars each year were being wasted
on software that didn’t deliver the features and
functionality that were promised.
Both government and industry became
increasingly concerned that a major software fault
might cripple important infrastructure, costing tens
of billions more.
 By the turn of the century, CIO Magazine
trumpeted the headline, “Let’s Stop Wasting $78
Billion a Year,” lamenting the fact that “American
businesses spend billions for software that
doesn’t do what it’s supposed to do.”
How do we define software quality?

 In the most general sense, software quality can
be defined as: an effective software process
applied in a manner that creates a useful product
that provides measurable value for those who
produce it and those who use it.
 The definition serves to emphasize three
important points:
 An effective software process establishes the
infrastructure that supports any effort at building a
high-quality software product.
 The management aspects of process create the
checks and balances that help avoid project
chaos—a key contributor to poor quality.
 Software engineering practices allow the
developer to analyze the problem and design a
solid solution—both critical to building high-
quality software.
 Finally, umbrella activities such as change
management and technical reviews have as
much to do with quality as any other part of
software engineering practice.
 A useful product delivers the content, functions,
and features that the end user desires, but as
important, it delivers these assets in a reliable,
error-free way.
 A useful product always satisfies those
requirements that have been explicitly stated by
stakeholders. In addition, it satisfies a set of
implicit requirements (e.g., ease of use) that are
expected of all high-quality software.
 By adding value for both the producer and user of
a software product, high-quality software
provides benefits for the software organization
and the end-user community.
 The software organization gains added value
because high-quality software requires less
maintenance effort, fewer bug fixes, and reduced
customer support. This enables software
engineers to spend more time creating new
applications and less on rework.
Garvin’s Quality Dimensions

 Performance quality. Does the software deliver
all content, functions, and features that are
specified as part of the requirements model in a
way that provides value to the end user?
 Feature quality. Does the software provide
features that surprise and delight first-time end
users?
 Reliability. Does the software deliver all features
and capability without failure? Is it available when
it is needed? Does it deliver functionality that is
error-free?
 Conformance. Does the software conform to
local and external software standards that are
relevant to the application? Does it conform to de
facto design and coding conventions? For
example, does the user interface conform to
accepted design rules for menu selection or data
input?
 Durability. Can the software be maintained
(changed) or corrected (debugged) without the
inadvertent generation of unintended side
effects? Will changes cause the error rate or
reliability to degrade with time?
 Serviceability. Can the software be maintained
(changed) or corrected (debugged) in an
acceptably short time period? Can support staff
acquire all information they need to make
changes or correct defects?
 Aesthetics. There’s no question that each of us
has a different and very subjective vision of what
is aesthetic. And yet, most of us would agree that
an aesthetic entity has a certain elegance, a
unique flow, and an obvious “presence” that are
hard to quantify but are evident nonetheless.
 Perception. In some situations, you have a set of
prejudices that will influence your perception of
quality. For example, if you are introduced to a
software product that was built by a vendor who
has produced poor quality in the past, your guard
will be raised and your perception of the current
software product quality might be influenced
negatively. Similarly, if a vendor has an excellent
reputation, you may perceive quality, even when
it does not really exist.
ISO 9126 Quality Factors

 Functionality. The degree to which the software
satisfies stated needs as indicated by the
following subattributes: suitability, accuracy,
interoperability, compliance, and security.
 Reliability. The amount of time that the software
is available for use as indicated by the following
subattributes: maturity, fault tolerance,
recoverability.
 Usability. The degree to which the software is
easy to use as indicated by the following
subattributes: understandability, learnability,
operability.
 Efficiency. The degree to which the software
makes optimal use of system resources as
indicated by the following subattributes: time
behavior, resource behavior.
 Maintainability. The ease with which repair may
be made to the software as indicated by the
following subattributes: analyzability,
changeability, stability, testability.
 Portability. The ease with which the software can
be transposed from one environment to another
as indicated by the following subattributes:
adaptability, installability, conformance,
replaceability.
McCall’s Quality Factors

 Correctness. The extent to which a program
satisfies its specification and fulfills the
customer’s mission objectives.
 Reliability. The extent to which a program can be
expected to perform its intended function
with required precision.
 Efficiency. The amount of computing resources
and code required by a program to perform its
function.
 Integrity. Extent to which access to software or
data by unauthorized persons can be controlled.
 Usability. Effort required to learn, operate,
prepare input for, and interpret output of a
program.

 Maintainability. Effort required to locate and fix
an error in a program.
 Flexibility. Effort required to modify an
operational program.
 Testability. Effort required to test a program to
ensure that it performs its intended function.
 Portability. Effort required to transfer the
program from one hardware and/or software
system environment to another.
 Reusability. Extent to which a program [or parts
of a program] can be reused in other applications
—related to the packaging and scope of the
functions that the program performs.
 Interoperability. Effort required to couple one
system to another.
 It is difficult, and in some cases impossible, to
develop direct measures of these quality factors.
In fact, many of the metrics defined by McCall et
al. can be measured only indirectly.
ACHIEVING SOFTWARE QUALITY

 Software quality doesn’t just appear. It is the
result of good project management and solid
software engineering practice. Management and
practice are applied within the context of four
broad activities that help a software team achieve
high software quality:
 software engineering methods, project
management techniques, quality control actions,
and software quality assurance.
Software Engineering Methods

 If you expect to build high-quality software, you
must understand the problem to be solved.
 You must also be capable of creating a design
that conforms to the problem while at the same
time exhibiting characteristics that lead to
software that exhibits the quality dimensions and
factors.
Project Management Techniques

 If (1) a project manager uses estimation to verify
that delivery dates are achievable,
 (2) schedule dependencies are understood and
the team resists the temptation to use shortcuts,
and
 (3) risk planning is conducted so problems do not
breed chaos, then software quality will be
affected in a positive way.
Quality Control
 Quality control encompasses a set of software
engineering actions that help to ensure that each
work product meets its quality goals. Models are
reviewed to ensure that they are complete and
consistent.
 Code may be inspected in order to uncover and
correct errors before testing commences. A series
of testing steps is applied to uncover errors in
processing logic, data manipulation, and interface
communication.
 A combination of measurement and
feedback allows a software team to tune
the process when any of these work
products fail to meet quality goals.
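 To make this measurement-and-feedback loop concrete, the Python
sketch below (an illustration, not part of the lecture) computes one
common quality-control metric, defect density, and flags work products
that miss a quality goal. The data, threshold, and all names are
assumptions chosen for the example.

# Hedged sketch: defect density (defects per KLOC) as a quality-control
# feedback signal. Data, goal, and names are illustrative assumptions.

# (work product, defects found in review/testing, size in lines of code)
work_products = [
    ("payment_module", 4, 2000),
    ("report_module", 21, 3500),
]

GOAL_DEFECTS_PER_KLOC = 5.0  # assumed quality goal

def defect_density(defects: int, loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

for name, defects, loc in work_products:
    density = defect_density(defects, loc)
    status = "meets goal" if density <= GOAL_DEFECTS_PER_KLOC else "needs rework"
    print(f"{name}: {density:.1f} defects/KLOC -> {status}")

 When a work product is flagged, the team feeds that signal back into
reviews and testing on the next iteration, which is the process tuning
described above.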
Quality Assurance

 Quality assurance establishes the infrastructure
that supports solid software engineering
methods, rational project management, and
quality control actions—all pivotal if you intend to
build high-quality software.
 In addition, quality assurance consists of a set of
auditing and reporting functions that assess the
effectiveness and completeness of quality control
actions.
 The goal of quality assurance is to provide
management and technical staff with the data
necessary to be informed about product quality,
thereby gaining insight and confidence that
actions to achieve product quality are working. Of
course, if the data provided through quality
assurance identifies problems, it is
management’s responsibility to address the
problems and apply the necessary resources to
resolve quality issues.
THE SOFTWARE QUALITY DILEMMA
 If you produce a software system that has terrible
quality, you lose because no one will want to buy
it. If on the other hand you spend infinite time,
extremely large effort, and huge sums of money
to build the absolutely perfect piece of software,
then it’s going to take so long to complete and it
will be so expensive to produce that you’ll be out
of business anyway.
 Either you missed the market window, or you
simply exhausted all your resources. So people in
industry try to get to that magical middle ground
where the product is good enough not to
be rejected right away, such as during evaluation,
but also not the object of so much perfectionism
and so much work that it would take too long or
cost too much to complete.
“Good Enough” Software

 If we are to accept the argument made by Meyer,
is it acceptable to produce “good enough”
software? The answer to this question must be
“yes,” because major software companies do it
every day.
 They create software with known bugs and
deliver it to a broad population of end users. They
recognize that some of the functions and features
delivered in Version 1.0 may not be of the highest
quality and plan for improvements in Version 2.0.
They do this knowing that some customers will
complain, but they recognize that time-to-market
may trump better quality as long as the delivered
product is “good enough.”
The Cost of Quality

 The argument goes something like this—we
know that quality is important, but it costs us time
and money—too much time and money to get the
level of software quality we really want.
 On its face, this argument seems reasonable
(see Meyer’s comments earlier in this
section). There is no question that quality has a
cost, but lack of quality also has a cost—not only
to end users who must live with buggy software,
but also to the software organization that has built
and must maintain it.
 The real question is this: which cost should we be
worried about? To answer this question, you must
understand both the cost of achieving quality and
the cost of low-quality software.
Software Testing

 Software testing is a crucial phase in the software
development life cycle that aims to identify
defects, ensure the correctness of software
functionalities, and deliver a high-quality product
to end users. Testing involves the execution of a
software application or system to evaluate its
behavior against specified requirements.
 Testing activity helps in finding and fixing bugs,
verifying that the software meets the intended
functionality, and ensuring its reliability and
performance.
 Testing is the process of evaluating a system or
its component(s) with the intent to find whether it
satisfies the specified requirements or not.
 Testing is executing a system in order to identify
any gaps, errors, or missing requirements in
contrast to the actual requirements.
Why Software Testing?

 In the IT industry, large companies have a team
with responsibilities to evaluate the developed
software in the context of the given requirements.
Moreover, developers also conduct testing, which
is called unit testing. In most cases, the following
professionals are involved in testing a system
within their respective capacities:
 Software Tester
 Software Developer
 Project Lead/Manager
 End User
 There are different designations for people who
test software on the basis of their experience and
knowledge, such as Software Tester, Software
Quality Assurance Engineer, QA Analyst, etc.
When to Start Testing?

 An early start to testing reduces the cost and time
of rework and helps produce error-free software
for delivery to the client. However, in the Software
Development Life Cycle (SDLC), testing can be
started from the Requirements Gathering phase
and continued till the deployment of the software.
 It also depends on the development model that is
being used. For example, in the Waterfall model,
formal testing is conducted in the testing phase;
but in the incremental model, testing is performed
at the end of every increment/iteration and the
whole application is tested at the end.
When to Stop Testing?

 It is difficult to determine when to stop testing, as
testing is a never-ending process and no one can
claim that software is 100% tested. The following
aspects are to be considered when deciding to
stop the testing process (a sketch of such a stop
decision follows this list):
 Testing Deadlines
 Completion of test case execution
 Completion of functional and code coverage to a
certain point
 Bug rate falls below a certain level and no high-
priority bugs are identified
 Management decision
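 As a minimal illustration, the Python sketch below encodes the stop
criteria above as a single stop/continue decision. The threshold
values and field names are assumptions chosen for the example; the
"management decision" criterion is left to people, not code.

import datetime

def should_stop_testing(metrics: dict, deadline: datetime.date) -> bool:
    # Stop when the deadline is reached, or when all measurable exit
    # criteria are met. Thresholds below are illustrative assumptions.
    return (
        datetime.date.today() >= deadline                # testing deadline
        or (
            metrics["test_cases_remaining"] == 0         # execution complete
            and metrics["code_coverage"] >= 0.80         # coverage "to a certain point"
            and metrics["bugs_found_this_week"] <= 2     # bug rate below a level
            and metrics["open_high_priority_bugs"] == 0  # no high-priority bugs
        )
    )

print(should_stop_testing(
    {"test_cases_remaining": 0, "code_coverage": 0.85,
     "bugs_found_this_week": 1, "open_high_priority_bugs": 0},
    deadline=datetime.date(2030, 1, 1),
))  # True: all measurable exit criteria are met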
Types of Testing:

 Unit Testing: Focuses on testing individual units
or components of the software in isolation (a
minimal example follows this list).
 Integration Testing: Verifies the interaction
between different components or systems to
ensure they work together as intended.
 System Testing: Evaluates the entire system to
ensure that it meets the specified requirements.
 Acceptance Testing: Ensures that the software
satisfies user or business requirements.
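 As a concrete illustration of unit testing, the sketch below tests a
single function in isolation using Python’s built-in unittest module.
The function under test and its test cases are hypothetical examples,
not taken from the lecture.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Unit under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Verifies the unit's normal behavior in isolation.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # Verifies the unit's error handling in isolation.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()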
Common Testing Techniques

 Black Box Testing: Focuses on validating the
functionality of the software without knowledge of
its internal code or structure (see the sketch after
this list).
 White Box Testing: Examines the internal logic
and structure of the software code.
 Grey Box Testing: Combines aspects of both
black box and white box testing.
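 The practical difference between the first two techniques lies in how
test cases are chosen: a black-box tester derives cases from the
specification alone, while a white-box tester reads the code and picks
cases that exercise every branch. The Python sketch below illustrates
this on a small hypothetical function.

def classify_grade(score: int) -> str:
    """Hypothetical unit: map a score (0-100) to a letter grade."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    return "F"

# Black-box cases: chosen from the specification (valid/invalid inputs,
# boundaries), with no knowledge of the code.
assert classify_grade(100) == "A"
assert classify_grade(0) == "F"

# White-box cases: chosen by reading the code so that every branch,
# including error handling, is executed.
assert classify_grade(80) == "A"   # first decision boundary
assert classify_grade(60) == "B"   # second decision boundary
assert classify_grade(59) == "F"   # fall-through branch
try:
    classify_grade(101)            # error-handling branch
except ValueError:
    pass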
