Lecture 3

The document discusses the importance of software testing, the roles of test engineers and managers, and various testing levels including acceptance, system, integration, module, and unit testing. It also outlines Beizer's testing levels, the significance of test automation, and the distinction between faults, errors, and failures in software. Additionally, it emphasizes the need for a team approach in testing to enhance software quality.


Lecture 3

Dr. Hager Hussein

Introduction
Optimistic customers assume software
systems will never fail.
The ideas and techniques of software
testing have become essential knowledge
for all software developers.
Testing is the primary method that the
industry uses to evaluate software under
development.

A Test Engineer
Test Engineer: An IT professional who is in
charge of one or more technical test
activities, such as:
Designing test inputs
Producing test values
Running test scripts
Analyzing results
Reporting results to developers and
managers

A Test Manager
Test Manager:
Is in charge of one or more test engineers
Sets test policies and processes
Interacts with other managers on the
project
Supports the engineers

Testing Levels Based on Software
Activity
Acceptance Testing: assess software with
respect to requirements.
System Testing: assess software with
respect to architectural design.
Integration Testing: assess software with
respect to subsystem design.
Module Testing: assess software with
respect to detailed design.
Unit Testing: assess software with respect
to implementation.

Acceptance Testing
The requirements analysis phase of
software development captures the
customer’s needs. Acceptance testing is
designed to determine whether the
completed software in fact meets these
needs.
Acceptance testing must involve users or
other individuals who have strong domain
knowledge.

System Testing
The architectural design phase of software
development chooses components and
connectors that together realize a system whose
specification is intended to meet the previously
identified requirements.
System testing is designed to determine whether
the assembled system meets its specifications. It
assumes that the pieces work individually, and
asks if the system works as a whole. This level of
testing usually looks for design and specification
problems. It is usually not done by the
programmers, but by a separate testing team.

Integration Testing
 The subsystem design phase of software development
specifies the structure and behavior of subsystems,
each of which is intended to satisfy some function in
the overall architecture. Often, the subsystems are
adaptations of previously developed software.
 Integration testing is designed to assess whether the
interfaces between modules in a given subsystem
have consistent assumptions and communicate
correctly.
 Integration testing must assume that modules work
correctly. Some testing literature uses the terms
integration testing and system testing interchangeably.
 Integration testing is usually the responsibility of
members of the development team.
Module Testing
 The detailed design phase of software development
determines the structure and behavior of individual
modules.
 A program unit, or procedure, is one or more contiguous
program statements, with a name that other parts of the
software use to call it. Units are called functions in C and
C++.
 A module is a collection of related units that are
assembled in a file, package, or class.
 Module testing is designed to assess individual modules
in isolation, including how the component units interact
with each other and their associated data structures.
 Most software development organizations make module
testing the responsibility of the programmer.

Unit Testing
Implementation is the phase of software
development that actually produces code.
Unit testing is designed to assess the units
produced by the implementation phase and
is the “lowest” level of testing.
Most software development organizations
make unit testing the responsibility of the
programmer.

Beizer’s Testing Levels
 Beizer’s testing levels are based on test process maturity.
 Level 0 is the view that testing is the same as debugging.
 This model does not distinguish between a program’s
incorrect behavior and a mistake within the program, and
does very little to help develop software that is reliable or
safe.
 In Level 1 testing, the purpose is to show correctness.
 In Level 2 testing, the purpose is to show failures.
 The thinking that leads to Level 3 testing starts with the
realization that testing can show the presence, but not the
absence, of failures.
 Once the testers and developers are on the same “team,” an
organization can progress to real Level 4 testing. Level 4
thinking defines testing as a mental discipline that increases
quality.
Level 0 Thinking
Testing is the same as debugging
Does not distinguish between incorrect behavior and mistakes in the program
Does not help develop software that is reliable or safe
This is what we teach undergraduate CS majors
Level 1 Thinking
Purpose is to show correctness
Correctness is impossible to achieve
What do we know if no failures? Good software or bad tests?
Test engineers have no:
Strict goal
Real stopping rule
Formal test technique
Test managers are powerless
This is what hardware engineers often expect
Level 2 Thinking
Purpose is to show failures
Looking for failures is a negative activity
Puts testers and developers into an adversarial relationship
What if there are no failures?
This describes most software companies.
How can we move to a team approach?
Level 3 Thinking
Testing can only show the presence of failures
Whenever we use software, we incur some risk
Risk may be small with unimportant consequences
Risk may be great with catastrophic consequences
This describes a few “enlightened” software companies
Level 4 Thinking
A mental discipline that increases quality
Testing is only one way to increase quality
Test engineers can become technical leaders of the project
Primary responsibility is to measure and improve software quality
Their expertise should help the developers
This is the way “traditional” engineering works
Automation of Test Activities
Software testing can consume up to 50% of
software development costs, and even
more for safety-critical applications.
One of the goals of software testing is to
automate as much as possible, thereby
significantly reducing its cost, minimizing
human error, and making regression testing
easier.
Regression testing: testing that is done
after changes are made to the software;
its purpose is to help ensure that the
updated software still possesses the
functionality it had before the updates.
Fault and Failure
To move from Level 0 thinking to Level 1
thinking, you need to understand the
difference between fault and failure.
Software Fault: A static defect in the
software.
Software Error: An incorrect internal state
that is the manifestation of some fault.
Software Failure: External, incorrect
behavior with respect to the requirements
or other description of the expected
behavior.
Example
Consider a medical doctor making a
diagnosis for a patient. The patient enters
the doctor’s office with a list of failures
(that is, symptoms).
The doctor then must discover the fault, or
root cause of the symptom.
To aid in the diagnosis, a doctor may order
tests that look for anomalous internal
conditions, such as high blood pressure, an
irregular heartbeat, high levels of blood
glucose, or high cholesterol. In our
terminology, these anomalous internal
conditions correspond to errors.
Technical Example
 The fault in this program is that it starts
looking for zeroes at index 1 instead of
index 0, as is necessary for arrays in Java.
 For example, numZero([2, 7, 0]) correctly
evaluates to 1, while numZero([0, 7, 2])
incorrectly evaluates to 0. In both of these
cases the fault is executed. Although both
of these cases result in an error, only the
second case results in failure.
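The program itself is not reproduced in the slides. A minimal sketch of the faulty method, reconstructed from the description above (the class name and exact code are assumptions):

```java
public class NumZeroDemo {
    // Intended behavior: return the number of occurrences of 0 in arr.
    // Contains the fault described above: the loop starts at index 1,
    // so the first element of the array is never examined.
    static int numZero(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) { // fault: should be i = 0
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(numZero(new int[]{2, 7, 0})); // 1 (correct: no failure)
        System.out.println(numZero(new int[]{0, 7, 2})); // 0 (incorrect: failure)
    }
}
```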
A state is in error simply if it is not the
expected state, even if all of the values in
the state, considered in isolation, are
acceptable. More generally, if the required
sequence of states is s0,s1,s2,... , and the
actual sequence of states is s0,s2,s3,..,
then state s2 is in error in the second
sequence.
In the second case, the error propagates to
the variable count and is present in the
return value of the method. Hence a failure
results.
Fault/Failure Model (RIP Model)
Of course the central issue is that for a given fault,
not all inputs will “trigger” the fault into creating
incorrect output (a failure).
This leads to the fault/failure model, which states
that three conditions must be present for a failure
to be observed.
1. The location or locations in the program that
contain the fault must be reached (Reachability).
2. After executing the location, the state of the
program must be incorrect (Infection).
3. The infected state must propagate to cause some
output of the program to be incorrect
(Propagation).
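These three conditions can be illustrated with the numZero example from the earlier slides. In the sketch below (a reconstruction; neither version is code from the slides), both inputs reach the fault and infect the state by skipping index 0, but only the input whose first element is 0 propagates the infection to the output:

```java
public class RipDemo {
    // Faulty version: the loop starts at index 1. The fault is reached
    // on every non-empty input, and the state is infected because
    // index 0 is never examined.
    static int numZeroFaulty(int[] arr) {
        int count = 0;
        for (int i = 1; i < arr.length; i++) {
            if (arr[i] == 0) count++;
        }
        return count;
    }

    // Corrected version, used here as the expected behavior.
    static int numZeroCorrect(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Infection without propagation: the skipped element is nonzero,
        // so the output is still correct and no failure is observed.
        System.out.println(numZeroFaulty(new int[]{2, 7, 0})
                == numZeroCorrect(new int[]{2, 7, 0})); // true

        // Infection with propagation: the skipped element is 0, so the
        // incorrect state reaches the output and a failure is observed.
        System.out.println(numZeroFaulty(new int[]{0, 7, 2})
                == numZeroCorrect(new int[]{0, 7, 2})); // false
    }
}
```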
What is Test Automation?
The use of software to control the
execution of tests, the comparison of actual
outcomes to predicted outcomes, the
setting up of test preconditions, and other
test control and test reporting functions
Reduces cost
Reduces human error
Reduces variance in test quality from
different individuals
Significantly reduces the cost of regression
testing
Test Case
Test Case: A test case is composed of the
test case values, expected results, prefix
values, and postfix values necessary for a
complete execution and evaluation of the
software under test.
Test Set: A set of test cases.
Executable Test Script: A test case that is
prepared in a form to be executed
automatically on the software under test
and produce a report.

Prefix Values: Any inputs necessary to put
the software into the appropriate state to
receive the test case values.
Postfix Values: Any inputs that need to be
sent to the software after the test case
values are sent.
Postfix values types:
Verification Values: Values necessary to see
the results of the test case values.
Exit commands: Values needed to terminate
the program or otherwise return it to a stable
state.
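As a sketch (an assumption, not code from the slides), the parts of a test case can be mapped onto code. Here a Deque stands in for stateful software under test that needs a prefix step before the test values and an exit command afterwards:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TestCaseSketch {
    // Runs one complete test case against a hypothetical stateful
    // component and reports whether it passed.
    static boolean runTestCase() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);            // prefix value: set up the required state
        stack.push(0);             // test case value: the input being tested
        int expected = 0;          // expected result
        int actual = stack.peek(); // verification value: observe the result
        stack.clear();             // exit command: return to a stable state
        return actual == expected;
    }

    public static void main(String[] args) {
        System.out.println(runTestCase() ? "PASS" : "FAIL");
    }
}
```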
Thank You
