Week 12 Notes Updated
Chapters 19 & 20
Software Testing
© 2020 McGraw Hill. All rights reserved. Authorized only for instructor use in the classroom.
No reproduction or further distribution permitted without the prior written consent of McGraw Hill.
In these chapters you will learn:
• Testing strategy
• White-box testing
• Black-box testing
• OO testing
• Integration testing
Strategic Approach to Testing
• You should conduct effective technical reviews; this can
eliminate many errors before testing begins.
• Testing begins at the component level and works "outward"
toward the integration of the entire system.
• Different testing techniques are appropriate for different
software engineering approaches and at different points in
time.
• Testing is conducted by the developer of the software and (for
large projects) an independent test group.
• Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy.
Verification and Validation
Verification refers to the set of tasks that ensure that software
correctly implements a specific function.
Verification: Are we building the product right?
Validation refers to the set of tasks that ensure that the software
that has been built is traceable to customer requirements.
Validation: Are we building the right product?
Organizing for Testing
• Software developers are always responsible for testing
individual program components and ensuring that each
performs its designed function or behavior.
• Only after the software architecture is complete does an
independent test group become involved.
• The role of an independent test group (ITG) is to remove the
inherent problems associated with letting the builder test the
thing that has been built.
• ITG personnel are paid to find errors.
• Developers and ITG work closely throughout a software
project to ensure that thorough tests will be conducted.
Testing Strategy
Testing the Big Picture
• Unit testing begins at the center of the spiral and concentrates
on each unit (for example, component, class, or content object)
as it is implemented in source code.
• Testing progresses to integration testing, where the focus is on
design and the construction of the software architecture, taking
another turn outward on the spiral.
• Validation testing is where requirements established as part of
requirements modeling are validated against the software that
has been constructed.
• In system testing, the software and other system elements are
tested as a whole.
Software Testing Steps
When is Testing Done?
• You’re never done testing; the burden simply shifts from the
software engineer to the end user. (Wrong).
• You’re done testing when you run out of time or you run out of
money. (Wrong).
• The statistical quality assurance approach suggests executing
tests derived from a statistical sample of all possible program
executions by all targeted users.
• By collecting metrics during software testing and making use
of existing statistical models, it is possible to develop
meaningful guidelines for answering the question: “When are
we done testing?”
Test Planning
1. Specify product requirements in a quantifiable manner long before testing
commences.
2. State testing objectives explicitly.
3. Understand the users of the software and develop a profile for each user
category.
4. Develop a testing plan that emphasizes “rapid cycle testing.”
5. Build “robust” software that is designed to test itself.
6. Use effective technical reviews as a filter prior to testing.
7. Conduct technical reviews to assess the test strategy and test cases
themselves.
8. Develop a continuous improvement approach for the testing process.
Unit Test Environment
Cost Effective Testing
• Exhaustive testing requires every possible combination and
ordering of input values be processed by the test component.
• The return on exhaustive testing is often not worth the effort,
since testing alone cannot be used to prove a component is
correctly implemented.
• Testers should work smarter, focusing their unit-testing
resources on modules that are crucial to the success of the
project or suspected to be error-prone.
Test Case Design
Design unit test cases before you develop the code for a component to ensure
that the code will pass the tests.
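As a minimal sketch of this test-first idea, in Python and assuming a hypothetical shipping_cost component that does not exist yet: the test below is designed against the intended behavior first, and the implementation is then written to make it pass.

    # Hypothetical component (not from the text): shipping_cost(weight_kg).
    # The test function below was designed first; the implementation follows it.
    def shipping_cost(weight_kg):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)

    def test_shipping_cost():
        assert shipping_cost(0.5) == 5.0            # flat rate up to 1 kg
        assert shipping_cost(3.0) == 9.0            # 5.0 + 2.0 * 2.0
        try:
            shipping_cost(0)                        # invalid weight must be rejected
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for non-positive weight")

    test_shipping_cost()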
Test cases are designed to cover the following areas:
Module Tests
Error Handling
• A good design anticipates error conditions and establishes error-
handling paths which must be tested.
• Among the potential errors that should be tested when error
handling is evaluated are:
1. Error description is unintelligible.
2. Error noted does not correspond to error encountered.
3. Error condition causes system intervention prior to error handling.
4. Exception-condition processing is incorrect.
5. Error description does not provide enough information to assist in the
location of the cause of the error.
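A hedged sketch of what such error-handling tests can look like, using a hypothetical parse_port component (not from the text): the tests check that the error raised corresponds to the condition actually encountered and that its description is intelligible enough to locate the cause.

    # Hypothetical component (illustrative only): parse_port(text).
    def parse_port(text):
        try:
            port = int(text)
        except ValueError:
            raise ValueError("port must be an integer, got %r" % text)
        if not 1 <= port <= 65535:
            raise ValueError("port must be between 1 and 65535, got %d" % port)
        return port

    def expect_value_error(arg, *fragments):
        try:
            parse_port(arg)
        except ValueError as exc:
            # The description must be intelligible and match the error encountered.
            assert all(f in str(exc) for f in fragments), str(exc)
        else:
            raise AssertionError("expected ValueError for %r" % arg)

    expect_value_error("http", "integer", "http")      # wrong type of input
    expect_value_error("70000", "65535", "70000")      # out-of-range input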
Traceability
• To ensure that the testing process is auditable, each test case
needs to be traceable back to specific functional or
nonfunctional requirements or anti-requirements.
• Often nonfunctional requirements need to be traceable to
specific business or architectural requirements.
• Many test process failures can be traced to missing
traceability paths, inconsistent test data, or incomplete test
coverage.
• Regression testing requires retesting selected components that
may be affected by changes made to other collaborating
software components.
White Box Integration Testing
• White-box testing is an integration testing philosophy that
uses implementation knowledge of the control structures
described as part of component-level design to derive test
cases.
• White-box tests can only be designed after source code
exists and program logic details are known.
• Logical paths through the software and collaborations between
components are the focus of white-box integration testing.
• Important data structures should also be tested for validity after
component integration.
White Box Testing
Using white-box testing methods, you can derive test cases that:
Basis Path Testing 1
Flowchart (a) and Flow Graph (b)
Basis Path Testing 2
Basis Path Testing 3
Control Structure Testing
• Condition testing is a test-case design method that exercises
the logical conditions contained in a program module.
• Data flow testing selects test paths of a program according to
the locations of definitions and uses of variables in the
program.
• Loop testing is a white-box testing technique that focuses
exclusively on the validity of loop constructs.
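A minimal sketch of condition testing (the first bullet above), assuming a hypothetical eligible function with a compound condition; each simple condition is driven to both its true and false outcomes, not just the overall result of the compound condition.

    # Hypothetical module logic (illustrative only): a compound condition.
    def eligible(age, income):
        return age >= 18 and income < 50000

    # Condition testing: each simple condition takes both outcomes.
    assert eligible(18, 20000) is True       # age >= 18: T, income < 50000: T
    assert eligible(17, 20000) is False      # age >= 18: F, income < 50000: T
    assert eligible(30, 80000) is False      # age >= 18: T, income < 50000: F
    assert eligible(15, 90000) is False      # both conditions false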
Classes of Loops
Loop Testing
Test cases for simple loops:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n − 1, n, n + 1 passes through the loop.
Test cases for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer
loops at their minimum iteration parameter (for example, loop counter) values.
3. Add other tests for out-of-range or excluded values.
4. Work outward, conducting tests for the next loop, but keeping all other outer
loops at minimum values and other nested loops to “typical” values.
5. Continue until all loops have been tested.
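A hedged sketch of the simple-loop cases in Python, assuming a hypothetical batch_total component whose loop is bounded by a maximum of n = 10 readings:

    # Hypothetical component (illustrative only): a loop bounded by n = 10 passes.
    MAX_READINGS = 10                          # n, the maximum allowable passes

    def batch_total(readings):
        if len(readings) > MAX_READINGS:
            raise ValueError("too many readings in one batch")
        total = 0
        for value in readings:                 # the simple loop under test
            total += value
        return total

    # Simple-loop cases: 0, 1, 2, m (m < n), n - 1, n, and n + 1 passes.
    for passes in (0, 1, 2, 5, MAX_READINGS - 1, MAX_READINGS, MAX_READINGS + 1):
        readings = [1] * passes
        if passes <= MAX_READINGS:
            assert batch_total(readings) == passes
        else:
            try:
                batch_total(readings)
            except ValueError:
                pass
            else:
                raise AssertionError("expected rejection above the loop bound")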
Black Box Testing 1
Black Box Testing 2
Black Box – Interface Testing
• Interface testing is used to check that a program component
accepts information passed to it in the proper order and data
types and returns information in proper order and data format.
• Because components are not stand-alone programs, testing their
interfaces requires the use of stubs and drivers.
• Stubs and drivers sometimes incorporate test cases to be
passed to the component or accessed by the component.
• Debugging code may need to be inserted inside the
component to check that data passed was received correctly.
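A minimal sketch of interface testing with a stub and a driver, using a hypothetical build_row component and rate-service collaborator (the names are illustrative, not from the text):

    # Component under test (hypothetical): formats a row using a rate service.
    def build_row(currency, fetch_rate):
        rate = fetch_rate(currency)            # collaborator passed in by the caller
        return (currency, round(rate, 4))      # order and format of returned data

    # Stub: stands in for the real rate service and returns canned data.
    def stub_fetch_rate(currency):
        return {"EUR": 1.08734, "JPY": 0.00671}[currency]

    # Driver: feeds test data to the component and checks order and data format.
    def driver():
        row = build_row("EUR", stub_fetch_rate)
        assert row == ("EUR", 1.0873)
        assert isinstance(row[0], str) and isinstance(row[1], float)

    driver()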
Object-Oriented Testing (OOT)
To adequately test OO systems, three things must be done:
• The definition of testing must be broadened to include error
discovery techniques applied to object-oriented analysis and
design models.
• The strategy for unit and integration testing must change
significantly.
• The design of test cases must account for the unique
characteristics of OO software.
Black Box – Boundary Value Analysis (BVA)
• Boundary value analysis leads to a selection of test cases that exercise
bounding values.
• Guidelines for BVA:
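A minimal sketch of boundary value analysis, assuming a hypothetical set_volume function whose valid inputs lie in the range 0 to 100; test values sit at and just beyond each bound.

    # Hypothetical component: valid inputs are whole percentages in [0, 100].
    def set_volume(percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return percent

    # Boundary value analysis: values at and just beyond each bound.
    for value in (0, 1, 99, 100):              # on and just inside the boundaries
        assert set_volume(value) == value
    for value in (-1, 101):                    # just outside the boundaries
        try:
            set_volume(value)
        except ValueError:
            pass
        else:
            raise AssertionError("out-of-range value %d was accepted" % value)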
OOT – Class Testing
• Class testing for object-oriented (OO) software is the
equivalent of unit testing for conventional software.
• Unlike unit testing of conventional software, which tends to
focus on the algorithmic detail of a module and the data that
flow across the module interface, class testing for OO software
is driven by the operations encapsulated by the class and the
state behavior of the class.
• Valid sequences of operations and their permutations are used
to test class behaviors; equivalence partitioning can reduce the
number of sequences needed.
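A minimal sketch of class testing driven by operation sequences, using a hypothetical Account class (illustrative only, not from the text):

    # Hypothetical class (illustrative only, not from the text).
    class Account:
        def __init__(self):
            self.balance = 0
            self.is_open = True

        def deposit(self, amount):
            assert self.is_open and amount > 0
            self.balance += amount

        def withdraw(self, amount):
            assert self.is_open and 0 < amount <= self.balance
            self.balance -= amount

        def close(self):
            self.is_open = False

    # Class testing: exercise valid sequences of the encapsulated operations.
    a = Account()
    a.deposit(100); a.withdraw(40); a.deposit(10); a.close()
    assert a.balance == 70

    # Another permutation of the same operations; equivalence partitioning
    # limits how many such sequences actually need to be run.
    b = Account()
    b.deposit(10); b.deposit(100); b.withdraw(40); b.close()
    assert b.balance == 70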
OOT– Behavior Testing
• A state diagram can be used to help derive a sequence of tests
that will exercise dynamic behavior of the class.
• Tests to be designed should achieve full coverage by using
operation sequences that cause transitions through all allowable
states.
• When class behavior results in a collaboration with several
classes, multiple state diagrams can be used to track system
behavioral flow.
• A state model can be traversed in a breadth-first manner by
having each test case exercise a single new transition; when a
new transition is to be tested, only previously tested transitions
are used to reach it.
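A hedged sketch of behavior testing derived from a simple assumed state model (empty, filled, submitted) for a hypothetical Order class; transitions are exercised breadth-first, reaching each new transition only through transitions that have already been tested.

    # Hypothetical state model (illustrative): empty -> filled -> submitted.
    class Order:
        def __init__(self):
            self.state = "empty"

        def add_item(self):
            assert self.state in ("empty", "filled")
            self.state = "filled"

        def submit(self):
            assert self.state == "filled"
            self.state = "submitted"

    # Breadth-first traversal of the state model:
    # first exercise a single, previously untested transition ...
    o = Order()
    o.add_item()
    assert o.state == "filled"

    # ... then reach each new transition only through transitions already tested.
    o = Order()
    o.add_item()             # transition already covered above
    o.submit()               # the new transition under test
    assert o.state == "submitted"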
Testing Fundamentals
Attributes of a good test:
• A good test has a high probability of finding an error.
• A good test is not redundant.
• A good test should be “best of breed.”
• A good test should be neither too simple nor too complex.
Integration Testing
• Integration testing is a systematic technique for constructing
the software architecture while conducting tests to uncover
errors associated with interfacing.
• The objective is to take unit-tested components and build a
program structure that matches the design.
• In the big bang approach, all components are combined at
once and the entire program is tested as a whole. Chaos usually
results!
• In incremental integration a program is constructed and tested
in small increments, making errors easier to isolate and correct.
Far more cost-effective!
Top-Down Integration 1
• Top-down integration testing is an incremental approach to construction of
the software architecture.
• Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program).
• Modules subordinate to the main control module are incorporated into the
structure followed by their subordinates.
• Depth-first integration integrates all components on a major control path
of the program structure before starting another major control path.
• Breadth-first integration incorporates all components directly subordinate
at each level, moving across the structure horizontally before moving down
to the next level of subordinates.
Top-Down Integration 2
Top-Down Integration Testing
1. The main control module is used as a test driver, and stubs are
substituted for all components directly subordinate to the
main control module.
2. Depending on the integration approach selected (for example,
depth or breadth first), subordinate stubs are replaced one at a
time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced
with the real component.
5. Regression testing may be conducted to ensure that new errors
have not been introduced.
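A minimal sketch of these steps, assuming a hypothetical main control module with two subordinate components; stubs stand in for the subordinates and are then replaced one at a time.

    # Hypothetical hierarchy (illustrative): main() controls two subordinates.
    def main(read_config, start_service):       # main control module used as driver
        return start_service(read_config())

    # Stubs substituted for all components directly subordinate to main.
    def read_config_stub():
        return {"port": 8080}                    # canned answer, no real I/O

    def start_service_stub(config):
        return "started on %d" % config["port"]

    # Step 1: main tested with both stubs in place.
    assert main(read_config_stub, start_service_stub) == "started on 8080"

    # Step 2: one stub replaced with a stand-in for the real component,
    # and the tests are re-run so regressions surface immediately.
    def read_config_real():
        return {"port": 8080}

    assert main(read_config_real, start_service_stub) == "started on 8080"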
Bottom-Up Integration Testing
Bottom-up integration testing begins construction and testing
with atomic modules (components at the lowest levels in the
program structure).
1. Low-level components are combined into clusters (builds)
that perform a specific software subfunction.
2. A driver (a control program for testing) is written to
coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving
upward in the program structure.
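A minimal sketch of a bottom-up cluster and its driver, using hypothetical low-level pricing components (illustrative names only):

    # Hypothetical low-level components forming one cluster (a pricing subfunction).
    def net_price(price, discount):
        return round(price * (1 - discount), 2)

    def add_tax(amount, rate):
        return round(amount * (1 + rate), 2)

    # Driver: a throwaway control program that feeds test-case input to the
    # cluster and checks its output; it is discarded once real callers exist.
    def cluster_driver():
        assert net_price(100.0, 0.2) == 80.0
        assert add_tax(80.0, 0.1) == 88.0
        assert add_tax(net_price(100.0, 0.2), 0.1) == 88.0   # combined cluster

    cluster_driver()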
Bottom-Up Integration
Continuous Integration
• Continuous integration is the practice of merging components
into the evolving software increment at least once a day.
• This is a common practice for teams following agile
development practices such as XP or DevOps. Integration
testing must take place quickly and efficiently if a team is
attempting to always have a working program in place as part
of continuous delivery.
• Smoke testing is an integration testing approach that can be
used when software is developed by an agile team using short
increment build times.
Smoke Testing Integration
1. Software components that have been translated into code are
integrated into a build, which includes all data files, libraries,
reusable modules, and components required to implement one
or more product functions.
2. A series of tests is designed to expose “show-stopper” errors
that will keep the build from properly performing its function
and cause the project to fall behind schedule.
3. The build is integrated (either top-down or bottom-up) with
other builds, and the entire product (in its current form) is
smoke tested daily.
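A hedged sketch of what a daily smoke test might look like for a hypothetical build: it exercises only a show-stopper path (create a user, place an order), not every behavior.

    # Hypothetical build functions (illustrative only).
    def create_user(db, name):
        db[name] = {"orders": []}

    def place_order(db, name, item):
        db[name]["orders"].append(item)
        return len(db[name]["orders"])

    # Smoke test: exercises only the show-stopper path of the current build.
    def smoke_test():
        db = {}                       # in-memory stand-in for the real data store
        create_user(db, "alice")
        assert place_order(db, "alice", "book") == 1

    smoke_test()                      # intended to run against every daily build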
Smoke Testing Advantages
• Integration risk is minimized, since smoke tests are run daily.
• Quality of the end product is improved, because functional and
architectural problems are uncovered early.
• Error diagnosis and correction are simplified, because errors are
most likely in (or caused by) the new build.
• Progress is easier to assess, because each day more of the final
product is complete.
• Smoke testing resembles regression testing by ensuring newly
added components do not interfere with the behaviors of
existing components.
Regression Testing
• Regression testing is the re-execution of some subset of tests that have
already been conducted to ensure that changes have not propagated
unintended side effects.
• Whenever software is corrected, some aspect of the software configuration
(the program, its documentation, or the data that support it) is changed.
• Regression testing helps to ensure that changes (due to testing or for other
reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of
all test cases or using automated capture/playback tools.
• AI tools may be able to help select the best subset of test cases to use in
regression automatically based on previous experiences of the developers
with the evolving software product.
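A hedged sketch of regression test selection, assuming a hypothetical mapping from components to the test cases that exercise them; only the subset affected by a change is re-run.

    # Hypothetical mapping from components to the test cases that exercise them.
    TESTS_BY_COMPONENT = {
        "billing":  ["test_invoice_total", "test_tax_rounding"],
        "shipping": ["test_delivery_estimate"],
        "auth":     ["test_login", "test_password_reset"],
    }

    def select_regression_tests(changed_components):
        selected = set()
        for component in changed_components:
            selected.update(TESTS_BY_COMPONENT.get(component, []))
        return sorted(selected)

    # A change to billing re-runs only the billing-related subset of all tests.
    assert select_regression_tests({"billing"}) == ["test_invoice_total",
                                                    "test_tax_rounding"]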
Validation Testing
• Validation testing tries to uncover errors, but the focus is at the
requirements level - on user visible actions and user-recognizable output
from the system.
• Validation testing begins at the culmination of integration testing, when the
software is completely assembled as a package and interfacing errors have been
corrected.
• Each user story has user-visible attributes and customer acceptance criteria,
which form the basis for the test cases used in validation testing.
• A deficiency list is created when deviations from the specification are
uncovered, and their resolution is negotiated with all stakeholders.
• An important element of the validation process is a configuration review
(audit) that ensures the complete system was built properly.