Principles of Testing
Contents
Why testing is necessary
Fundamental test process
Psychology of testing
Re-testing and regression testing
Expected results
Prioritisation of tests
Testing terminology
An error (a human mistake) creates a fault in the software; when the faulty code is executed, it may cause a failure. What can a software failure cost?
■ huge sums
- Ariane 5 ($7billion)
- Mariner space probe to Venus ($250m)
- American Airlines ($50m)
■ very little or nothing at all
- minor inconvenience
- no visible or physical detrimental impact
■ software is not “linear”:
- small input may have very large effect
Safety-critical systems: here faults can have severe or fatal consequences.

How much testing is enough? It depends on RISK:
- risk of missing important faults
- risk of incurring failure costs
- risk of releasing untested or under-tested software
- risk of losing credibility and market share
- risk of missing a market window
- risk of over-testing, ineffective testing
So little time, so much to test ...
Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
Testing and quality
■ contractual requirements
■ legal requirements
■ industry-specific requirements
- e.g. pharmaceutical industry (FDA), compiler
standard tests, safety-critical or safety-related such
as railroad switching, air traffic control
It is difficult to determine how much testing is enough, but it is not impossible.
Fundamental test process
Test Planning - different levels
- Test Policy: company level
- Test Strategy: company level
- High Level Test Plan: project level (IEEE 829), one for each project
Test planning
(test process: specification → execution → recording → checking for completion)

Test specification: identify conditions, design test cases, build tests.
A good test case
■ effective: finds faults
■ exemplary: represents others
■ evolvable: easy to maintain
■ economic: cheap to use
Test specification
(diagram: over time, the set of test conditions identified improves; the first set ✘ is rarely the best set ✔)
Task 2: design test cases
(determine ‘how’ the ‘what’ is to be tested)
■ design test input and test data
- each test exercises one or more test conditions
■ determine expected results
- predict the outcome of each test case, what is
output, what is changed and what is not changed
■ design sets of tests
- different test sets for different objectives such as
regression, building confidence, and finding faults
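To illustrate (a sketch, not from the course), a designed test case can state its condition, input, and expected result explicitly; the discount rule below is hypothetical:

    import pytest

    def discount(order_total: float) -> float:
        """Hypothetical rule, for illustration: 10% off orders of 100 or more."""
        return order_total * 0.9 if order_total >= 100 else order_total

    # (condition exercised, test input, expected result) - predicted before execution
    CASES = [
        ("below boundary: no discount",  99.99,  99.99),
        ("on boundary: discount",       100.00,  90.00),
        ("above boundary: discount",    150.00, 135.00),
    ]

    @pytest.mark.parametrize("condition, order_total, expected", CASES)
    def test_discount(condition, order_total, expected):
        assert discount(order_total) == pytest.approx(expected), condition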
Designing test cases
(diagram: rank test conditions by importance and design test cases for the most important conditions first, working down to the least important over time)
Task 3: build test cases
(implement the test cases)
■ prepare test scripts
- the less system knowledge the tester has, the more detailed the scripts will have to be
- scripts for tools have to specify every detail
■ prepare test data
- data that must exist in files and databases at the start
of the tests
■ prepare expected results
- should be defined before the test is executed
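A minimal sketch of building a test in this sense (schema and values are hypothetical): the data is prepared up front and the expected result is written down before execution:

    import sqlite3

    def build_test_data() -> sqlite3.Connection:
        """Prepare the data that must exist at the start of the test."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
        conn.execute("INSERT INTO accounts VALUES (1, 250.0)")
        conn.commit()
        return conn

    def test_withdrawal_reduces_balance():
        conn = build_test_data()
        expected_balance = 200.0   # expected result, defined before execution
        conn.execute("UPDATE accounts SET balance = balance - 50.0 WHERE id = 1")
        (actual,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        assert actual == expected_balance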
Test execution

Test recording

Check test completion
Test completion criteria
(diagram: checking for completion asks "coverage OK?")

Planning and specification are intellectual, one-off activities; execution is repeated many times and recording is clerical, so execution and recording are good to automate.
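A completion check of this kind can be automated as a simple rule; the criteria below are hypothetical, for illustration only:

    def completion_criteria_met(statement_coverage: float,
                                open_severe_faults: int,
                                coverage_target: float = 0.80) -> bool:
        """Stop testing only when the plan's coverage target is met
        and no severe faults remain open (illustrative criteria)."""
        return statement_coverage >= coverage_target and open_severe_faults == 0

    assert completion_criteria_met(0.85, 0)
    assert not completion_criteria_met(0.85, 2)   # severe fault still open
    assert not completion_criteria_met(0.60, 0)   # coverage target not met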
Psychology of testing
Why test?
■ build confidence
■ prove that the software is correct
■ demonstrate conformance to requirements
■ find faults
■ reduce costs
■ show system meets user needs
■ assess the software quality
(diagram: confidence vs faults found over time, against software quality from low to high; with few faults found and low quality, "you may be here")
A traditional testing approach
■ Show that the system:
- does what it should
- doesn't do what it shouldn't
Goal: show working. Success: the system works.

But testing is really a destructive process:
■ it brings bad news ("your baby is ugly")
■ it is done under the worst time pressure (at the end)
■ it needs a different view, a different mindset ("What if it isn't?", "What could go wrong?")
■ fault information must be communicated carefully (to authors and to managers)
Testers have the right to:
- accurate information about progress and changes
- insight from developers about areas of the software
- delivered code tested to an agreed standard
- be regarded as professionals (no abuse!)
- find faults!
- challenge specifications and test plans
- have reported faults taken seriously (even those that are hard to reproduce)
- make predictions about future fault levels
- improve their own testing process
Testers have responsibility to:
Re-testing and regression testing
Re-testing after faults are fixed
(diagram: a fault is found and fixed; re-run the same test to check the fix really worked)

Regression testing
(diagram: the fix may have introduced new faults elsewhere; regression tests look for such side effects, but can't guarantee to find them all)
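In practice the distinction might look like this (a sketch; the pytest paths and test names are hypothetical):

    import subprocess

    # Re-test: re-run the one test that exposed the fault, to check the fix.
    subprocess.run(["pytest", "tests/test_invoice.py::test_rounding"], check=True)

    # Regression test: re-run the wider suite around the change,
    # looking for unintended side effects of the fix.
    subprocess.run(["pytest", "tests/"], check=True)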
Expected results
A program:

    Read A
    IF (A = 8) THEN
        PRINT "10"
    ELSE
        PRINT (2 * A)

Input 3 gives output 6. Input 8 gives output 10. Are these correct?
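If the specification in fact requires the output to always be 2*A (an assumption; the slide leaves the question open), then predicting the expected result in advance exposes the fault at A = 8. A sketch:

    def program_under_test(a: int) -> int:
        # the faulty program from the slide
        return 10 if a == 8 else 2 * a

    def expected(a: int) -> int:
        # predicted from the specification: the output should always be 2*A
        return 2 * a

    for a in (3, 8):
        actual = program_under_test(a)
        verdict = "pass" if actual == expected(a) else f"FAIL (expected {expected(a)})"
        print(a, actual, verdict)   # 3 -> 6 pass; 8 -> 10 FAIL (expected 16)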
Prioritising tests
Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
How to prioritise?
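The course does not spell out the method here; one common answer is risk-based prioritisation, ranking tests by likelihood times impact of the failures they target. A minimal sketch with illustrative values:

    # Score each test by likelihood x impact of the failure it targets,
    # and run tests in descending order of risk.
    tests = [
        ("payment processing", 0.8, 9),   # (name, likelihood, impact)
        ("report layout",      0.5, 2),
        ("login",              0.3, 8),
    ]
    for name, likelihood, impact in sorted(tests, key=lambda t: t[1] * t[2], reverse=True):
        print(f"{name}: risk score {likelihood * impact:.1f}")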
Lifecycle
Contents
Models for testing, economics of testing
High level test planning
Component Testing
Integration testing in the small
System testing (non-functional and functional)
Integration testing in the large
Acceptance testing
Maintenance testing
V-Model: test levels
Business Requirements → Acceptance Testing
System Specification → System Testing
Code → Component Testing
V-Model: late test design
("We don't have time to design tests early")
Business Requirements → Acceptance Testing
Project Specification → Integration Testing in the Large
System Specification → System Testing
Design Specification → Integration Testing in the Small
Code → Component Testing
Tests for each level are designed only just before they are run.
V-Model: early test design
Tests for each level are designed as soon as the corresponding specification is written, then run later:
Business Requirements → Acceptance Testing
Project Specification → Integration Testing in the Large
System Specification → System Testing
Design Specification → Integration Testing in the Small
Code → Component Testing
Early test design: an example

Phase 1:
- Plan: 2 mo development, 2 mo test; "has to go in", but didn't work
- Actual: fraught, lots of dev overtime
- Quality: 150 faults found in test, 500 faults found in the first month of use; users not happy

Phase 2 (tests designed early):
- Plan: 2 mo development, 6 wks test; acceptance test took a full week (vs half a day)
- Actual: on time; smooth, not much for dev to do
- Quality: 50 faults found in test, 0 faults in the first month; happy users!
Verification and validation
(diagram: any testing involves both verification and validation)
The V Model - Exercise
(exercise diagram: each document (VD, DS, FD, TD) is reviewed, and each build step (units, components/integration, system, assembly) has a matching test: FUT/TUT, Integration Test, System Test, Assemblage Test; exceptions include the Conversion Test; FOS: DN/Gldn)
How would you test this spec?

What do software faults cost?
■ Compared to what?
■ What is the cost of NOT testing, or of faults
missed that should have been found in test?
- Cost to fix faults escalates the later the fault is found
- Poor quality software costs more to use
• users take more time to understand what to do
• users make more mistakes in using it
• morale suffers
• => lower productivity
■ Do you know what it costs your organisation?
Hypothetical cost
(diagram: cost per fault escalates roughly tenfold at each later stage: 10, 100, 1000; e.g. £50 to fix early vs £700 to fix in live use)
(exercise: 10 minutes)
High level test planning
(before planning for a set of tests)
See: Structured Testing, an introduction to TMap®, Pol & van Veenendaal, 1998
■ 4 Features to be tested
- identify test design specification / techniques
■ 5 Features not to be tested
- reasons for exclusion
Test Plan 3
■ 6 Approach
- activities, techniques and tools
- detailed enough to estimate
- specify degree of comprehensiveness (e.g.
coverage) and other completion criteria (e.g. faults)
- identify constraints (environment, staff, deadlines)
■ 7 Item Pass/Fail Criteria
■ 8 Suspension criteria and resumption criteria
- for all or parts of testing activities
- which activities must be repeated on resumption
Test Plan 4
■ 9 Test Deliverables
- Test plan
- Test design specification
- Test case specification
- Test procedure specification
- Test item transmittal reports
- Test logs
- Test incident reports
- Test summary reports
Test Plan 5
■ 10 Testing tasks
- including inter-task dependencies & special skills
■ 11 Environment
- physical, hardware, software, tools
- mode of usage, security, office space
■ 12 Responsibilities
- to manage, design, prepare, execute, witness, check, resolve issues, provide the environment, and provide the software to test
Test Plan 6
■ 13 Staffing and Training Needs
■ 14 Schedule
- test milestones in project schedule
- item transmittal milestones
- additional test milestones (environment ready)
- what resources are needed when
■ 15 Risks and Contingencies
- contingency plan for each identified risk
■ 16 Approvals
- names and when approved
Component testing
■ lowest level
■ tested in isolation
■ most thorough look at detail
- error handling
- interfaces
■ usually done by programmer
■ also known as unit, module, program testing
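For example (a sketch, with a hypothetical parse_age component), a component test exercises one unit in isolation, including its error handling:

    import pytest

    def parse_age(text: str) -> int:
        """Hypothetical component under test."""
        age = int(text)                  # raises ValueError for non-numeric input
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age

    def test_valid_age():
        assert parse_age("42") == 42

    def test_error_handling():           # the detailed look at error handling
        with pytest.raises(ValueError):
            parse_age("999")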
Component test strategy 1
Test document hierarchy (source: BS 7925-2, Software Component Testing Standard, Annex A):
Project Test Plan → Component Test Plan → Component Test Specification → Component Test Report
Component test process
BEGIN → Component Test Planning → Component Test Specification → Component Test Execution → Component Test Recording → Checking for Component Test Completion → END
Component test planning
- how the test strategy and project test plan apply to the component under test
- any exceptions to the strategy
- all software the component will interact with (e.g. stubs and drivers)
Component test execution
- each test case is executed
- the standard does not specify whether tests are executed manually or using a test execution tool
Component test recording
- identities and versions of the component and its test specification
- actual outcome recorded and compared to the expected outcome
- discrepancies logged; test activities repeated to establish removal of the discrepancy (fault in the test, or verify the fix)
- coverage levels recorded for the test completion criteria specified in the test plan
Records must be sufficient to show that the test activities were carried out.
Integration testing in the small
■ more than one (tested) component
■ communication between components
■ what the set can perform that is not possible
individually
■ non-functional aspects if possible
■ integration strategy: big-bang vs incremental
(top-down, bottom-up, functional)
■ done by designers, analysts, or
independent testers
Big-Bang Integration
■ In theory:
- if we have already tested components why not just
combine them all at once? Wouldn’t this save time?
- (based on false assumption of no faults)
■ In practice:
- takes longer to locate and fix faults
- re-testing after fixes more extensive
- end result? takes more time
Incremental Integration
■ Baselines:
- baseline 0: component a
- baseline 1: a + b
- baseline 2: a + b + c
- baseline 3: a + b + c + d
- etc.
(diagram: component hierarchy with a at the top, then b c, then d e f g, then h i j k l m, then n o)
■ Need to call lower-level components not yet integrated
■ Stubs: simulate missing components
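A minimal sketch of a stub (names hypothetical): a canned stand-in lets the higher-level component be tested before the real lower-level component is integrated:

    class PricingStub:
        """Stands in for a lower-level pricing component not yet integrated."""
        def price_of(self, item: str) -> float:
            return {"apple": 1.0, "pear": 2.0}[item]   # canned answers

    def order_total(items, pricing) -> float:
        """Higher-level component under test; calls down into the stub."""
        return sum(pricing.price_of(i) for i in items)

    assert order_total(["apple", "pear"], PricingStub()) == 3.0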
Top-down Integration
■ Advantages:
- critical control structure tested first and most often
- can demonstrate system early (show working
menus)
■ Disadvantages:
- needs stubs
- detail left until last
- may be difficult to "see" detailed output (but should
have been tested in component test)
- may look more finished than it is
Bottom-up Integration
■ Baselines:
- baseline 0: component n
- baseline 1: n + i
- baseline 2: n + i + o
- baseline 3: n + i + o + d
- etc.
■ Needs drivers to call the baseline configuration
■ Also needs stubs for some baselines
Drivers
■ Advantages:
- control level tested first and most often
- visibility of detail
- real working partial system earliest
■ Disadvantages
- needs stubs
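A driver is the mirror image (again a hypothetical sketch): a small harness that calls the baseline configuration because the real callers do not exist yet:

    def format_currency(amount: float) -> str:
        """Low-level component, integrated first in bottom-up order."""
        return f"£{amount:,.2f}"

    def driver():
        """Stands in for the not-yet-integrated higher-level caller."""
        for amount, expected in [(0.5, "£0.50"), (1234.5, "£1,234.50")]:
            actual = format_currency(amount)
            assert actual == expected, f"{amount}: got {actual!r}"

    driver()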
Thread Integration
(also called functional integration)
■ the order of processing some event determines the integration order
■ e.g. an interrupt or a user transaction
■ integrates a minimum capability early
■ advantages:
- critical processing first
- early warning of performance problems
■ disadvantages:
- may need complex drivers and stubs
Integration Guidelines
System testing
■ last integration step
■ functional
- functional requirements and requirements-based
testing
- business process-based testing
■ non-functional
- as important as functional requirements
- often poorly specified
- must be tested
■ often done by independent test group
Functional system testing
■ Functional requirements
- a requirement that specifies a function that a system
or system component must perform (ANSI/IEEE
Std 729-1983, Software Engineering Terminology)
■ Functional specification
- the document that describes in detail the
characteristics of the product with regard to its
intended capability (BS 4778 Part 2, BS 7925-1)
Requirements-based testing

Security testing
■ passwords
■ encryption
■ hardware permission devices
■ levels of access to information
■ authorisation
■ covert channels
■ physical security
Configuration and Installation
■ Configuration Tests
- different hardware or software environment
- configuration of the system itself
- upgrade paths - may conflict
■ Installation Tests
- distribution (CD, network, etc.) and timings
- physical aspects: electromagnetic fields, heat,
humidity, motion, chemicals, power supplies
- uninstall (removing installation)
Reliability / Qualities
■ Reliability
- "the system will be reliable": how would you test this?
- quantified instead: "2 failures per year over ten years"
- Mean Time Between Failures (MTBF; see the sketch below)
- reliability growth models
■ Other Qualities
- maintainability, portability, adaptability, etc.
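MTBF makes "the system will be reliable" testable: divide the total operating time by the number of failures observed. A worked sketch of the figure quoted above:

    def mtbf(operating_hours: float, failures: int) -> float:
        """Mean Time Between Failures = operating time / number of failures."""
        return operating_hours / failures

    # "2 failures per year" of continuous operation (8760 h) is an MTBF of 4380 h.
    assert mtbf(8760, 2) == 4380.0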
Back-up and Recovery
■ Back-ups
- computer functions
- manual procedures (where are tapes stored)
■ Recovery
- real test of back-up
- manual procedures unfamiliar
- should be regularly rehearsed
- documentation should be detailed, clear and
thorough
Documentation Testing
■ Documentation review
- check for accuracy against other documents
- gain consensus about content
- documentation exists, in right format
■ Documentation tests
- is it usable? does it work?
- user manual
- maintenance documentation
Integration testing in the large
■ Identify risks
- which areas missing or malfunctioning would be
most critical - test them first
■ “Divide and conquer”
- test the outside first (at the interface to your system,
e.g. test a package on its own)
- test the connections one at a time first
(your system and one other)
- combine incrementally - safer than “big bang”
(non-incremental)
Planning considerations
■ resources
- identify the resources that will be needed
(e.g. networks)
■ co-operation
- plan co-operation with other organisations
(e.g. suppliers, technical support team)
■ development plan
- integration (in the large) test plan could influence
development plan (e.g. conversion software needed
early on to exchange data formats)
User acceptance testing
■ Users know:
- what really happens in business situations
- complexity of business relationships
- how users would do their work using the system
- variants to standard tasks (e.g. country-specific)
- examples of real cases
- how to identify sensible work-arounds
(diagram: 80% of the function is provided by 20% of the code, and the remaining 20% of the function by 80% of the code; system testing effort is distributed over this line)
Contract acceptance testing

Alpha and beta testing
■ Alpha testing
- simulated or actual operational testing at an in-house site not otherwise involved with the software developers (i.e. at the developers' site)
■ Beta testing
- operational testing at a site not otherwise involved with the software developers (i.e. at the testers' own location)
Acceptance testing motto
Maintenance testing
■ Alternatives
- the way the system works now must be right (except
for the specific change) - use existing system as the
baseline for regression tests
- look in user manuals or guides (if they exist)
- ask the experts - the current users
■ Without a specification, you cannot really test,
only explore. You can validate, but not verify.