Module 4 New
Chapter 1: Software Testing
Topics covered
Development testing
Test-driven development
Release testing
User testing
Test Automation
Program testing
Testing is intended
to show that a program does what it is intended to do
and
to discover program defects before it is put into use.
We can test the software by executing a program using
artificial data.
Program testing goals
The testing process has two distinct goals:
1. To demonstrate to the developer and the customer that the
software meets its requirements.
2. To discover faults or defects in the software where its behavior is
incorrect.
Validation and Defect testing
The first goal leads to validation testing
We expect the system to perform correctly using a given set of test
cases.
A successful test shows that the system operates as intended.
The second goal leads to defect testing
A successful test is a test that makes the system perform incorrectly
and so exposes a defect in the system.
The difference between validation and defect testing is shown in the
figure below.
Think of the system being tested as a black box.
Verification vs validation
Testing is part of a more general V&V process.
Verification: "Are we building the product right?"
• The software should conform to its specification.
Validation: "Are we building the right product?"
• The software should do what the user really requires.
V & V confidence
The main goal of V&V is to establish confidence that the system is ‘fit
for purpose’ (i.e. good enough for its intended use).
The level of confidence required depends on the following:
1. Software purpose (depends on how critical the software is)
2. User expectations (may have low expectations of software
quality) (later versions must be more reliable)
3. Marketing environment
• Getting a product to market early may be more important
than finding defects in the program.
Inspections and testing
Software inspections (static verification)
Don’t need to execute the software to verify it.
Software testing (dynamic verification)
The system is executed with test data and its operational behaviour
is observed.
The figure below shows that software inspections and testing support
V&V at different stages in the software process.
Advantages of inspections over testing
1. During testing, errors can mask (hide) other errors. Because
inspection is a static process, there will be no interactions
between errors.
2. Incomplete versions of a system can be inspected without
additional costs. If a program is incomplete, then you need to
develop specialized test cases to test the parts that are available.
3. As well as searching for program defects, an inspection also
considers broader quality attributes of a program, such as
compliance with standards, portability and maintainability.
Inspections and testing are complementary and not opposing
verification techniques.
Both should be used during the V & V process.
An abstract model of the traditional software testing process
Test cases are specifications of the inputs to the test and the expected
output from the system.
Test data are the inputs that have been devised to test a system.
1 Development testing
It includes all testing activities that are carried out by the team
developing the system.
It is carried out at 3 levels
1. Unit testing :
• Individual program units or object classes are tested.
• Focus on testing the functionality of objects or methods.
2. Component testing :
• Several individual units are integrated to create composite
components.
• Focus on testing component interfaces.
3. System testing :
• Components are integrated and the system is tested as a whole.
• Focus on testing component interactions.
1.1 Unit testing
Unit testing is the process of testing individual components in
isolation.
It is a defect testing process.
Units may be:
Individual functions or methods within an object
Object classes with several attributes and methods
Contd...
Object class testing
Design the tests to provide coverage of all the features of the object.
That involves:
1. Setting and checking the values of all the attributes of the object
2. Testing all operations associated with the object
3. Exercising the object in all possible states (i.e. simulating all events
that cause a state change).
Example: Weather station object
Its interface is shown in the figure below.
1. It has a single attribute, which is a constant
2. Need to define test cases for all methods such as reportWeather, reportStatus, etc.
(need to test them in isolation).
Contd…
3. States of the weather station object are tested by using a state
model that identifies the sequences of state transitions to be tested
and the event sequences that force these transitions.
For example: State transition sequences
Shutdown -> Running-> Shutdown
Configuring-> Running-> Testing -> Transmitting -> Running
Running-> Collecting-> Running-> Summarizing ->
Transmitting -> Running
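The transition sequences above can be sketched as a table-driven state model. The `WeatherStationModel` class and its transition table below are illustrative assumptions for this example, not the real weather station API:

```python
# Hypothetical sketch: a minimal state model for the weather station.
# The states and transitions come from the sequences on this slide;
# the class itself is an illustrative assumption, not a real interface.
class WeatherStationModel:
    # Legal transitions, keyed by current state.
    TRANSITIONS = {
        "Shutdown": {"Running"},
        "Configuring": {"Running"},
        "Running": {"Shutdown", "Testing", "Collecting", "Summarizing"},
        "Testing": {"Transmitting"},
        "Transmitting": {"Running"},
        "Collecting": {"Running"},
        "Summarizing": {"Transmitting"},
    }

    def __init__(self, initial):
        self.state = initial

    def fire(self, target):
        # Reject any event sequence the state model does not allow.
        if target not in self.TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target


def run_sequence(states):
    """Drive the model through a transition sequence; True if all legal."""
    model = WeatherStationModel(states[0])
    for target in states[1:]:
        model.fire(target)
    return True
```

Each test case then simply calls `run_sequence` with one of the listed sequences; an illegal sequence raises an error, exposing a state-model violation.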
Contd…
Automated testing
Whenever possible, unit testing should be automated.
A test automation framework is used to write and run program
tests.
Since an entire group of tests can run in a few seconds, it is possible
to execute all the tests every time we make a change to the program.
Each automated test has three parts:
1. A setup part : Initialize the system with the test case, i.e., inputs
and expected outputs.
2. A call part : Call the object or method to be tested.
3. An assertion (declaration) part : Compare the result of the call
with the expected result. If the assertion evaluates to true, the test
has been successful; if false, it has failed.
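The three-part structure can be sketched with Python's built-in unittest framework; the `Calculator` class here is a made-up example used only to show the setup, call, and assertion parts:

```python
# A minimal sketch of the setup/call/assertion test structure using
# Python's standard unittest framework. Calculator is a hypothetical
# class invented for this illustration.
import unittest

class Calculator:
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    def test_add(self):
        # Setup part: initialize the object under test and the expected output.
        calc = Calculator()
        expected = 5
        # Call part: invoke the method being tested.
        result = calc.add(2, 3)
        # Assertion part: compare the actual result with the expected result.
        self.assertEqual(expected, result)
```

Frameworks such as unittest (or JUnit in Java) run every such test automatically and report only the failures, which is what makes re-running the whole suite after each change cheap.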
1.1.1 Choosing unit test cases
Unit test effectiveness:
Testing is expensive and time consuming, so we should choose
effective unit test cases. Effective means:
1. The test cases should show that the component being tested
does what it is supposed to do.
2. If there are defects in the component, these should be revealed
by the test cases.
So we should write two types of unit test cases:
The first type should show that the component works as
expected.
The second type should be based on testing experience of
where common problems arise. It should use abnormal inputs
to check that these are properly processed and do not crash the
component.
Contd…
Possible strategies for choosing test cases
1. Partition testing : Identifies the groups of inputs that have
common characteristics and should be processed in the same
way.
Then we can choose the test cases from within each of these groups.
2. Guideline-based testing : Use testing guidelines to choose test
cases.
These guidelines reflect previous experience of the kinds of errors that
programmers often make when developing components.
Contd…
1. Partition testing :
Input data and output results of a program often fall into different
classes with common characteristics.
Examples: Positive numbers, Negative numbers, etc.
Each of these classes is an equivalence partition where the
program behaves in an equivalent way for each class member.
Test cases should be chosen from each partition.
Contd…
In fig below, the large shaded ellipse on the left represents the set of all
possible inputs to the program that is being tested.
Output equivalence partitions are partitions within which all of the
outputs have something in common.
Equivalence partitioning
Contd…
Once the set of partitions is identified, test cases are chosen from each
of these partitions.
A good rule is to choose test cases on the boundaries of the partitions
plus cases close to the midpoint of each partition.
Contd…
An example of equivalence partitioning is shown in the figure below.
Partitions can be identified by using the program specification and from
experience, where you predict input values that are likely to detect errors.
For example, say a program specification states that the program accepts
4 to 10 inputs which are five-digit integers greater than 10,000.
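A minimal sketch of partition and boundary selection for this specification follows; the `valid_input` checker is a hypothetical stand-in for the program under test:

```python
# Hypothetical sketch of boundary-value selection for the specification
# above: 4 to 10 inputs, each a five-digit integer greater than 10,000
# (i.e. in the range 10,001..99,999).
def valid_input(values):
    """Stand-in validator for the specified program's input rules."""
    if not 4 <= len(values) <= 10:
        return False
    return all(isinstance(v, int) and 10_000 < v <= 99_999 for v in values)

# Test cases at the partition boundaries plus near the midpoint.
too_few     = [20_000] * 3    # below the lower count boundary (invalid)
lower_bound = [20_000] * 4    # on the lower count boundary (valid)
midpoint    = [20_000] * 7    # near the midpoint of the valid partition
upper_bound = [20_000] * 10   # on the upper count boundary (valid)
too_many    = [20_000] * 11   # above the upper count boundary (invalid)
bad_value   = [9_999] * 5     # a value outside the valid value partition
```

Each named case above samples one equivalence partition, with the boundary cases (3, 4, 10, 11 inputs) chosen because off-by-one errors cluster there.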
Equivalence partitions
Contd...
2. Guideline-based testing :
Guidelines tell us which test cases are effective for discovering
errors.
For example, guidelines for testing programs with sequences:
1. Test with sequences that have only a single value.
2. Use sequences of different sizes in different tests.
3. Derive tests so that the first, middle and last elements of the sequence are
accessed.
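These three guidelines can be sketched against a hypothetical sequence-processing function, assumed here to compute the range (maximum minus minimum) of a sequence:

```python
# Hypothetical function under test: the range (max - min) of a sequence.
def seq_range(seq):
    return max(seq) - min(seq)

# Guideline 1: test with a sequence that has only a single value.
assert seq_range([7]) == 0

# Guideline 2: use sequences of different sizes in different tests.
assert seq_range([1, 9]) == 8
assert seq_range([4, 1, 9, 2, 6]) == 8

# Guideline 3: derive tests so that the first, middle and last
# elements of the sequence are accessed.
assert seq_range([9, 5, 1]) == 8   # extremes at first and last positions
assert seq_range([5, 9, 1]) == 8   # maximum in the middle position
```

Single-value and position-sensitive cases like these catch common off-by-one mistakes in loops that skip the first or last element.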
Contd…
General guidelines suggested by Whittaker’s book (2002)
1. Choose inputs that force the system to generate all error
messages
2. Design inputs that cause input buffers to overflow
3. Repeat the same input or series of inputs numerous times
4. Force invalid outputs to be generated
5. Force computation results to be too large or too small.
1.2 Component testing
Software components are made up of several interacting objects.
The functionality of these objects can be accessed through the
interface.
Component testing focuses on showing that the component interface
behaves according to its specification.
Component interface testing illustration
Interface errors result from interactions between the objects.
contd...
Different types of Interface between program components
1. Parameter interfaces
2. Shared memory interfaces (Block of memory is shared)
3. Procedural interfaces
4. Message passing interfaces
Contd...
Different types of Interface errors
Interface misuse
A calling component makes an error in the use of another
component interface e.g. parameters in the wrong order.
Interface misunderstanding
A calling component makes assumptions about the behaviour
of the called component which are incorrect.
For example, a binary search method called with an unsorted array.
Timing errors
The called and the calling component (P-C) operate at
different speeds and out-of-date information is accessed.
Contd...
General guidelines for Interface testing
1. Design tests in which the values of parameters are at the
extreme ends of their ranges.
2. Always test the interface with null pointer parameters.
3. Intentionally cause the component to fail.
4. In message passing systems, use stress testing.
5. Where several components interact through shared memory,
design tests that vary the order in which these components are
activated.
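The first three guidelines can be sketched against a hypothetical component interface; `read_records` and its behaviour are assumptions made only to make the guidelines concrete:

```python
# Hypothetical component interface under test: look up a slice of a
# record buffer. Invented for this illustration.
def read_records(buffer, start, count):
    """Return `count` records from `buffer` starting at index `start`."""
    if buffer is None:
        raise ValueError("buffer must not be None")
    if start < 0 or count < 0 or start + count > len(buffer):
        raise IndexError("requested range lies outside the buffer")
    return buffer[start:start + count]

records = ["r0", "r1", "r2", "r3"]

# Guideline 1: parameter values at the extreme ends of their ranges.
assert read_records(records, 0, 0) == []
assert read_records(records, 0, 4) == records

# Guideline 2: always test the interface with null (None) parameters.
try:
    read_records(None, 0, 1)
    assert False, "expected a failure on a null buffer"
except ValueError:
    pass

# Guideline 3: intentionally cause the component to fail.
try:
    read_records(records, 2, 5)
    assert False, "expected a failure on an out-of-range request"
except IndexError:
    pass
```

The point of guidelines 2 and 3 is to verify that the component fails in a controlled, documented way rather than crashing or returning corrupt data.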
1.3 System testing
Components are integrated and then the system is tested as a whole.
It checks that the components are
compatible,
interact correctly and
transfer the right data at the right time across their interfaces.
Important differences from component testing
1. Reusable components may be integrated with newly developed
components.
2. Components developed by different team members may be
integrated (separate testing team)
Contd…
Use-case testing
As system testing focuses on testing interactions between
components, use-case based testing is an effective approach to
system testing.
The use cases developed to identify system interactions are used as
a basis for testing.
The sequence diagrams associated with these use cases identify the
components and interactions that are being tested.
This is illustrated in the example given below.
The weather station is requested to report summarized weather data;
the weather station performs a sequence of operations when it
responds to a request to collect data.
Contd…
Fig. Collect weather data sequence chart
The sequence diagram is used to identify the operations that will be tested:
SatComms:request->WeatherStation:reportWeather->
Commslink:Get(summary)-> WeatherData:summarize
Contd…
The sequence diagram helps to design test cases that show
what inputs are required and what outputs are created.
2 Test-driven development (TDD)
TDD is an approach to program development in which you
interleave testing and code development.
Tests are written before code
The code will be developed incrementally along with a test.
We don’t move to the next increment until the previous increment
passes its test.
Contd…
The fundamental TDD process steps are shown below (see figure):
1. Start by identifying the increment of functionality required.
2. Write a test for this functionality
3. Then run this test along with all other tests that have been
implemented.
4. Then implement the functionality and re-run the test.
5. Then we move to the next increment.
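One TDD increment can be sketched as follows, assuming a hypothetical `leap_year` function as the increment of functionality; note that the test is written before the code:

```python
# A sketch of one TDD increment. leap_year is a hypothetical example
# of "the increment of functionality required" and is not from the source.

# Step 2: write the test for the increment before any code exists.
def test_leap_year():
    assert leap_year(2024) is True
    assert leap_year(2023) is False
    assert leap_year(1900) is False   # century years are not leap years...
    assert leap_year(2000) is True    # ...unless divisible by 400

# Step 4: implement just enough functionality to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Re-run the test; move to the next increment only once it passes.
test_leap_year()
```

In practice each such test joins the growing suite (step 3), so every later increment re-runs all earlier tests, which is what gives TDD its regression-testing benefit.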
Fig. Test-driven development
Contd…
Benefits of TDD
1. Code coverage
• Every code segment has at least one test so all the code in the
system has been executed.
2. Regression testing
• A test suite is developed incrementally as a program is
developed.
3. Simplified debugging
• When a test fails, it should be obvious (clear) where the
problem lies.
4. System documentation
• The tests themselves are a form of documentation
3 Release testing
It is the process of testing a particular release of a system that is
intended for use outside of the development team.
It shows that the system delivers its
• specified functionality,
• performance and
• that it does not fail during normal use.
Release testing is usually a black-box testing process
It is also called functional testing.
Distinctions between Release testing and system testing
1. A separate team should be responsible for release testing.
2. System testing is defect testing, whereas release testing is validation
testing.
3.1 Requirements based testing
All the requirements should be testable
Requirements-based testing is an approach in which test cases are
designed for each requirement.
It will demonstrate that the system has properly implemented its
requirements.
Example
MHC-PMS requirements concerned with checking for drug
allergies:
If a patient is known to be allergic to any particular medication,
then prescription of that medication shall result in a warning
message being issued to the system user.
If a prescriber chooses to ignore an allergy warning, they shall
provide a reason why this has been ignored.
Contd…
To check these requirements, we need to develop several tests:
1. Set up a patient record with no known allergies. Prescribe
medication (no warning message)
2. Set up a patient record with a known allergy. Prescribe
the medication (warning message)
3. Set up a patient record in which allergies to two or more
drugs are recorded. ( Correct warning for each drug)
4. Prescribe two drugs that the patient is allergic to. (two
correct warnings)
5. Prescribe a drug that issues a warning and ignore that
warning. The system should ask the user to give an
explanation.
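Tests 1 and 2 above can be sketched against a toy, in-memory model of the prescribing check; `prescribe` is an assumption invented for this illustration, not the actual MHC-PMS interface:

```python
# Hypothetical, in-memory model of the MHC-PMS allergy check,
# written only to make requirements-based tests 1 and 2 concrete.
def prescribe(patient_allergies, drug):
    """Return a warning message if the drug is in the allergy list, else None."""
    if drug in patient_allergies:
        return f"WARNING: patient is allergic to {drug}"
    return None

# Test 1: patient record with no known allergies -> no warning message.
assert prescribe(set(), "aspirin") is None

# Test 2: patient record with a known allergy -> warning message issued.
assert prescribe({"aspirin"}, "aspirin") == "WARNING: patient is allergic to aspirin"
```

Each test traces directly back to a clause of the requirement, which is what makes requirements-based testing a validation activity: a passing suite demonstrates that the stated requirement has been implemented.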
3.2 Scenario testing
Typical scenarios of use are used to develop the test cases.
A scenario is a story that describes one way in which the system
might be used.
If scenarios were used for discovering the requirements, then we
can reuse them as testing scenarios.
Scenario testing should motivate stakeholders to relate themselves
to the scenario
Example (refer figure below)
A possible scenario from the MHC-PMS, describes the way in
which the system may be used on a home visit.
A usage scenario for the MHC-PMS
Kate is a nurse who specializes in mental health care. One of her
responsibilities is to visit patients at home to check that their treatment is effective
and that they are not suffering from medication side-effects.
On a day for home visits, Kate logs into the MHC-PMS and requests
that the records for these patients be downloaded to her laptop. She is prompted for
her key phrase to encrypt the records on the laptop.
One of the patients that she visits is Jim, who is being treated with
medication for depression. Jim feels that the medication is helping him but believes
that it has the side-effect of keeping him awake at night. Kate looks up Jim’s record
and is prompted for her key phrase to decrypt the record. She checks the drug
prescribed and queries its side effects. Sleeplessness is a known side effect so she
notes the problem in Jim’s record and suggests that he visits the clinic to have his
medication changed. He agrees so Kate enters a prompt to call him when she gets
back to the clinic to make an appointment with a physician. She ends the consultation
and the system re-encrypts Jim’s record.
After finishing her consultations, Kate returns to the clinic and uploads the
records of the patients visited to the database. The system generates a call list for Kate of
those patients whom she has to contact for follow-up information and clinic
appointments.
Contd…
Features tested by the above scenario
1. Authentication by logging on to the system.
2. Downloading and uploading of specified patient records to a
laptop.
3. Home visit scheduling.
4. Encryption and decryption of patient records on a mobile
device.
5. Record retrieval and modification.
6. Links with the drugs database that maintains side-effect
information.
7. The system for call prompting.
3.3 Performance testing
It is designed to ensure that the system can process its intended
load.
It usually involves running a series of tests where the load is
steadily increased until the system performance becomes
unacceptable.
It requires constructing an operational profile, which is a set of tests
that reflects the mix of work that will be handled by the system.
For example, if 90% of the transactions in a system are of type A, 5%
of type B, and the remainder of types C, D, and E,
then you should design the operational profile so that the vast
majority of tests are of type A.
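Building such an operational profile can be sketched as a weighted random mix of transaction types; the profile dictionary and function names below are illustrative assumptions:

```python
# A sketch of constructing an operational profile: a weighted mix of
# test transactions matching the 90/5/remainder split described above.
# PROFILE and operational_profile are invented names for illustration.
import random

PROFILE = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def operational_profile(n_tests, profile=PROFILE, seed=42):
    """Draw n_tests transaction types with frequencies matching the profile."""
    rng = random.Random(seed)  # fixed seed so the test mix is reproducible
    types = list(profile)
    weights = list(profile.values())
    return rng.choices(types, weights=weights, k=n_tests)

mix = operational_profile(1000)
# The vast majority of generated tests are of type A, as the profile requires.
assert mix.count("A") > 800
```

A fixed seed keeps the mix reproducible across performance-test runs, so changes in measured throughput reflect the system, not a different workload.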
The system may also be deliberately overloaded to test its failure
behaviour. This is known as ‘stress testing’.
Contd…
Stress testing has two functions:
1. It tests the failure behavior of the system. In case of overloading,
the system failure should not cause data corruption or loss of
user services. ( ‘fail-soft’ rather than collapse)
2. Stressing the system may reveal defects that would not
normally be discovered.
4. User testing
A testing process in which users or customers provide input and advice
on system testing.
User testing is essential, because the influences from the user’s working
environment have a major effect on the reliability, performance,
usability and robustness of a system.
Types of user testing
1. Alpha testing
Users work with the development team to test the software at the
developer’s site.
2. Beta testing
A release of the software is made available to users to experiment
with and to raise the problems they discover with the system
developers.
It will discover interaction problems between the software and
features of the environment where it is used.
3. Acceptance testing
Customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer
environment.
Contd…
Six stages in the acceptance testing process
1. Define acceptance criteria:
It should be done before the contract is signed (between customer and developer).
2. Plan acceptance testing :
It involves deciding on the resources, time, and budget for acceptance testing and
establishing a testing schedule.
3. Derive acceptance tests :
• Design the tests to check whether or not a system is acceptable. These tests should test
both the functional and non-functional characteristics (performance)
4. Run acceptance tests :
• A user testing environment has to be set up at the developer's site to run these tests
on the system.
5. Negotiate test results :
If all the acceptance tests pass, there are no known problems and the system can be handed
over. More commonly, some problems will be discovered, so the developer and customer have to
negotiate to decide if the system is good enough to be put into use.
6. Reject/accept system :
The developers and the customer meet to decide whether or not the system should be accepted.