An Excellent Compilation Of: Software Testing Concepts (Manual Testing)
By
Narsi Reddy
Published by
www.softwaretestinggenius.com
Software Quality: -
Software should:
i. Meet customer requirements
ii. Meet customer expectations
iii. Be economical to purchase
iv. Be released on time
Together, these factors define "QUALITY".
SRS: - The SRS defines the functional requirements to be developed and the
system requirements to be used.
Reviews: - A document-level testing technique. During a review, the responsible people estimate the completeness & correctness of the corresponding document.
HLD: - The HLD document defines the overall architecture of the system.
This overall design is also known as Architectural Design / External Design.
LLD: - The LLD documents define the internal structure of every module or functionality.
[Diagram: Verification vs. Validation — the development documents (BRS/CRS/URS, SRS, HLD, LLDs) are verified through reviews, and the corresponding testing techniques, starting with Unit Testing, validate the software against them]
Eg:
if (condition)
    ...
else
    ...
Two programs that perform the same task can also be compared:
i. c=a; a=b; b=c; (more memory usage, for fast running)
ii. a=a+b; b=a-b; a=a-b; (low memory usage)
4. Mutation Testing:
During this test, the corresponding programmers estimate the completeness & correctness of a program's testing: they make small changes (mutations) to the program and check whether the existing tests catch them.
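As a hedged illustration (the class and values below are invented, not from the original text), a mutant changes one small detail of the program; if every existing test still passes on the mutant, the testing is incomplete:

public class MutationExample {
    // Original: ages 18 and above are accepted.
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // Mutant: one relational operator deliberately changed.
    static boolean isAdultMutant(int age) {
        return age > 18; // mutation: >= replaced by >
    }

    public static void main(String[] args) {
        // Only the boundary value 18 distinguishes the two versions.
        System.out.println(isAdult(18));       // true
        System.out.println(isAdultMutant(18)); // false
    }
}

A test set that never uses the boundary value 18 returns the same results for both versions, so it cannot "kill" this mutant; that gap tells the programmers their testing is incomplete.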
IV. Integration Testing:
After completion of the dependent programs' development & unit testing, the programmers interconnect them. Then the programmers verify the interconnection of the programs in any one of the below four ways.
1. Top-Down Approach
2. Bottom-Up Approach
3. Hybrid Approach
4. System Approach
1. Top-Down Approach:
The interconnection of the main program & some sub-programs is called the Top-Down Approach. Programmers use temporary programs called stubs in place of sub-programs that are still under construction. The other name for stubs is "Called Programs". A stub returns the control to the main program.
Eg: [Diagram: main program calling sub-programs; a stub replaces the sub-program that is under construction]
* If, at any point in the interconnection process, a sub-module is under construction, then the developers create a temporary program in place of that sub-module; this temporary program is called a "Stub".
2. Bottom-Up Approach:
The interconnection of the internal sub-programs without using the main program is called the Bottom-Up Approach. In this approach, programmers use a temporary program instead of the main program, which is under construction. The temporary program is called a "Driver" or "Calling Program".
Eg: [Diagram: a driver calls the completed sub-programs while the main program is under construction]
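A matching sketch for the driver (again with hypothetical names): the sub-programs are finished, the main program is not, so a throwaway driver calls the sub-programs to verify their interconnection:

// Finished sub-programs whose interconnection is being verified.
class TaxModule {
    static double addTax(double amount) {
        return amount * 1.10; // 10% tax, an assumed rate for this sketch
    }
}

class BillingModule {
    static double bill(double amount) {
        return TaxModule.addTax(amount); // billing calls the tax module
    }
}

// Driver ("calling program"): a temporary stand-in for the main program.
public class BillingDriver {
    public static void main(String[] args) {
        System.out.println("Billed: " + BillingModule.bill(100.0));
    }
}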
3. Hybrid Approach:
Also known as the "Sandwich Approach", this is a combination of the Top-Down & Bottom-Up approaches.
Eg: [Diagram: a driver above and a stub below sandwich the modules under test, with the remaining programs under construction]
4. System Approach:
It is also known as the "Big Bang Approach". In this approach, the programmers interconnect the programs only after completion of all programs' development & unit testing.
Build:
A finally integrated set of all programs is called a "Build" or AUT
(Application Under Testing).
5. System Testing: -
After completion of integration testing, a separate testing team receives a software build from the development team. This team applies a set of black-box testing techniques to validate that software build. System testing is classified into 3 categories.
1. Usability Testing
2. Functional Testing
3. Non-Functional Testing
1. Usability Testing:
In general, the separate testing team starts test execution with usability
testing. During this test, the team concentrates on user-friendliness of the software
build screens. The usability testing consists of 2 sub-tests.
NOTE: In general, the testing team conducts user-interface testing first and then conducts the functional & non-functional tests. At the end of the testing process, the testing team concentrates on manuals support testing.
User Interface Testing
↓
Functional & Non-Functional Testing
↓
Manuals Support Testing
2. Functional Testing:
A moderate testing level during which the testing team concentrates on the customer requirements in terms of functionality. During this test, the testing team applies the below sub-tests to the software build.
i) Functionality Testing
ii) Sanitation Testing
i) Functionality Testing: -
During this test, the testing team concentrates on the correctness of every functionality with respect to the requirements. In this test, the testing team follows the below coverage:
- Manipulations coverage (returning correct output)
3. Non-Functional Testing:
A complex level in system testing during which the testing team concentrates on the extra characteristics of the software:
i. Recovery Testing
ii. Compatibility Testing
iii. Configuration Testing
iv. Inter-system Testing
v. Installation Testing
vi. Load Testing
vii. Stress Testing
viii. Data Volume Testing
ix. Parallel Testing
i. Recovery Testing: -
It is also known as "Reliability Testing". During this test, the testing team validates whether the software build changes from abnormal mode back to normal mode.
[Diagram: the build is forced from normal mode into abnormal mode and is expected to recover back to normal mode]
ii) Compatibility Testing: -
Also known as "Portability Testing". During this test, the testing team validates whether the software build runs on the customer's expected platforms or not.
Platforms are operating systems, compilers, browsers & other system software.
iv) Inter-system Testing: -
Eg: [Diagram: Accounts S/W and Loans S/W sharing common resources, connected front-end to back-end]
v) Installation Testing: -
Eg: [Diagram: a server with connected clients]
vi) Load Testing: -
* How much time is taken by the server to respond to each of the clients?
vii) Stress Testing: -
The execution of our software build in the customer-expected configured environment, under various levels of load, to estimate reliability is called "stress testing".
Eg: [Diagram: a server handling clients at increasing connectivity levels]
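Load and stress checks are normally tool-driven rather than hand-coded, but as a hedged sketch of the idea (every name and timing below is invented): fire a customer-expected number of simultaneous clients at the build and record each response time, then raise the load level for stress testing:

import java.util.concurrent.*;

public class LoadSketch {
    // Simulated server call; a real test would hit the actual build.
    static void serverCall() throws InterruptedException {
        Thread.sleep(50); // assumed processing time in milliseconds
    }

    public static void main(String[] args) throws Exception {
        int clients = 10; // customer-expected connectivity level
        ExecutorService pool = Executors.newFixedThreadPool(clients);
        for (int i = 0; i < clients; i++) {
            final int id = i;
            pool.submit(() -> {
                long start = System.nanoTime();
                try { serverCall(); } catch (InterruptedException ignored) { }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("client " + id + " responded in " + ms + " ms");
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}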
viii) Data Volume Testing: -
Eg: [Diagram: accounts software (A/C S/W) handling an increasing volume of data]
Alpha (α) Testing              | Beta (β) Testing
1. By real customers           | 1. By model customers
2. In the development site     | 2. In the model customer site
3. Suitable for applications   | 3. Suitable for products
a) Port Testing: -
The corresponding release team conducts port testing at the customer site. During this test, the release team observes the below factors:
- Compact installation
- Overall functionality
- Input device handling
- Output device handling (monitor, printer, etc.)
- Secondary storage device handling (floppy disk, CD-ROM, etc.)
- Operating system handling
- Co-execution with other software
[Diagram: software changes — an Enhancement or Missed Defects lead to Impact Analysis, followed by Perform Software Changes]
7. Ad-hoc Testing: -
In general, every testing team conducts planned testing, but the testing team sometimes adopts informal testing due to some challenges or risks.
Eg: Lack of time, lack of resources, lack of team size, lack of skill, etc.
This informal testing is also known as Ad-hoc testing. There are different styles of Ad-hoc testing:
a) Monkey Testing
b) Buddy Testing
c) Exploratory Testing
d) Pair Testing
e) Defect Seeding / Debugging
a) Monkey Testing: -
Due to lack of time, the testing team concentrates on some of the main activities of the software build for testing. This style of testing is known as "Monkey Testing", "Chimpanzee Testing" or "Gorilla Testing".
b) Buddy Testing:-
Due to lack of time, the management groups programmers & testers
as "Buddies". Every buddy group consists of programmers & testers.
Eg: 1:1 (or) 2:1 (or) 3:1 (preferable)
c) Exploratory Testing: -
Due to lack of proper documentation of the software being built, the test engineers depend on past experience, discuss with others, browse the Internet, operate similar projects, and contact customer-side people if possible.
This style of testing is called "Exploratory Testing".
d) Pair Testing: -
Due to lack of knowledge of the project domain, the management pairs a senior tester with a junior tester; together they develop and conduct the tests. This is called Pair Testing.
e) Defect Seeding: -
To estimate the efficiency of the test engineers, the programmers add some known bugs to the build. This task is called defect seeding / debugging.
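A common way to use seeding, shown as an illustrative calculation (the numbers are invented, not from the original text): if the programmers seed 10 known bugs and the test engineers find 8 of them (a detection ratio of 8/10 = 0.8) while also finding 40 real bugs, the estimated total number of real bugs is 40 / 0.8 = 50, i.e., roughly 10 real bugs are likely still hidden in the build.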
Testing Terminology: -
1. Test Strategy
2. Test Plan
3. Test Case
4. Test Log
5. Error, Defect & Bug
6. Summary Report
7. Test Bed
8. Test Suite
9. Testing Policy
10. Testing Process
11. Testing Standard
12. Testing Measurement
1. Test Strategy: -
It is a company-level document, developed by the Quality Analyst. A test strategy defines the testing approach followed by the testing team
(or)
(an approach to be followed by the testing team in testing a build).
2. Test Plan: -
A schedule to be followed by the testing team in testing.
3. Test Case: -
A test condition to be applied to the software build.
6. Summary Report: -
Defines the work progress.
Eg: Daily reports, weekly reports and monthly reports.
7. Test Bed: -
The total testing information together with the test environment is called the test bed.
8. Test Suite: -
The combination of all the different test cases is called a test suite.
9. Testing Policy: -
It is a company-level document, developed by quality control people (mostly management). The testing policy defines the testing objectives.
NOTE: The other name for the test case document is "functional test plan".
[Diagram: the failed tests are raised as defect reports and sent to the developers]
Testing Process: -
Test Initiation → Test Planning → Test Design → Test Execution → Test Reporting → Test Closure

[Diagram: the development stages — Analysis (SRS), Design (HLD & LLDs), Coding & Unit Testing, Integration Testing — followed by the testing stages: Test Initiation → Test Planning → Test Design → Test Execution (with Test Reporting) → Test Closure]
I. Test Initiation: -
In general, the system testing process starts with test initiation. In this stage, the Project Manager category people select the reasonable tests to be applied. After selecting the reasonable tests, the manager prepares the "Test Strategy Document", also known as the "Test Methodology".
[Chart: effort split — roughly 64% development & maintenance, 36% testing]
10. Risks & Assumptions: A list of the analyzed risks & the solutions to overcome them.
11. Training Plan: The required training sessions for the selected testing team.
Test Factors:
Test factor means a testing issue or a testing topic. There are at most 15 topics used to define quality software.
1. Authorization: The software allows valid users & prevents invalid users.
2. Access Control: The authorities of valid users to use specific functionality.
3. Audit Trail: Maintains metadata about user operations.
4. Data Integrity: Taking inputs of the correct size & type.
5. Correctness: Returning correct outputs.
6. Continuity of Processing: The integration of the internal functionalities.
7. Coupling: Co-existence with other software to share common resources.
8. Ease of Use: User-friendly screens.
9. Ease of Operations: Installation, un-installation, downloading.
10. Portable: Runs on different platforms.
11. Performance: Speed of processing.
12. Reliability: Recovery from abnormal situations.
13. Service Levels: The order of the functionalities in which service is given to the customer.
14. Maintainable: Serviceable to customers for a long time.
15. Methodology: Whether the testing team is following quality standards or not while testing.
Case Study #1
Case Study #2
In the above example nine factors are finalized to be applied in the system
testing of a project.
[Diagram: test planning — study the development documents (BRS, SRS), form the testing team, and identify risks]
a) Team Formation: -
b) Identify Risks: -
After completion of reasonable testing team formation, the test lead
concentrates on risks at the team level.
Eg:
Risk 1: Lack of domain knowledge in the testing team.
Risk 2: Lack of time
Risk 3: Lack of resources
Risk 4: Lack of documentation
Risk 5: Delays in delivery
Risk 6: Lack of rigorous development process
Risk 7: Lack of communication
Format:
1. Test Plan ID: The identification no. or name.
2. Introduction: About the project.
3. Test Items: The names of the modules or functionalities or services.
4. Features to be tested: The names of the modules which are selected for
testing.
Eg: a, b, c modules are all to be tested.
5. Features not to be tested: The names of the remaining modules, which are not selected for testing.
Eg: V1 → V2 = V1 + some extra modules (only these extra modules are to be tested).
6. Approach: The list of selected testing techniques, with respect to the test
strategy (selected by the Project Manager).
7. Test Environment: The hardware & software required to apply the selected
tests on the specified features.
Eg: [Table: features a, b, c, d mapped to the tests selected for each feature (e.g. UI tests, functionality tests, load tests, compatibility tests)]
8. Entry Criteria:
- Test cases are prepared completely & correctly
- The test environment is established
- A stable build is received from the developers
9. Suspension Criteria:
- The test environment is not supporting
- A show-stopper defect occurred (without resolving the problem, we cannot continue testing)
- Too many pending defects (quality gap)
10. Exit Criteria:
- All modules are tested
- The planned duration is met
- All major bugs are resolved
11. Test Deliverables: The names of the test documents to be prepared by the test
engineers.
Eg:
- Test scenarios
- Test case documents
- Test logs
- Defect reports
- Summary reports, etc.
12. Staff & Training Needs: The names of the selected test engineers and the
required training sessions for them.
- Requirements-oriented review
- Testing-techniques-oriented review
- Risks-oriented review
[Diagram: the test cases are prepared from the BRS, SRS, and HLD & LLDs]
Functional Specification 1: -
A login process allows a user ID & password to authorize users. Per the customer requirements, the user ID takes alphanumerics in lower case, from 4 to 16 characters long. The password object takes alphabets in lower case, from 4 to 8 characters long. Prepare test case titles or scenarios.
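As a hedged sketch of how these two rules can be encoded for checking (the class and method names are hypothetical; the regular expressions simply restate the ranges given above):

public class LoginValidator {
    // User ID: lower-case alphanumerics, 4 to 16 characters long.
    static boolean isValidUserId(String userId) {
        return userId != null && userId.matches("[a-z0-9]{4,16}");
    }

    // Password: lower-case alphabets, 4 to 8 characters long.
    static boolean isValidPassword(String password) {
        return password != null && password.matches("[a-z]{4,8}");
    }

    public static void main(String[] args) {
        // Boundary values taken from the specification.
        System.out.println(isValidUserId("abcd"));        // true  (minimum length 4)
        System.out.println(isValidUserId("abc"));         // false (below minimum)
        System.out.println(isValidPassword("abcdefgh"));  // true  (maximum length 8)
        System.out.println(isValidPassword("abcdefghi")); // false (above maximum)
    }
}

The test case titles follow the same boundaries: verify the user ID at 3, 4, 16 and 17 characters, and the password at 3, 4, 8 and 9 characters.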
Functional Specification 2: -
In an insurance application, users can apply for different types of policies. Per the customer requirements, the system asks for the age when a user selects a type of insurance. The age value should be >17 years and should be <60 years.
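A minimal sketch of the age rule with its boundary values (hypothetical names; the valid range 18 to 59 follows from >17 and <60):

public class PolicyAgeCheck {
    // Valid age: greater than 17 and less than 60, i.e. 18..59.
    static boolean isValidAge(int age) {
        return age > 17 && age < 60;
    }

    public static void main(String[] args) {
        System.out.println(isValidAge(17)); // false (lower boundary - 1)
        System.out.println(isValidAge(18)); // true  (lower boundary)
        System.out.println(isValidAge(59)); // true  (upper boundary)
        System.out.println(isValidAge(60)); // false (upper boundary + 1)
    }
}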
Functional Specification 3: -
In a shopping application, users can make item purchases. Per the customer requirements, the system allows users to select an item no. and enter a quantity of up to 10 items. The system returns the price of each item and the total amount with respect to the given quantity.
Test case title 3: verify total = price of one item × given quantity.
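To make test case title 3 concrete, a small sketch of the manipulation being verified (the names and prices are invented):

public class PurchaseCalculator {
    // Expected manipulation: total = price of one item * given quantity.
    static double total(double unitPrice, int quantity) {
        if (quantity < 1 || quantity > 10) {
            throw new IllegalArgumentException("quantity must be 1..10");
        }
        return unitPrice * quantity;
    }

    public static void main(String[] args) {
        // Expected value for 3 items priced 25.50 each: 76.5.
        System.out.println(total(25.50, 3));
    }
}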
Functional Specification 4: -
A door opens when a person comes in front of the door & closes when the person goes inside.
Functional Specification 5: -
Test case title 1: verify if all the windows are closed when shutting down.
Test case title 2: verify shutdown option selection using Alt+F4.
Test case title 3: verify shutdown option selection using run command.
Test case title 4: verify shutdown operation.
Test case title 5: verify shutdown operation when a process is running.
Test case title 6: verify shutdown operation using power off button.
Test case title 2: Verify the prefix (it doesn't start with 0 or 1).
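A hedged sketch of the prefix rule (hypothetical names; it assumes the field is a digit string whose first digit must not be 0 or 1):

public class PrefixCheck {
    // Valid prefix: digits only, not starting with 0 or 1.
    static boolean isValidPrefix(String prefix) {
        return prefix != null && prefix.matches("[2-9][0-9]*");
    }

    public static void main(String[] args) {
        System.out.println(isValidPrefix("040")); // false (starts with 0)
        System.out.println(isValidPrefix("198")); // false (starts with 1)
        System.out.println(isValidPrefix("984")); // true
    }
}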
Functional Specification 8: -
Money withdrawal from ATM with all rules & regulations.
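The rules & regulations are not listed in the specification, so the sketch below assumes a few typical ones purely for illustration (sufficient balance, amounts in multiples of 100, a per-transaction limit); every name and limit is hypothetical:

public class AtmWithdrawal {
    static final double PER_TRANSACTION_LIMIT = 20000.0; // assumed limit

    // Returns the new balance, or throws if an assumed rule is violated.
    static double withdraw(double balance, double amount) {
        if (amount <= 0 || amount % 100 != 0) {
            throw new IllegalArgumentException("amount must be a positive multiple of 100");
        }
        if (amount > PER_TRANSACTION_LIMIT) {
            throw new IllegalArgumentException("amount exceeds the per-transaction limit");
        }
        if (amount > balance) {
            throw new IllegalStateException("insufficient balance");
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        System.out.println(withdraw(5000.0, 1000.0)); // prints 4000.0
    }
}

Each assumed rule yields its own test case titles: a valid amount, an amount that is not a multiple of 100, an amount above the limit, and an amount above the balance.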
Functional Specification 9: -
After validation of the above fields, the system returns a user ID. This user ID will be in the below format.
Invalid inputs to verify include:
- Special characters
- Blank field
- Etc.
11. Test case pass or fail criteria: - The final result of the test case after execution.
Note: -
1. In general, the test engineers fill only some of the fields, due to the repetition of field values & time constraints.
2. If the test case is going to cover an object, the test engineers prepare a data matrix. If the test case is regarding an operation or functionality, then the test engineers prepare a test procedure.
Object  | Valid characters | Min          | Max
User ID | a-z, A-Z         | 4 characters | 16 characters
Document 2 : -
Format:-
1. Test case ID: - unique no. or name for future reference.
2. Test case Name: - the title of test case
3. Feature:- Name of corresponding module
4. Test Suite ID: - The name of the test batch of which this case is a member.
5. Priority: - The importance of this particular test case in terms of
functionality. (This concept was first introduced in CTS,
Chennai.)
P0 - Basic functionality test cases
P1 - General functionality test cases
P2 - Cosmetic functionality test cases
6. Test Environment: - The hardware & software required to execute this case.
7. Test Effort: - Person / hour; the time taken by a person to execute the case
(20 min is the average time to execute one test case, as per
ISO & CMM standards).
8. Test Duration: - The date and time at which to execute this case on the build.
9. Test Setup or Pre-condition: - The necessary tasks to do before starting the
execution of this case on the build.
10. Test Procedure or Data Matrix: -
[Two-column note: the left-hand fields are filled in the design phase; the right-hand fields are filled during test execution]
Data Matrix: -
Document 1:
Document 2:
Document 3:-
1. Test case ID: - TC_FD_Thiru_06th Jan_3
2. Test case Name: - Verify Tenure
3. Test Suite ID: - TS_FD
4. Priority: - P0
Document 5:-
Document 6: -
[Diagram: use-case-driven development — BRS → Use Cases → Coding (Build)]
After completion of the development of all reasonable use cases with complete & correct information, the separate testing team concentrates on test case selection & documentation.
NOTE:
1. The functional & system specification based test case design is suitable
for general organizations, because subject exploration is possible in
general organizations. The use case based test case design is suitable
for testing outsourcing companies, because subject exploration is critical
for them.
IV. Test Execution: -
After completion of test design & review, the testing team conducts a formal meeting with the development team. In this meeting, the development & testing teams concentrate on the build-release process shown below.
[Diagram: the development environment, testing environment and customer environment connected through a server; the testing documents and software builds are shared via the server]
From the above model, the testing people download the build from a common repository on the server, with permissions. In the common repository, the development people maintain the old build coding & the modified build coding. To distinguish the old build & the modified build, the development team assigns unique version numbers to the builds. For this build version control, the development team uses version control tools (VSS - Visual SourceSafe). The development team sends a release note to the testing team for every modified build. This release note provides information about the changes in the build.
[Diagram: build levels between development and testing —
Initial Build → Level 0 (Sanity Testing)
Stable Build → Level 1 (Comprehensive / Real Testing), producing Defect Reports
Modified Build (after defect resolving) → Level 2 (Regression Testing)]
NOTE:
1. P0 priority test cases indicate functionality testing, P1 priority test
cases indicate non-functional testing & P2 test cases indicate usability
testing.
2. Level 0 (Sanity) runs on the initial build.
Level 1 (Comprehensive) runs on the stable build.
Level 2 (Regression) runs on each modified build.
Level 3 (Final Regression / Post-mortem) runs on the master build.
User Acceptance Testing runs on the golden build.
- Understandability
- Simplicity
- Operability
- Observability
- Consistency
- Maintainability
- Automatable (optional)
- Controllability
- Passed: all expected values in the test case are equal to the actual values of the build.
- Failed: any one expected value varies from the actual values of the build.
- Blocked: test case execution is postponed due to an incorrect, failed functionality.
Test engineers prepare the above entries in the test log for every test case in every test batch.
[Diagram: the test cases are arranged as test batches and executed manually from Level 0 through Level 3; each execution outcome is logged as Skip, Passed, Failed, Blocked, Closed, or Partial Pass / Fail (or Warning)]
Case 1:
If the defect resolved by the development team is of high severity, the test engineers re-execute all P0, all P1 & carefully selected P2 test cases on that modified build, with respect to the modifications specified in the release note.
Case 2:
If the defect resolved by the development team is of medium severity, then the test engineers re-execute all P0, carefully selected P1 & some P2 test cases on that modified build, with respect to the modifications specified in the release note.
Case 3:
If the defect resolved by the development team is of low severity, then the test engineers re-execute some P0, some P1 & some P2 test cases on that modified build, with respect to the modifications specified in the release note.
Case 4:
In this Level 2 regression testing, the testing people use two indicators for the modified build: Check-In & Check-Out.
V. Test Reporting:
During Level 1 & Level 2 test execution, the test engineers report mismatches to the development team. In this reporting, the test engineers use a format similar to the one given below.
Format:
NOTE: In the above format, the test engineers fill all fields, but the "Suggested Fix" field is optional & the "Priority" field is modifiable by the developers.
Defect Submission Process: -
[Diagram: defect submission — Test Engineer → Test Lead → Project Manager → Team Lead → Programmer]
[Diagram: defect life cycle — New → Open / Deferred / Closed; the time between the states contributes to the Defect Age]
** Defect Density: -
The average no. of defects found in a module or function is called
defect density.
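For example (illustrative numbers, not from the original text): if 30 defects are found across 6 modules, the defect density is 30 / 6 = 5 defects per module; a module whose density is far above this average is a candidate for re-review and re-testing.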