Basic Concepts of Testing-306792

The document provides an overview of different types of software testing including black box testing, white box testing, unit testing, integration testing, functional testing, system testing, end-to-end testing, and others. It also summarizes common software development models like the V-Model and Waterfall model. Finally, it outlines the typical phases of the software development life cycle including requirements gathering, design, implementation, testing, and release.
Copyright
© Attribution Non-Commercial (BY-NC)
BASIC CONCEPTS OF

TESTING
SOFTWARE TESTING TYPES

 Black Box Testing- Internal system design is not considered in this type
of testing. Tests are based on requirements and functionality.
 White Box Testing- This testing is based on knowledge of the internal
logic of an application’s code. Also known as Glass Box Testing. The tester
must know the internal workings of the software and code for this type of
testing. Tests are based on coverage of code statements, branches, paths,
and conditions.
 Unit Testing- Testing of individual software components or modules.
Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. May
require developing test driver modules or test harnesses.
 Incremental Integration Testing- A bottom-up approach to testing, i.e.
continuous testing of an application as new functionality is added.
Application functionality and modules should be independent enough to be
tested separately. Done by programmers or by testers.
 Integration Testing- Testing of integrated modules to verify combined
functionality after integration. Modules are typically code modules, individual
applications, client and server applications on a network, etc. This type of
testing is especially relevant to client/server and distributed systems.
 Functional Testing- This type of testing ignores the internal parts and
focuses on whether the output meets the requirements. A black-box type of
testing geared to the functional requirements of an application.
 System Testing- Entire system is tested as per the requirements. Black-
box type testing that is based on overall requirements specifications, covers
all combined parts of a system.
 End-to-End Testing- Similar to system testing, involves testing of a
complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if appropriate.
 Sanity Testing- Testing to determine whether a new software version is
performing well enough to accept it for a major testing effort. If the
application crashes during initial use, the system is not stable enough for
further testing, and the build or application is assigned back for a fix.
 Regression Testing- Testing the application as a whole after a modification
to any module or functionality. Since it is difficult to cover the entire system
manually, automation tools are typically used for regression testing.
 Acceptance Testing- Normally this type of testing is done to verify that the
system meets the customer-specified requirements. The user or customer
performs this testing to determine whether to accept the application.
 Load Testing- A performance test that checks system behavior under
load. Involves testing an application under heavy loads, such as testing a
web site under a range of loads to determine at what point the system’s
response time degrades or fails.
 Stress Testing- The system is stressed beyond its specifications to check
how and when it fails. Performed under heavy load, such as input beyond
storage capacity, complex database queries, or continuous input to the
system or database.
 Performance Testing- A term often used interchangeably with ‘stress’
and ‘load’ testing. Checks whether the system meets performance
requirements. Various performance and load tools are used for this.
 Usability Testing- A user-friendliness check. The application flow is
tested: can a new user understand the application easily, and is proper help
documented wherever the user may get stuck? Basically, system navigation
is checked in this testing.
 Install/Uninstall Testing- Tests full, partial, or upgrade
install/uninstall processes on different operating systems and under
different hardware and software environments.
 Recovery Testing- Testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
 Security Testing- Checks whether the system can be penetrated by
hacking. Tests how well the system protects against unauthorized internal
or external access, and whether the system and database are safe from
external attacks.
 Compatibility Testing- Testing how well software performs in a
particular hardware/software/operating system/network environment and
in different combinations of the above.
 Comparison Testing- Comparison of product strengths and weaknesses
with previous versions or other similar products.
 Alpha Testing- An in-house virtual user environment may be created for
this type of testing, which is done toward the end of development. Minor
design changes may still be made as a result of such testing.
 Beta Testing- Testing typically done by end-users or others. The final
testing before releasing the application for commercial use.
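As a small illustration of the unit-testing idea described above, here is a minimal sketch in Python's standard `unittest` module. The `add` function is a hypothetical unit under test invented for this example, not something from the slides; the final two lines act as a simple test driver (harness).

```python
import unittest

# Hypothetical unit under test -- an assumed example function,
# not taken from the slides.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Black-box style checks: inputs and expected outputs only,
    # with no reference to the internal implementation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# A minimal test driver (harness) that runs the suite programmatically.
suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real project the tests would live in their own module and be run with a test runner such as `python -m unittest`; the programmatic runner here just keeps the sketch self-contained.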
MODELS OF SOFTWARE TESTING

V-Model
The V-model is a software development process that can be seen as an
extension of the waterfall model. Instead of moving down in a linear
way, the process steps are bent upwards after the coding phase, to form the
typical V shape. The V-Model demonstrates the relationships between each
phase of the development life cycle and its associated phase of testing.
The V-model deploys a well-structured method in which each phase is
implemented on the basis of the detailed documentation of the previous
phase. Testing activities like test design start at the beginning of the
project, well before coding, and therefore save a considerable amount of
project time.
Many of the process models currently used can be more generally
connected by the 'V' model where the 'V' describes the graphical
arrangement of the individual phases. The 'V' is also a synonym for
Verification and Validation.
By ordering the activities in time sequence and by abstraction level, the
connection between development and test activities becomes clear.
Activities lying opposite one another complement each other, i.e. they
serve as a base for the test activities. For example, the system test is
carried out on the basis of the results of the specification phase.
V-MODEL
WATERFALL MODEL

This is one of the first models of software development, presented by
W.W. Royce. The Waterfall model is a step-by-step method of achieving
tasks. Using this model, one can get on to the next phase of development
activity only after completing the current phase, and one can go back only
to the immediate previous phase. In the Waterfall Model, each phase of the
development activity is followed by Verification and Validation activities.
One phase is completed with its testing activities; then the team proceeds
to the next phase. At any point of time, we can move only one step back, to
the immediate previous phase. For example, one cannot move from the
Testing phase to the Design phase.
SPIRAL MODEL

In the Spiral Model, a cyclical and prototyping view of software development
is shown. Tests are explicitly mentioned (risk analysis, validation of the
requirements and of the development), and the test phase is divided into
stages. The test activities include module, integration, and acceptance
tests. However, in this model too, testing follows the coding, with the
exception that the test plan should be constructed after the design of the
system. The spiral model also identifies no activities associated with the
removal of defects.
SOFTWARE DEVELOPMENT LIFE CYCLE

 Project Planning
 Requirements Gathering – 1) Business Requirements 2) IT Requirements
 Design – 1) HLD 2) LLD
 Implementation (development, including unit testing)
 Testing – a) Integration Testing b) System Testing c) End-to-End Testing
d) User Acceptance Testing
 Package & Release
 Business Requirements- Constitute a specification of what the business
wants from the system. These are usually expressed in terms of the broad
outcomes the business requires rather than specific functions the system
may perform.
 IT Requirements- Technical derivatives of the business requirements.
They describe what the system, product, or process must do in order to
fulfill the business requirements, and are more specific and functional in
their terms.
 HLD- The High-Level Design gives the overall system design in terms of
the functional architecture and database design. It is very useful for
developers to understand the flow of the system. In this phase the design
team, the review team (testers), and the customers play a major role. The
entry criterion is the requirements document (the SRS); the exit criteria are
the HLD, project standards, the functional design documents, and the
database design document.
 LLD- During the detailed design phase, the view of the application
developed during high-level design is broken down into modules and
programs. Logic design is done for every program and then documented as
program specifications, and a unit test plan is created for every program.
The entry criterion for this phase is the HLD document; the exit criteria are
the program specifications and the unit test plan (LLD).
 Test Plan- A document that describes what will be tested, by whom,
and in which environment. It mainly defines the following:
a) Scope b) Approach c) Timelines d) Resources
 Test Strategy- The purpose of this document is to specify the overall
testing procedure to be followed for a particular project. It contains
details about the various testing phases involved, the testing methodologies
adopted, test environments, test timelines, the various testing tools
used, and the details of the deliverables.
 Test Case- A set of conditions under which a tester will determine
whether the requirements of an application are partially or fully satisfied.
A test case consists of several test steps.

 Defect- Failure of the application to meet the requirement.


 Test Log- contains the results of the testing being done.
 Traceability Matrix- Defines the mapping between customer
requirements and test cases. It is used by the testing team to verify how
far the prepared test cases cover the requirements and functionalities to
be tested. A single requirement may be covered by more than one test
case, and similarly a single test case can cover multiple requirements.
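A traceability matrix can be sketched as a simple mapping from requirement IDs to the test cases that cover them. The IDs below are invented for illustration; in practice they would come from the requirements document and the test case repository.

```python
# A minimal traceability matrix as a plain dictionary.
# Requirement and test-case IDs are invented for this sketch.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],  # one requirement covered by two test cases
    "REQ-002": ["TC-02"],           # one test case may cover several requirements
    "REQ-003": [],                  # not yet covered -- a coverage gap
}

def uncovered_requirements(matrix):
    """Return requirement IDs that no test case covers yet."""
    return [req for req, cases in matrix.items() if not cases]

def coverage_ratio(matrix):
    """Fraction of requirements covered by at least one test case."""
    covered = sum(1 for cases in matrix.values() if cases)
    return covered / len(matrix)
```

Walking the matrix this way is how the testing team spots requirements that still lack test cases before execution begins.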
Testing Life Cycle

 Understanding the requirements


 Preparing the test plan
 Designing the test cases
 Setting up the test environment
 Executing the test cases
 Preparing test logs
 Raising and tracking defects until closure and retesting the defects
after fix
Defect Life Cycle

Various States Of Defects


 New - State used when a new defect is first created. The defect has not
yet been reviewed by the development team.
 Open - Once the development team reviews the defect and starts working
on it, the defect is moved to the Open state.
 In Review - Development is assessing the issue and may take some time
to investigate the defect; during this time the defect can be moved to the
In Review state.
 Fixed - Once the defect is fixed by the developer, it is moved to the fixed
State.
 Built - Once the defect is fixed and if the corrected code is to be delivered
in the next build of the application then the defect can be moved to Built
state. In this case the expected Build date will be specified.
 Ready For Test - Once the defect is fixed and ready to be retested, then
it is moved to ready for test state.
 Rejected - If the defect is Invalid, then the defect is moved to Rejected
state.
 Closed - Once the defect is retested, and the issue for which the defect
was raised does not reoccur, then the defect can be moved to closed state.
 Reopen - If the defect is closed and the problem for which it was raised
occurs again, then the defect can be reopened by moving it to the Reopen
state.
 Deferred - Development has not yet scheduled a release time
frame. Defect will not be fixed in the foreseeable future. Defect will remain
open, because it needs to be addressed sometime in the life of the product.
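The defect life cycle above is essentially a state machine. The sketch below encodes one reasonable reading of the allowed transitions between the states listed; the exact transition rules vary between defect-tracking tools and are an assumption here.

```python
# Allowed defect-state transitions -- one plausible interpretation of the
# life cycle described above, not a standard specification.
TRANSITIONS = {
    "New":            {"Open", "Rejected", "Deferred"},
    "Open":           {"In Review", "Fixed", "Rejected", "Deferred"},
    "In Review":      {"Fixed", "Rejected"},
    "Fixed":          {"Built", "Ready For Test"},
    "Built":          {"Ready For Test"},
    "Ready For Test": {"Closed", "Reopen"},
    "Closed":         {"Reopen"},
    "Reopen":         {"Open"},
    "Rejected":       set(),
    "Deferred":       {"Open"},
}

class Defect:
    def __init__(self, title):
        self.title = title
        self.state = "New"  # every defect starts in the New state

    def move_to(self, new_state):
        """Move the defect to a new state, rejecting illegal transitions."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state

# Walk one defect through a typical happy path.
d = Defect("Login button unresponsive")
for step in ("Open", "Fixed", "Ready For Test", "Closed"):
    d.move_to(step)
```

Modelling the life cycle this way makes illegal moves (e.g. New straight to Closed) fail loudly instead of silently corrupting the defect's history.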
Defect Types

 Bug - Software defects such as problems resulting in system crashes, data


corruption, memory leaks, functionality not coded to specification or
performance degraded to the point that the product is unusable.
 Gap - Missing or unclear requirements. They may be missing from, or
unclear in, the BRs (Business Requirements), ITRs (IT Requirements), or both.
 Enhancement - Suggestions for new or improved functionality or requests
to fix functional problems when the software which caused these problems
was coded to specification.
 Development - A special type of defect used to provide development with
a way to create and track defects discovered by development during the
development part of iteration. It should be used only for defects which
would otherwise be classified as a bug but will not be counted as a bug.
 Issue - Application problems identified during the non-testing phases of
the application are reported as issues. These problems are identified,
recorded, and followed up until they are resolved.
 Risk - Software problems which might potentially lead to a defect having
severe impact on the application and could hinder the total testing process
are classified as risks.
Severity Levels
Once the defect type is chosen, the next step while logging the defect will be
to select the Severity level. Severity level depends upon the defect type
chosen.
 Critical - Catastrophic defect causing a total failure of the software or
unrecoverable data loss. No workaround is available.
 Major - Defect resulting in severely impaired functionality. A workaround
may exist but its use is unsatisfactory.
 Average - Defect causes failure of non-critical aspects of the system.
There is a reasonably satisfactory workaround.
 Minor - A minor defect of little significance, such as a cosmetic issue.
 Blocking - A defect which prevents the testing of functionality other than
the specific issue for which it was raised.
Turnaround Times
Turnaround time is the time taken to fix the defect. It depends on the
Severity level of the defect.
 Critical - Response time: 1 hr. Fix defect and make it ready for test within
24 hours.
 Major - Response time: 4 hrs. Fix defect and make it ready for test within
48 hours.
 Average - Response time: 24 hr. Fix defect and make it ready for test
within 5 work days.
 Minor - Response time: 48 hr. Fix defect and make it ready for test within
10 work days.
 Blocking - Response time: 1 hr. Fix defect and make it ready for test
within 24 hours.
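The turnaround rules above can be encoded as a simple lookup table. Everything is expressed in hours for uniformity; treating one work day as 24 hours is an assumption made purely to keep the sketch simple, since the slides give the Average and Minor fix times in work days.

```python
# Turnaround times from the slides, as a lookup table.
# fix_hrs for Average/Minor converts "work days" at 24 h/day --
# an assumption for illustration only.
TURNAROUND = {
    "Critical": {"response_hrs": 1,  "fix_hrs": 24},
    "Major":    {"response_hrs": 4,  "fix_hrs": 48},
    "Average":  {"response_hrs": 24, "fix_hrs": 5 * 24},
    "Minor":    {"response_hrs": 48, "fix_hrs": 10 * 24},
    "Blocking": {"response_hrs": 1,  "fix_hrs": 24},
}

def fix_deadline_hrs(severity):
    """Hours within which a defect of this severity must be ready for test."""
    return TURNAROUND[severity]["fix_hrs"]
```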
THANK YOU
