Unit IV

A Strategic Approach to Software Testing:

Software testing is one of the important phases of software development.

Testing is the process of executing a program with the intention of finding errors. It involves about 40% of the total project cost.

Testing strategy: It is a road map that incorporates test planning, test case design, test execution, and the collection and evaluation of the resulting data.

Testing Strategies for Conventional Software


1. Unit Testing
2. Integration Testing
3. Validation Testing and
4. System Testing

Software Testing:
• Two major categories of software testing
1. Black box testing
2. White box testing

Black box testing:


It treats the system as a black box whose behavior can be determined by studying its inputs and the related outputs; it is not concerned with the internal structure of the program. It focuses on the functional requirements of the software, i.e. it enables the software engineer to derive a set of input conditions that fully exercise all the functional requirements for that program. It is concerned with functionality, not implementation. Black box testing is also called Functional Testing.

Types of Black Box Testing:


Black box testing can be applied to three main types of tests:

1. Functional,
2. Non-functional, and
3. Regression testing.

Functional Testing: Black box testing can test specific functions or features of the software
under test. For example, checking that it is possible to log in using correct user credentials, and
not possible to log in using wrong credentials.
Non-Functional Testing: Black box testing can check additional aspects of the software, beyond
features and functionality. A non-functional test does not check “if” the software can perform a
specific action but “how” it performs that action.

Regression Testing: Black box testing can be used to check if a new version of the software
exhibits a regression, or degradation in capabilities, from one version to the next. Regression
testing can be applied to functional aspects of the software (for example, a specific feature no
longer works as expected in the new version), or non-functional aspects (for example, an
operation that performed well is very slow in the new version).

Boundary Value Analysis: Testers can identify that a system has a special response around a
specific boundary value. For example, a specific field may accept only values between 0 and 99.
Testers can focus on the boundary values (-1, 0, 99 and 100), to see if the system is accepting
and rejecting inputs correctly.
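As a sketch of this idea (the field, its 0–99 range, and the function name below are illustrative assumptions, not taken from any particular system), the test drives the validation routine with values just inside and just outside the boundary:

#include <stdio.h>

/* Hypothetical validation routine for a field that accepts only 0..99. */
int accepts_value(int value) {
    return value >= 0 && value <= 99;
}

int main(void) {
    /* Boundary values around the 0..99 range: just outside and on the edges. */
    int boundary_cases[] = { -1, 0, 99, 100 };
    int expected[]       = {  0, 1,  1,   0 };   /* 1 = accept, 0 = reject */

    for (int i = 0; i < 4; i++) {
        int actual = accepts_value(boundary_cases[i]);
        printf("input %4d: expected %d, got %d -> %s\n",
               boundary_cases[i], expected[i], actual,
               actual == expected[i] ? "PASS" : "FAIL");
    }
    return 0;
}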

White Box testing:


It is also called glass box testing. It involves knowing the internal working of a program. It guarantees that all independent paths are exercised at least once, exercises all logical decisions on their true and false sides, executes all loops, and exercises all data structures for their validity. White box testing techniques include basis path testing and control structure testing. It is also called Structural Testing.

By combining black box and white box testing, testers can achieve a comprehensive "inside and outside" inspection of a software application and increase coverage of quality and security issues. This combined approach is called Grey Box Testing.

Condition Testing: It exercises the logical conditions contained in a program module. It focuses on testing each condition in the program to ensure that it does not contain errors.
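A minimal sketch of condition testing is shown below; the eligibility function, its thresholds, and the chosen test inputs are illustrative assumptions, not taken from the text.

#include <stdio.h>

/* Hypothetical module containing a compound condition to be tested. */
int is_eligible(int age, int income) {
    /* Condition testing exercises each simple condition (age >= 18,
       income < 50000) for both its true and false outcomes. */
    if (age >= 18 && income < 50000)
        return 1;
    return 0;
}

int main(void) {
    /* Test cases chosen so that each simple condition is evaluated
       as both true and false at least once. */
    printf("%d\n", is_eligible(25, 40000)); /* true  && true  -> 1 */
    printf("%d\n", is_eligible(15, 40000)); /* false && true  -> 0 */
    printf("%d\n", is_eligible(25, 60000)); /* true  && false -> 0 */
    return 0;
}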

Data flow Testing: Selects test paths according to the locations of definitions and uses of variables in a program, and aims to ensure that each definition of a variable and its subsequent use are tested.
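A minimal sketch of the idea is shown below; the fee function and its variables are illustrative assumptions, with comments marking the definition–use pairs that the selected test paths must cover.

#include <stdio.h>

/* Illustrative function for data flow testing (names and logic are
   assumptions, not taken from the text). */
int compute_fee(int amount, int is_member) {
    int rate;                 /* declaration */

    if (is_member)
        rate = 5;             /* definition d1 of rate */
    else
        rate = 10;            /* definition d2 of rate */

    return amount * rate;     /* use u1 of rate */
}

int main(void) {
    /* Data flow testing selects paths covering each def-use pair:
       d1 -> u1 (is_member true) and d2 -> u1 (is_member false). */
    printf("%d\n", compute_fee(100, 1));  /* covers d1 -> u1 */
    printf("%d\n", compute_fee(100, 0));  /* covers d2 -> u1 */
    return 0;
}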

Loop Testing: It is a type of software testing performed to validate loops. It is one of the types of Control Structure Testing. Loop testing is a white box testing technique used to test the loops in a program. It focuses on the validity of loop constructs. Four categories can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops and
4. Unstructured loops

1. Simple Loop Testing: Testing performed on a simple loop is known as simple loop testing. A simple loop is basically a normal "for", "while" or "do-while" loop in which a condition is given, and the loop runs and terminates according to the true or false outcome of that condition. This type of testing is performed basically to check whether the condition of the loop is sufficient to terminate the loop after some point of time (see the sketch below).
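Example (a minimal sketch of simple loop testing; the summing function and the iteration counts chosen are illustrative assumptions):

#include <stdio.h>

/* Illustrative simple loop under test: sums the first n natural numbers. */
int sum_first_n(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++)   /* simple "for" loop */
        sum += i;
    return sum;
}

int main(void) {
    /* Simple loop testing typically runs the loop 0, 1, 2 and a
       typical number of times to check that it terminates correctly. */
    printf("n=0  -> %d\n", sum_first_n(0));   /* loop skipped entirely */
    printf("n=1  -> %d\n", sum_first_n(1));   /* exactly one pass */
    printf("n=2  -> %d\n", sum_first_n(2));   /* two passes */
    printf("n=10 -> %d\n", sum_first_n(10));  /* typical number of passes */
    return 0;
}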

2. Nested Loop Testing: Testing performed on a nested loop is known as nested loop testing. A nested loop is basically a loop inside another loop. In a nested loop there can be a finite number of loops inside a loop, and thereby a nest is formed. It may be made of any of the three loop types, i.e. for, while or do-while.
Example:

while (condition1)
{
    while (condition2)
    {
        statement(s);
    }
}

3. Concatenated Loops Testing: Testing performed on concatenated loops is known as concatenated loop testing. It is performed on loops that follow one another in sequence, forming a series of loops. The difference between nested and concatenated loops is that in a nested loop one loop is inside another, whereas here one loop comes after the other.
Example:
while (condition1)
{
    statement(s);
}
while (condition2)
{
    statement(s);
}
4. Unstructured Loops Testing: Testing performed on an unstructured loop is known as unstructured loop testing. An unstructured loop is a combination of nested and concatenated loops; it is basically a group of loops that are in no particular order.
Example:
while (...)
{
    for (...)
    {
    }
    while (...)
    {
    }
}
Advantages of Loop Testing:

The advantages of Loop testing are:

• Loop testing limits the number of iterations of a loop.
• Loop testing ensures that the program doesn't go into an infinite loop.
• Loop testing ensures the initialization of every variable used inside the loop.
• Loop testing helps in the identification of different problems inside the loop.
• Loop testing helps in the determination of capacity.

Disadvantages of Loop Testing:


The disadvantages of Loop testing are:
• Loop testing is mostly effective at detecting bugs in low-level software.
• Loop testing on its own is not sufficient to detect all bugs.

Integration Testing:
It is the process of testing the interface between two software units or modules. It focuses on
determining the correctness of the interface. The purpose of integration testing is to expose faults
in the interaction between integrated units. Once all the modules have been unit tested,
integration testing is performed.

Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application. The goal of
integration testing is to identify any problems or bugs that arise when different components are
combined and interact with each other. Integration testing is typically performed after unit testing
and before system testing.
Verification and Validation Testing:

Verification is a process of determining if the software is designed and developed as per the
specified requirements. Validation is the process of checking if the software (end product) has
met the Customer’s true needs and expectations.

Validation testing is the process of assessing a new software product to ensure that its
performance matches Customer needs. Product development teams might perform validation
testing to learn about the integrity of the product itself and its performance in different
environments.

System Testing:

System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets the
specified requirements and if it is suitable for delivery to the end-users. This type of testing is
performed after the integration testing and before the acceptance testing.

Example: Each component of an automobile, such as the seats, steering, mirrors, brakes, cables, engine, car structure, and wheels, is manufactured independently. After each item is manufactured, it is tested separately to see whether it functions as intended (unit testing). Once the components are assembled, the complete automobile is tested as a whole; this corresponds to system testing.
Software Quality:

Software quality engineering (SQE) is the process of implementing quality checks throughout
the entire development cycle. SQE plays a key role in ensuring fast-paced agile and DevOps
teams produce high-quality software.

Product Metrics:

Product metrics in software engineering refer to the quantifiable measurements used to assess the
characteristics and performance of software products throughout their development and
maintenance lifecycle. These metrics provide valuable insights into various aspects of software
quality, effectiveness, efficiency, and reliability.

Measure:
A measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process.

Product Metrics for Analysis, Design, Test and Maintenance:

Product metrics for the Analysis model:


Function Point Metric: It was first proposed by Albrecht. It measures the functionality delivered by the system.

FP is computed from the following parameters, each of which is classified as simple, average or complex:

1. Number of external inputs (EIs)
2. Number of external outputs (EOs)
3. Number of external inquiries (EQs)
4. Number of internal logical files (ILFs)
5. Number of external interface files (EIFs)

Function Point Analysis:


What is Function Point Analysis (FPA)?

• It is designed to estimate and measure the time, and thereby the cost, of developing new
software applications and maintaining existing software applications.

• The other main approach used for measuring the size, and therefore the time required, of a software project is lines of code (LOC).
Function Point Analysis:

These function point counts are then weighted (multiplied) by their degree of complexity:

Degree of complexity:   Simple   Average   Complex
Inputs                     2        4         6
Outputs                    3        5         7
Files                      5       10        15
Inquiries                  2        4         6
Interfaces                 4        7        10

A simple example:

Inputs
  3 simple  x 2  = 6
  4 average x 4  = 16
  1 complex x 6  = 6
Outputs
  6 average x 5  = 30
  2 complex x 7  = 14
Files
  5 complex x 15 = 75
Inquiries
  8 average x 4  = 32
Interfaces
  3 average x 7  = 21
  4 complex x 10 = 40

Unadjusted function points: 240

In addition to these individually weighted function points, there are factors that affect the project
and/or system as a whole. There are a number (~35) of these factors that affect the size of the
project effort, and each is ranked from “0”- no influence to “5”- essential.
The following are some examples of these factors:
• Is high performance critical?
• Is the internal processing complex?
• Is the system to be used in multiple sites and/or by multiple organizations?
• Is the code designed to be reusable?
• Is the processing to be distributed?
And so forth . . .
Continuing our example . . .
Complex internal processing = 3
Code to be reusable = 2
High performance = 4
Multiple sites = 3
Distributed processing = 5
Project adjustment factor = 17
Adjustment calculation:
Adjusted FP = Unadjusted FP X [0.65 + (adjustment factor /100)]
= 240 X [0.65 + ( 17 /100)]
= 240 X [0.82]
= 197 adjusted function points

But how long will the project take and how much will it cost?
As previously measured, programmers in our organization average 18 function points per month.
Thus . . .
197 FP divided by 18 = 11 man-months

If the average programmer is paid $5,200 per month (including benefits), then the [labor] cost of
the project will be . . .
11 man-months X $5,200 = $57,200
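The small program below is a sketch that reproduces the worked example above in code; the counts, weights, adjustment factor, productivity figure (18 FP per person-month) and monthly cost ($5,200) all come from the example above and are illustrative only.

#include <stdio.h>

int main(void) {
    /* Counts and complexity weights from the worked example above. */
    int weighted_counts[] = {
        3 * 2,   /* 3 simple inputs      x 2  =  6 */
        4 * 4,   /* 4 average inputs     x 4  = 16 */
        1 * 6,   /* 1 complex input      x 6  =  6 */
        6 * 5,   /* 6 average outputs    x 5  = 30 */
        2 * 7,   /* 2 complex outputs    x 7  = 14 */
        5 * 15,  /* 5 complex files      x 15 = 75 */
        8 * 4,   /* 8 average inquiries  x 4  = 32 */
        3 * 7,   /* 3 average interfaces x 7  = 21 */
        4 * 10   /* 4 complex interfaces x 10 = 40 */
    };

    int unadjusted_fp = 0;
    for (int i = 0; i < (int)(sizeof weighted_counts / sizeof weighted_counts[0]); i++)
        unadjusted_fp += weighted_counts[i];

    /* Project adjustment factor (sum of the rated influence factors). */
    int adjustment_factor = 17;
    double adjusted_fp = unadjusted_fp * (0.65 + adjustment_factor / 100.0);

    /* Effort and cost, at 18 FP per programmer-month and $5,200 per month.
       The text rounds the effort up to 11 person-months, giving $57,200. */
    double months = adjusted_fp / 18.0;
    double cost   = months * 5200.0;

    printf("Unadjusted FP : %d\n", unadjusted_fp);   /* 240       */
    printf("Adjusted FP   : %.0f\n", adjusted_fp);   /* about 197 */
    printf("Effort        : %.1f person-months\n", months);
    printf("Labor cost    : $%.0f\n", cost);
    return 0;
}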

Because function point analysis is independent of language used, development platform, etc. it
can be used to identify the productivity benefits of . . .
• One programming language over another
• One development platform over another
• One development methodology over another
• One programming department over another
• Before-and-after gains in investing in programmer training
And so forth . . .

But there are problems and criticisms:


• Function point counts are affected by project size
• Difficult to apply to massively distributed systems or to systems with very complex
internal processing
• Difficult to define logical files from physical files
• Different companies calculate function points slightly differently, making intercompany comparisons questionable

Metrics for design model:

In designing a product, it is very important to manage complexity efficiently. Complexity itself means something that is very difficult to understand. We know that systems are generally complex, as they have many interconnected components that make them difficult to understand. There are three design complexity measures, given below:

1. Structural Complexity:

Structural complexity depends upon the fan-out of a module. It can be defined as:

S(k) = [fout(k)]^2

where fout(k) represents the fan-out of module k (fan-out means the number of modules immediately subordinate to module k, i.e. directly invoked by it).

2. Data Complexity:

Data complexity is the complexity within the interface of an internal module. It reflects the size and intricacy of the data. For some module k, it can be defined as:

D(k) = tot_var(k) / [fout(k) + 1]

where tot_var(k) is the total number of input and output variables going into and coming out of the module.

3. System Complexity:

System complexity is the combination of structural and data complexity. It can be denoted as:

Sy(k) = S(k) + D(k)

As structural, data, and system complexity increase, the overall architectural complexity also increases.
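A minimal sketch of these three measures for a single module k is shown below; the fan-out and variable counts used are illustrative assumptions.

#include <stdio.h>

/* Structural complexity: S(k) = fout(k)^2 */
double structural_complexity(int fan_out) {
    return (double)fan_out * fan_out;
}

/* Data complexity: D(k) = tot_var(k) / (fout(k) + 1) */
double data_complexity(int total_vars, int fan_out) {
    return (double)total_vars / (fan_out + 1);
}

int main(void) {
    /* Illustrative values for some module k: fan-out of 3,
       8 input/output variables. */
    int fan_out = 3;
    int total_vars = 8;

    double s  = structural_complexity(fan_out);         /* 9.0 */
    double d  = data_complexity(total_vars, fan_out);   /* 2.0 */
    double sy = s + d;                                   /* system complexity Sy(k) */

    printf("S(k)  = %.1f\n", s);
    printf("D(k)  = %.1f\n", d);
    printf("Sy(k) = %.1f\n", sy);
    return 0;
}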

4. Complexity Metrics:

Complexity metrics are used to measure the complexity of the overall software. The computation of complexity metrics can be done with the help of a flow graph; the resulting measure is called cyclomatic complexity. Cyclomatic complexity is a useful metric for indicating the complexity of a software system. Without complexity metrics, it is very difficult and time-consuming for the project team and management to determine the complexity of a design and the risk and cost it carries. Measuring software complexity helps improve code quality, increase productivity, meet architectural standards, reduce overall cost, and increase robustness. To calculate cyclomatic complexity, the following equation is used:

Cyclomatic complexity = E - N + 2

where E is the total number of edges and N is the total number of nodes in the flow graph.
Example:
Consider a flow graph with E = 10 edges and N = 8 nodes.
So, the Cyclomatic complexity can be calculated as:

Given,
E = 10,
N = 8

So,
Cyclomatic complexity
= E - N + 2
= 10 – 8 + 2
= 4
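A minimal sketch applying this formula to the example above (E = 10, N = 8):

#include <stdio.h>

/* Cyclomatic complexity from a flow graph: V = E - N + 2. */
int cyclomatic_complexity(int edges, int nodes) {
    return edges - nodes + 2;
}

int main(void) {
    /* Values from the example above: E = 10, N = 8. */
    printf("V = %d\n", cyclomatic_complexity(10, 8));  /* prints V = 4 */
    return 0;
}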
Metrics for object oriented:

Metrics for OO systems must be tuned to the characteristics that distinguish OO software from conventional software. There are five characteristics that lead to specialized metrics:
• Localization
• Encapsulation
• Information hiding
• Inheritance
• Abstraction
Class oriented Metrics:
• Class Size
• Number of operations overridden by a subclass
• Number of operations added by a subclass
• Specialization Index - This metric helps evaluate the quality of a subclass. A good
subclass is usually an extension of the capabilities of its super classes.

Component design metrics:

• Cohesion
• Coupling
• Complexity – example: Cyclomatic complexity

Source Code Metrics:

• Source code metrics can be broadly divided into five categories, based on what they
measure: size, complexity, coupling, cohesion, and inheritance.

Maintenance Metrics:

When development of a software product is complete and it is released to the market, it enters the maintenance phase of its life cycle. During this phase, metrics such as defect arrivals per time interval and customer problem calls are tracked. Software maintenance is the process of modifying and updating the software according to the customer's requirements. Its purpose is to correct faults and improve the software's performance after it has been delivered to the customers.

Why is maintenance necessary?

• To fix bugs and errors in the software system.
• To improve the functionality of the software and make the product more compatible with the latest market and business environments.
• To remove outdated functions from the software that are inhibiting the product's efficiency.
• To improve the software's performance.

There are four types of software maintenance:

• Corrective Software Maintenance.


• Preventive Software Maintenance.
• Perfective Software Maintenance.
• Adaptive Software Maintenance.
Corrective Software Maintenance: Defects in the software arise due to errors and faults in the design, logic, and code of the software. Corrective maintenance action (commonly referred to as "bug fixing") addresses these errors and faults in the software system.

Preventive Software Maintenance: Preventive maintenance is a software change made to prevent the occurrence of errors in the future. It increases software maintainability by reducing complexity. Preventive maintenance tasks include:

• Updating the documentation with respect to the current state of the system.

• Optimizing the code: modifying the code for faster execution of programs or more efficient use of storage space.

• Reconstructing the code: transforming the structure of the program by reducing the source code, making it easier to understand.

Perfective Software Maintenance: It is the process of modifying the software or application to implement new or changed user requirements concerning functional enhancements.

Adaptive Software Maintenance: It allows the system to adjust to changing platform needs and software specifications, and ensures that the technology stack keeps up with growth.

Advantages of Maintenance:
• Performance improvement
• Fixes various bugs
• Keeps the software up to date with current trends
• No need to spend extra money.
