UNIT-IV
SHORT Q&A:
Verification | Validation
1. Verification is a static practice of verifying documents, design, code, and program. | 1. Validation is a dynamic mechanism of validating and testing the actual product.
2. It does not involve executing the code. | 2. It always involves executing the code.
3. It is human-based checking of documents and files. | 3. It is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking. | 4. Validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing.
5. Verification checks whether the software conforms to specifications. | 5. Validation checks whether the software meets the customer's expectations and requirements.
6. It can catch errors that validation cannot catch. It is a low-level exercise. | 6. It can catch errors that verification cannot catch. It is a high-level exercise.
7. Target is requirements specification, application and software architecture, high-level and complete design, and database design. | 7. Target is the actual product: a unit, a module, a set of integrated modules, and the effective final product.
8. Verification is done by the QA team to ensure that the software conforms to the specifications in the SRS. | 8. Validation is carried out with the involvement of the testing team.
A test case is a set of conditions or variables under which a tester will determine whether a system
under test satisfies requirements or works correctly. The process of developing test cases can also
help find problems in the requirements or design of an application.
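For instance, a test case can be captured as an automated check. Below is a minimal sketch (not from the text) using Python's unittest module, where divide() is a hypothetical unit under test:

```python
import unittest

def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTestCase(unittest.TestCase):
    def test_valid_operands(self):
        # Condition: normal operands; expected result: the quotient.
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor(self):
        # Condition: invalid operand; expected result: a defined failure mode.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

Each test case pairs a set of conditions (inputs) with an expected outcome, which is exactly what the definition above describes.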
Smoke testing: a series of tests is designed to expose errors that will keep the build from properly performing its function. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily.
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules. A bottom-up integration strategy may be implemented with the following four steps: low-level components are combined into clusters that perform a specific software subfunction; a driver is written to coordinate test case input and output; the cluster is tested; and drivers are removed and clusters are combined moving upward in the program structure.
9. What are the testing principles the software engineer must apply while performing software testing?
- All tests should be traceable to customer requirements.
- Tests should be planned long before testing begins.
- The Pareto principle applies to software testing: 80 percent of the errors uncovered are likely traceable to 20 percent of the components.
- Testing should begin "in the small" and progress toward testing "in the large."
- Exhaustive testing is not possible.
- To be most effective, testing should be conducted by an independent third party.
LONG Q&A:
1) Discuss black box testing in detail.
Black Box Testing
BLACK BOX TESTING, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.
This method is named so because the software program, in the eyes of the tester, is like a black box, inside which one cannot see. This method attempts to find errors in the following categories:
- incorrect or missing functions
- interface errors
- errors in data structures or external database access
- behavior or performance errors
- initialization and termination errors
Example
A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.
Levels Applicable To
Black Box Testing method is applicable to the following levels of software testing:
- Integration Testing
- System Testing
- Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.
Following are some techniques that can be used for designing black box tests:
- Equivalence Partitioning: It is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
- Boundary Value Analysis: It is a software test design technique that involves the determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
- Cause Effect Graphing: It is a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause Effect Graph, and generating test cases accordingly.
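As an illustration of the first two techniques, here is a small hypothetical sketch in Python: an is_valid_age() function (assumed, for this example, to accept ages 18 to 60 inclusive) exercised with representative values from each partition and with boundary values:

```python
# Hypothetical function under test: accepts ages 18..60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partition_cases = [
    (10, False),  # invalid partition: below 18
    (35, True),   # valid partition: 18..60
    (75, False),  # invalid partition: above 60
]

# Boundary value analysis: values at, just inside, and just outside boundaries.
boundary_cases = [
    (17, False), (18, True), (19, True),  # lower boundary
    (59, True), (60, True), (61, False),  # upper boundary
]

for value, expected in partition_cases + boundary_cases:
    assert is_valid_age(value) == expected, f"unexpected result for {value}"
print("all black-box test cases passed")
```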
Advantages
- Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
- The tester need not know programming languages or how the software has been implemented.
- Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias.
- Test cases can be designed as soon as the specifications are complete.
Disadvantages
- Only a small number of possible inputs can be tested and many program paths will be left untested.
- Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
- Tests can be redundant if the software designer/developer has already run a test case.
- Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case in Black Box Testing.
McCall's Quality Model
This model classifies all software requirements into 11 software quality factors. The 11 factors are grouped into three categories: product operation, product revision, and product transition factors.
- Product operation factors: Correctness, Reliability, Efficiency, Integrity, Usability.
- Product revision factors: Maintainability, Flexibility, Testability.
- Product transition factors: Portability, Reusability, Interoperability.
Correctness
These requirements deal with the correctness of the output of the software system. They include:
- The output mission.
- The required accuracy of output, which can be negatively affected by inaccurate data or inaccurate calculations.
- The completeness of the output information, which can be affected by incomplete data.
- The up-to-dateness of the information, defined as the time between the event and the response by the software system.
- The availability of the information.
- The standards for coding and documenting the software system.
Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure
rate of the software system, and can refer to the entire system or to one or more of its separate
functions.
Efficiency
It deals with the hardware resources needed to perform the different functions of the software
system. It includes processing capabilities (given in MHz), its storage capacity (given in MB or
GB) and the data communication capability (given in MBPS or GBPS).
It also deals with the time between recharging of the system's portable units, such as information system units located in portable computers or meteorological units placed outdoors.
Integrity
This factor deals with the software system security, that is, preventing access by unauthorized persons and distinguishing between the groups of people to be given read permit as well as write permit.
Usability
Usability requirements deal with the staff resources needed to train a new employee and to
operate the software system.
Product Revision Quality Factors
According to McCall's model, three software quality factors are included in the product revision category. These factors are as follows:
Maintainability
This factor considers the efforts that will be needed by users and maintenance personnel to
identify the reasons for software failures, to correct the failures, and to verify the success of
the corrections.
Flexibility
This factor deals with the capabilities and efforts required to support adaptive maintenance activities of the software. These include adapting the current software to additional circumstances and customers without changing the software, as well as supporting perfective maintenance activities, such as changes and additions to the software in order to improve its service and to adapt it to changes in the firm's technical or commercial environment.
Testability
Testability requirements deal with the testing of the software system as well as with its
operation. It includes predefined intermediate results, log files, and also the automatic
diagnostics performed by the software system prior to starting the system, to find out whether
all components of the system are in working order and to obtain a report about the detected
faults. Another type of these requirements deals with automatic diagnostic checks applied by
the maintenance technicians to detect the causes of software failures.
Product Transition Software Quality Factor
According to McCall's model, three software quality factors are included in the product transition category, which deals with the adaptation of software to other environments and its interaction with other software systems. These factors are as follows:
Portability
Portability requirements deal with the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.
Reusability
This factor deals with the use of software modules originally designed for one project in a new
software project currently being developed. They may also enable future projects to make use
of a given module or a group of modules of the currently developed software. The reuse of
software is expected to save development resources, shorten the development period, and
provide higher quality modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with
other equipment firmware. For example, the firmware of the production machinery and
testing equipment interfaces with the production control software.
b) Metrics for Maintenance
Metrics for maintenance can be used for the development of new software and the maintenance of existing software.
IEEE Std. 982.1 suggests a software maturity index (SMI) that provides an indication of the stability of a software product (based on changes that occur for each release of the product).
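The SMI is commonly stated as SMI = [MT - (Fa + Fc + Fd)] / MT, where MT is the number of modules in the current release and Fa, Fc, and Fd are the numbers of modules added, changed, and deleted with respect to the preceding release; as SMI approaches 1.0, the product begins to stabilize. A minimal sketch with made-up release figures:

```python
def software_maturity_index(mt: int, fa: int, fc: int, fd: int) -> float:
    """SMI = [MT - (Fa + Fc + Fd)] / MT (IEEE Std 982.1).

    mt: modules in the current release
    fa: modules added, fc: modules changed, fd: modules deleted
    (all relative to the preceding release)
    """
    return (mt - (fa + fc + fd)) / mt

# Illustrative figures only; SMI approaching 1.0 indicates a stabilizing product.
print(software_maturity_index(mt=940, fa=40, fc=90, fd=12))  # ~0.85
```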
Integration testing is a systematic technique for constructing the program structure while at the
same time conducting tests to uncover errors associated with interfacing.
The objective is to take unit tested components and build a program structure that has been
dictated by design.
Top-down integration testing is an incremental approach to construction of program structure.
Modules are integrated by moving downward through the control hierarchy, beginning with the
main control module (main program). Modules subordinate to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),subordinate stubs
are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
Bottom-up integration testing, as its name implies, begins construction and testing with atomic
modules (i.e., components at the lowest levels in the program structure). Because components are
integrated from the bottom up, processing required for components subordinate to a given level is
always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
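Both strategies rely on scaffolding code: top-down integration substitutes stubs for components that are not yet integrated, while bottom-up integration uses drivers to exercise clusters. A hypothetical sketch (not from the text) of what a stub and a driver might look like:

```python
def fetch_sales_records(region):
    """Stub: stands in for a lower-level component that is not yet integrated,
    returning canned data instead of querying a real database."""
    return [100, 250, 75]

def build_report(region):
    """Component under test; depends on fetch_sales_records()."""
    records = fetch_sales_records(region)
    return {"region": region, "total": sum(records)}

def driver():
    """Driver: a control program that coordinates test case input and output
    for the cluster being tested."""
    result = build_report("north")
    assert result["total"] == 425
    print("cluster test passed:", result)

driver()
```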
Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
Smoke Testing
Smoke testing is an integration testing approach commonly used when product software is developed: the software is rebuilt frequently, and each build is exercised with a series of tests designed to expose errors that will keep the build from properly performing its function.
Measures, Metrics, and Indicators: These three terms are often used interchangeably, but they can have subtle differences.
- Measure: Provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
- Measurement: The act of determining a measure.
- Metric (IEEE): A quantitative measure of the degree to which a system, component, or process possesses a given attribute.
- Indicator: A metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
System Testing
Types of system tests include:
1) Recovery Testing
2) Stress Testing
3) Security Testing
4) Performance Testing
5) Deployment Testing
In system testing the software and other system elements are tested as a whole.
To test computer software, you spiral out in a clockwise direction along streamlines that
increase the scope of testing with each turn.
System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
Security testing attempts to verify that protection mechanisms built into a system will, in
fact, protect it from improper penetration.
During security testing, the tester plays the role(s) of the individual who desires to penetrate the
system.
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume.
A variation of stress testing is a technique called sensitivity testing.
Performance testing is designed to test the run-time performance of software within the
context of an integrated system.
Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be assessed as tests are
conducted.
Deployment testing, sometimes called configuration testing, exercises the software in
each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software that will be used by customers, and all documentation that will be
used to introduce the software to end users.
a) Software failure
b) Black box testing, White box testing and Stress Testing
a) Software Failure
A failure that occurs when the user perceives that the software has ceased to deliver the expected
result with respect to the specification input values.
Major factors that lead to software project failure are:
i) Application or software error
ii) Environmental factors
iii) Infrastructure
iv) Virus
v) Hackers, etc.
Software failures or incorrect software requirements can have severe consequences including
customer dissatisfaction, the loss of financial assets and even loss of human lives.
b) BLACK-BOX TESTING
Black-Box Testing alludes to tests that are conducted at the software interface. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
Stress testing
Stress testing is a type of software testing that verifies the stability and reliability of the system. Stress testing is done to make sure that the system will not crash under crunch situations.
[Figure: a client/server configuration with one server and four clients (1-4), the kind of system typically exercised by a stress test.]
Debugging
The symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed; highly coupled components exacerbate this situation. The symptom may be intermittent, which is particularly common in embedded systems that couple hardware and software inextricably. The symptom may be due to causes that are distributed across a number of tasks running on different processors. In general, the errors encountered during debugging range from mildly annoying to catastrophic.
Psychological Considerations
Debugging is one of the more frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that you have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of error increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected.
Debugging Strategies
In general, three debugging strategies have been proposed: brute force, backtracking, and cause elimination. Each can be conducted manually, but modern debugging tools can make the process much more effective.
Correcting the error: the correction of a bug can introduce other errors and therefore do more harm than good. Van Vleck suggests three simple questions that every software engineer should ask before making the "correction" that removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of the program?
2. What "next bug" might be introduced by the fix that is about to be made?
3. What could we have done to prevent this bug in the first place?
Function-based metrics use the function point as a normalizing factor or as a measure of the "size" of the information domain.
Function-Based Metrics
Function Points
The function point is computed using the following equation:
FP = count total x [0.65 + 0.01 x Σ Fi]   ... (equation 1)
where count total is the sum of all FP entries obtained from the information domain counts, and Fi (i = 1 to 14) are value adjustment factors based on responses to the following questions:
4. Is performance critical?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
14. Is the application designed to facilitate change and for ease of use by the user?
The constant values in equation-1 and the weighting factors that are applied to information
domain counts are determined empirically.
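A small sketch of the computation, using the commonly cited "average" complexity weights for the five information domain values; the counts and Fi ratings below are made up:

```python
# (count, weight) pairs for the five information domain values,
# using illustrative average-complexity weights.
domain = {
    "external inputs":          (32, 4),
    "external outputs":         (60, 5),
    "external inquiries":       (24, 4),
    "internal logical files":   (8, 10),
    "external interface files": (2, 7),
}

count_total = sum(count * weight for count, weight in domain.values())

# Fi: answers to the 14 value adjustment questions, each rated 0 (no
# influence) through 5 (essential).
fi = [3, 2, 4, 4, 3, 4, 5, 3, 4, 3, 3, 2, 4, 5]

fp = count_total * (0.65 + 0.01 * sum(fi))  # FP = count total x [0.65 + 0.01 x sum(Fi)]
print(f"count total = {count_total}, FP = {fp:.1f}")
```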
There are only a few metrics that have been proposed for the analysis model. However, it is
possible to use metrics for project estimation in the context of the analysis model. These metrics
are used to examine the analysis model with the objective of predicting the size of the resultant
system. Size acts as an indicator of increased coding, integration, and testing effort; sometimes it
also acts as an indicator of complexity involved in the software design. Function point and lines
of code are the commonly used methods for size estimation.
Function Point (FP) Metric
The function point metric, which was proposed by A.J. Albrecht, is used to measure the
functionality delivered by the system, estimate the effort, predict the number of errors, and
estimate the number of components in the system. Function point is derived by using a
relationship between the complexity of software and the information domain value. Information
domain values used in function point include the number of external inputs, external outputs,
external inquires, internal logical files, and the number of external interface files.
Lines of Code (LOC)
Lines of code (LOC) is one of the most widely used methods for size estimation. LOC can be
defined as the number of delivered lines of code, excluding comments and blank lines. It is
highly dependent on the programming language used as code writing varies from one
programming language to another. For example, lines of code written (for a large program) in assembly language are more than lines of code written in C++.
From LOC, simple size-oriented metrics can be derived such as errors per KLOC (thousand lines
of code), defects per KLOC, cost per KLOC, and so on. LOC has also been used to predict
program complexity, development effort, programmer performance, and so on. For example, Halstead proposed a number of metrics, which are used to calculate program length, program volume, program difficulty, and development effort.
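A quick sketch of the size-oriented metrics mentioned above, with illustrative project figures:

```python
# Illustrative project figures only.
loc = 24_000      # delivered lines of code (excluding comments and blank lines)
errors = 120      # errors found before release
defects = 36      # defects reported after release
cost = 168_000    # development cost, in arbitrary currency units

kloc = loc / 1000
print(f"errors per KLOC:  {errors / kloc:.2f}")
print(f"defects per KLOC: {defects / kloc:.2f}")
print(f"cost per KLOC:    {cost / kloc:.2f}")
```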
Metrics for Specification Quality
To evaluate the quality of analysis model and requirements specification, a set of characteristics
has been proposed. These characteristics include specificity, completeness, correctness,
understandability, verifiability, internal and external consistency, achievability, concision, traceability, modifiability, precision, and reusability.
Most of the characteristics listed above are qualitative in nature. However, each of these characteristics can be represented by using one or more metrics. For example, if there are nr requirements in a specification, then nr can be calculated by the following equation:
nr = nf + nnf
where nf is the number of functional requirements and nnf is the number of nonfunctional requirements.
In the context of metrics for testing, the percentage of overall testing effort allocated to a module z can be estimated as e(z) / Σ e, where e(z) is calculated for module z with the help of equation (1), and the summation in the denominator is the sum of Halstead effort (e) in all the modules of the system.
For developing metrics for object-oriented (OO) testing, different types of design metrics that
have a direct impact on the testability of object-oriented system are considered. While developing
metrics for OO testing, inheritance and encapsulation are also considered. A set of metrics
proposed for OO testing is listed below.
Lack of cohesion in methods (LCOM): This indicates the number of states to be tested.
LCOM indicates the number of methods that access one or more same attributes. The value of
LCOM is 0, if no methods access the same attributes. As the value of LCOM increases, more
states need to be tested (a computational sketch follows this list).
Percent public and protected (PAP): This shows the number of class attributes, which are
public or protected. Probability of adverse effects among classes increases with increase in
value of PAP as public and protected attributes lead to potentially higher coupling.
Public access to data members (PAD): This shows the number of classes that can access
attributes of another class. Adverse effects among classes increase as the value of PAD
increases.
Number of root classes (NOR): This specifies the number of different class hierarchies, which
are described in the design model. Testing effort increases with increase in NOR.
Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than 1, it indicates that the class inherits its attributes and operations from many root classes. Note that this situation (FIN > 1) should be avoided.
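The following sketch illustrates one common (Chidamber-Kemerer style) way to compute LCOM; note that several LCOM definitions exist in the literature, and the class data here is made up. Each method is mapped to the set of instance attributes it accesses:

```python
from itertools import combinations

# Made-up example: methods of a class mapped to the attributes they access.
method_attrs = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner"},
}

p = q = 0
for m1, m2 in combinations(method_attrs, 2):
    if method_attrs[m1] & method_attrs[m2]:
        q += 1  # pair of methods sharing at least one attribute
    else:
        p += 1  # pair of methods sharing no attributes

lcom = max(p - q, 0)  # higher LCOM -> lower cohesion, more states to test
print("LCOM =", lcom)
```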
The success of a software project depends largely on the quality and effectiveness of the
software design. Hence, it is important to develop software metrics from which
meaningful indicators can be derived.
With the help of these indicators, necessary steps are taken to design the software
according to the user requirements.
Various design metrics such as architectural design metrics, component-level design
metrics, user-interface design metrics, and metrics for object-oriented design are used to
indicate the complexity, quality, and so on of the software design.
These metrics focus on the features of the program architecture with stress on architectural
structure and effectiveness of components (or modules) within the architecture. In architectural
design metrics, three software design complexity measures are defined, namely, structural
complexity, data complexity, and system complexity.
Structural complexity S(i) of a module i is defined as S(i) = fout(i)^2, where fout(i) is the fan-out of module i (the number of modules immediately subordinate to it). Data complexity D(i) provides an indication of the complexity in the internal interface of module i and is defined as D(i) = v(i) / [fout(i) + 1], where v(i) is the number of input and output variables passed to and from module i. System complexity is the sum of structural complexity and data complexity and is calculated by the following equation:
C(i) = S(i) + D(i)
As structural complexity and data complexity increase, the overall complexity of the system increases, which in turn increases the integration and testing effort in the later stages.
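A short sketch computing the three measures for a few made-up modules:

```python
# Made-up modules: name -> (fan-out, number of input/output variables).
modules = {
    "ui":      (3, 4),
    "engine":  (4, 10),
    "storage": (0, 6),
}

for name, (fan_out, v) in modules.items():
    s = fan_out ** 2       # structural complexity S(i) = fout(i)^2
    d = v / (fan_out + 1)  # data complexity D(i) = v(i) / [fout(i) + 1]
    c = s + d              # system complexity C(i) = S(i) + D(i)
    print(f"{name}: S={s}, D={d:.2f}, C={c:.2f}")
```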
Morphology metrics
In addition, various other metrics like simple morphology metrics are also used. These metrics
allow comparison of different program architecture using a set of straightforward dimensions. A
metric can be developed by referring to call and return architecture. This metric can be defined
by the following equation.
Size = n + a
where n is the number of nodes and a is the number of arcs.
For example, for an architecture with 17 nodes and 18 arcs, Size = 17 + 18 = 35.
Depth is defined as the longest path from the top node (root) to a leaf node, and width is defined as the maximum number of nodes at any one level.
Coupling of the architecture is indicated by the arc-to-node ratio, which also measures the connectivity density of the architecture and is calculated by the following equation:
r = a / n
For this example, r = 18/17 = 1.06.
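These morphology metrics are straightforward to compute from a call-and-return architecture's graph; a sketch with a made-up six-node architecture:

```python
# Made-up architecture as an adjacency list: node -> nodes it calls.
arch = {
    "a": ["b", "c"],
    "b": ["d", "e"],
    "c": ["f"],
    "d": [], "e": [], "f": [],
}

n = len(arch)                                       # number of nodes
a = sum(len(callees) for callees in arch.values())  # number of arcs

# Level-by-level sweep from the root to find depth and width.
levels, frontier = [], ["a"]
while frontier:
    levels.append(frontier)
    frontier = [child for node in frontier for child in arch[node]]
depth = len(levels) - 1                      # longest root-to-leaf path
width = max(len(level) for level in levels)  # max nodes at any one level

print(f"size = n + a = {n + a}, depth = {depth}, width = {width}")
print(f"arc-to-node ratio r = a/n = {a / n:.2f}")
```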
Quality of software design also plays an important role in determining the overall quality of the
software. Many software quality indicators that are based on measurable design characteristics of
a computer program have been proposed. One of them is Design Structural Quality Index
(DSQI), which is derived from the information obtained from data and architectural design. To
calculate DSQI, a number of steps are followed, which are listed below.
Program structure (D1): If discrete methods are used for developing the architectural design, then D1 = 1, else D1 = 0.
Once all the intermediate values D1 to D6 are calculated, DSQI is calculated by the following equation:
DSQI = Σ wi Di
where i = 1 to 6 and wi is the relative weighting of the importance of each intermediate value (with Σ wi = 1).
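A minimal sketch of the weighted sum, with illustrative intermediate values and weights:

```python
# Illustrative intermediate values D1..D6 and relative weights (sum to 1).
d = [1.0, 0.85, 0.90, 0.78, 0.95, 0.80]
w = [0.20, 0.15, 0.20, 0.15, 0.15, 0.15]

assert abs(sum(w) - 1.0) < 1e-9  # weights must sum to 1

dsqi = sum(wi * di for wi, di in zip(w, d))
print(f"DSQI = {dsqi:.3f}")  # compared with DSQI values of past designs
```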
In order to develop metrics for object-oriented (OO) design, nine distinct and measurable characteristics of OO design are considered: size, complexity, coupling, sufficiency, completeness, cohesion, primitiveness, similarity, and volatility.
B) System Testing
System testing covers recovery testing, stress testing, security testing, performance testing, and deployment testing; these are discussed in detail under System Testing above.