SE-UNIT-IV

www.android.universityupdates.in | www.universityupdates.in | https://telegram.me/jntuh

UNIT-IV

SHORT Q&A:

1.Define alpha testing


Alpha testing is the final testing before the software is released to the general public. It has two phases: in the first phase, the software is tested by in-house developers, who use debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. In the second phase, the software is handed over to software QA staff for additional testing in an environment similar to the intended use.

2.Define system testing


System testing is a level of software testing where a complete and integrated software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.
3.Write Short note on integration testing.
Ans: Integration testing is a systematic technique for constructing the program structure
while at the same time conducting tests to uncover errors associated with
interfacing.
The objective is to take unit tested components and build a program structure that
has been dictated by design.
4.Compare between alpha testing and beta testing.
Ans: The alpha test is conducted at the developer's site by a customer. The software is
used in a natural setting with the developer "looking over the shoulder" of the user
and recording errors and usage problems. Alpha tests are conducted in a controlled
environment.
The beta test is conducted at one or more customer sites by the end-user of the software.
Unlike alpha testing, the developer is generally not present.
5.Compare (Differentiate) between verification and validation?

Verification vs. Validation:
1. Verification is a static practice of verifying documents, design, code and program; validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; validation always involves executing the code.
3. Verification is human-based checking of documents and files; validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking; validation uses methods like black box (functional) testing, gray box testing, and white box (structural) testing.
5. Verification checks whether the software conforms to specifications; validation checks whether the software meets the customer's expectations and requirements.
6. Verification can catch errors that validation cannot catch (it is a low-level exercise); validation can catch errors that verification cannot catch (it is a high-level exercise).
7. Verification's target is the requirements specification, application and software architecture, high-level and complete design, and database design; validation's target is the actual product: a unit, a module, a set of integrated modules, and the final product.
8. Verification is done by the QA team to ensure that the software is as per the specifications in the SRS document; validation is carried out with the involvement of the testing team.
9. Verification generally comes first, before validation; validation generally follows verification.

6.Illustrate test cases in Software Engineering?

A test case is a set of conditions or variables under which a tester will determine whether a system
under test satisfies requirements or works correctly. The process of developing test cases can also
help find problems in the requirements or design of an application.
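For illustration, a test case can be written down as executable data. The discount function, requirement, and test IDs below are hypothetical, not from the text — a minimal sketch of pairing input conditions with expected results:

```python
# A minimal, hypothetical test-case sketch: each case pairs input
# conditions with the expected result for a simple discount function.

def apply_discount(price, is_member):
    """System under test: members get 10% off (hypothetical requirement)."""
    return round(price * 0.9, 2) if is_member else price

# Each tuple is one test case: (test_id, price, is_member, expected)
test_cases = [
    ("TC-01", 100.0, True, 90.0),    # member gets the discount
    ("TC-02", 100.0, False, 100.0),  # non-member pays full price
    ("TC-03", 0.0, True, 0.0),       # boundary condition: zero price
]

def run_cases(cases):
    """Execute each case and record whether the actual output matched."""
    results = {}
    for test_id, price, member, expected in cases:
        results[test_id] = (apply_discount(price, member) == expected)
    return results
```

Writing the cases as data like this also makes gaps in the requirements visible (e.g., what should happen for a negative price?), which is the secondary benefit the definition mentions.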

7. Describe smoke testing

Ans. Smoke Testing

Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.

A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent is to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.

The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily.

8. List out the steps for bottom-up integration.

Ans. Bottom-Up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules.

A bottom-up integration strategy may be implemented with the following four steps:

1. Low-level components are combined into clusters that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

9.What are the testing principles the software engineer must apply while performing the
software testing?

There are seven principles in software testing:


Testing shows presence of defects.
Exhaustive testing is not possible.
Early testing.
Defect clustering.
Pesticide paradox.
Testing is context dependent.
Absence of errors fallacy.

10.What is the difference between STLC&SDLC?


SDLC is a Development Life Cycle, whereas STLC is a Testing Life Cycle. The SDLC helps a team to complete successful development of the software, while the STLC phases cover only software testing.


LONG Q&A:
1) Discuss black box testing in a detailed view
Black Box Testing
BLACK BOX TESTING, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of the tester, is like a black box, inside which one cannot see. This method attempts to find errors in the following categories:

Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors

Black box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system.
Black box test design technique: a procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Example

A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.

Levels Applicable To

The Black Box Testing method is applicable to the following levels of software testing:

Integration Testing


System Testing
Acceptance Testing

The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.

Following are some techniques that can be used for designing black box tests:

Equivalence Partitioning: a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
Boundary Value Analysis: a software test design technique that involves the determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
Cause Effect Graphing: a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause Effect Graph, and generating test cases accordingly.
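The first two techniques above lend themselves to a small code sketch. Assuming a hypothetical input field that accepts integers in a range [lo, hi], test inputs could be derived like this:

```python
# Sketch of Boundary Value Analysis and Equivalence Partitioning for a
# hypothetical numeric field accepting integers in [lo, hi].

def boundary_values(lo, hi):
    """Test inputs at the boundaries and just inside/outside them."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

def equivalence_partitions(lo, hi):
    """One representative value from each partition:
    invalid-low, valid, and invalid-high."""
    return {
        "invalid_low": lo - 10,     # any value below the valid range
        "valid": (lo + hi) // 2,    # any value inside the valid range
        "invalid_high": hi + 10,    # any value above the valid range
    }
```

For a field accepting 1–100, `boundary_values(1, 100)` yields the six classic BVA inputs (0, 1, 2, 99, 100, 101), while `equivalence_partitions(1, 100)` picks one representative per partition instead of testing all 100 valid values.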

Advantages

Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
The tester need not know programming languages or how the software has been implemented.
Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias.
Test cases can be designed as soon as the specifications are complete.

Disadvantages

Only a small number of possible inputs can be tested, and many program paths will be left untested.
Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
Tests can be redundant if the software designer/developer has already run a test case.
Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case in Black Box Testing.


2.Discuss software quality factors? Discuss their relative importance.


The various factors which influence the software are termed software factors. They can be broadly divided into two categories. The first category comprises factors that can be measured directly, such as the number of logical errors; the second category comprises factors that can be measured only indirectly, for example maintainability. Each of the factors must be measured to check for content and quality control.

McCall's model classifies all software requirements into 11 software quality factors. The 11 factors are grouped into three categories: product operation, product revision, and product transition factors.
Product operation factors: Correctness, Reliability, Efficiency, Integrity, Usability.
Product revision factors: Maintainability, Flexibility, Testability.
Product transition factors: Portability, Reusability, Interoperability.

Product Operation Software Quality Factors

According to McCall's model, the product operation category includes five software quality factors, which deal with the requirements that directly affect the daily operation of the software. They are as follows:

Correctness
These requirements deal with the correctness of the output of the software system. They include:

The output mission.
The required accuracy of output, which can be negatively affected by inaccurate data or inaccurate calculations.
The completeness of the output information, which can be affected by incomplete data.
The up-to-dateness of the information, defined as the time between the event and the response by the software system.
The availability of the information.
The standards for coding and documenting the software system.


Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure
rate of the software system, and can refer to the entire system or to one or more of its separate
functions.

Efficiency
It deals with the hardware resources needed to perform the different functions of the software system. It includes processing capabilities (given in MHz), storage capacity (given in MB or GB), and data communication capability (given in Mbps or Gbps). It also deals with the time between recharging of portable units, such as information system units located in portable computers, or meteorological units placed outdoors.

Integrity
This factor deals with software system security, that is, preventing access by unauthorized persons, and distinguishing between the groups of people to be given read as well as write permissions.

Usability
Usability requirements deal with the staff resources needed to train a new employee and to
operate the software system.
Product Revision Quality Factors
According to McCall's model, three software quality factors are included in the product revision category. These factors are as follows:

Maintainability
This factor considers the efforts that will be needed by users and maintenance personnel to
identify the reasons for software failures, to correct the failures, and to verify the success of
the corrections.

Flexibility
This factor deals with the capabilities and efforts required to support adaptive maintenance activities of the software. These include adapting the current software to additional circumstances and customers without changing the software. This factor's requirements also support perfective maintenance activities, such as changes and additions to the software in order to improve its service and to adapt it to changes in the firm's technical or commercial environment.

Testability
Testability requirements deal with the testing of the software system as well as with its
operation. It includes predefined intermediate results, log files, and also the automatic

diagnostics performed by the software system prior to starting the system, to find out whether
all components of the system are in working order and to obtain a report about the detected
faults. Another type of these requirements deals with automatic diagnostic checks applied by
the maintenance technicians to detect the causes of software failures.
Product Transition Software Quality Factor

According to McCall's model, three software quality factors are included in the product transition category, which deals with the adaptation of software to other environments and its interaction with other software systems. These factors are as follows:

Portability
Portability requirements address the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.

Reusability
This factor deals with the use of software modules originally designed for one project in a new
software project currently being developed. They may also enable future projects to make use
of a given module or a group of modules of the currently developed software. The reuse of
software is expected to save development resources, shorten the development period, and
provide higher quality modules.

Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with
other equipment firmware. For example, the firmware of the production machinery and
testing equipment interfaces with the production control software.

3. a) Describe the framework for software product metrics.


b) What are the metrics used for software maintenance

Ans: a) Framework for software product metrics


Measure:
-A measure provides a quantitative indication of the extent, amount, dimension, capacity,
or size of some attribute of a product or process.
- Measurement is the act of determining a measure.
Metric:
- A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
Indicator:
An indicator is a metric or combination of metrics that provide insight in to the software
process, a software project, or the product itself.

b) Metrics for maintenance can be used for the development of new software and the
maintenance of existing software.
IEEE Std. 982.1 suggests a software maturity index (SMI) that provides an indication of the stability of a software product.


The following information is determined:


MT = the number of modules in the current release
Fc = the number of modules in the current release that have been changed
Fa = the number of modules in the current release that have been added
Fd = the number of modules from the preceding release that were deleted in the current
release
The software maturity index is computed in the following manner:
SMI = [MT − (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize.
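The computation above can be expressed directly in code; this is a straight transcription of the SMI formula, with illustrative numbers only in the example:

```python
# Software Maturity Index (IEEE Std. 982.1), as defined above:
#   SMI = [MT - (Fa + Fc + Fd)] / MT

def software_maturity_index(mt, fa, fc, fd):
    """mt: modules in the current release; fa: modules added;
    fc: modules changed; fd: modules deleted since the prior release."""
    return (mt - (fa + fc + fd)) / mt
```

For example, a release with 100 modules of which 5 were added, 10 changed, and 3 deleted gives SMI = (100 − 18) / 100 = 0.82; as churn drops, the value approaches 1.0 and the product stabilizes.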

4. Explain brief about integration testing strategies.


Ans:

Integration testing is a systematic technique for constructing the program structure while at the
same time conducting tests to uncover errors associated with interfacing.
The objective is to take unit tested components and build a program structure that has been
dictated by design.
Top-down integration testing is an incremental approach to construction of program structure.
Modules are integrated by moving downward through the control hierarchy, beginning with the
main control module (main program). Modules subordinate to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
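The stub idea in the steps above can be sketched in code. All module names here are hypothetical stand-ins, not from the text — the point is that the main control module is exercised first, with stubs substituting for subordinates that are not yet integrated:

```python
# Sketch of top-down integration: the main control module is tested
# first, with stubs standing in for its subordinate components.
# All names are hypothetical.

def stub_fetch_data():
    """Stub: returns canned data instead of calling the real component."""
    return [3, 1, 2]

def stub_render(report):
    """Stub: produces a traceable string instead of real rendering."""
    return f"rendered:{report}"

def main_control(fetch=stub_fetch_data, render=stub_render):
    """Main control module under test; subordinates are injected so that
    stubs can later be replaced, one at a time, by real components."""
    data = fetch()
    report = sorted(data)   # the control logic actually being verified
    return render(report)
```

As real components become available, they are passed in place of the stubs (step 2), and the same tests are re-run (steps 3–5).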
Bottom-up integration testing, as its name implies, begins construction and testing with atomic
modules (i.e., components at the lowest levels in the program structure). Because components are
integrated from the bottom up, processing required for components subordinate to a given level is
always available and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software sub function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
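Similarly, the bottom-up driver/cluster structure can be sketched as follows (component and cluster names are hypothetical):

```python
# Sketch of bottom-up integration: low-level components are combined
# into a cluster, and a driver (a small control program) coordinates
# test input and output. All names are hypothetical.

def parse_record(line):          # atomic module 1
    return line.strip().split(",")

def validate_record(fields):     # atomic module 2
    return all(f != "" for f in fields)

def cluster_ingest(lines):
    """Cluster combining the two low-level components (step 1)."""
    return [parse_record(ln) for ln in lines
            if validate_record(parse_record(ln))]

def driver():
    """Test driver (step 2): supplies input and checks output (step 3)."""
    out = cluster_ingest(["a,b", " c,d ", "e,,f"])  # last record is invalid
    return out == [["a", "b"], ["c", "d"]]
```

Once the cluster passes, the driver is removed and the cluster is combined with the next level up in the program structure (step 4).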
Smoke testing (see Q7 above) is also an integration testing approach: the evolving build is exercised frequently, often daily, to expose show-stopper errors early.


5. Discuss a framework for product metrics?

Measures, Metrics, and Indicators: These three terms are often used interchangeably, but they have subtle differences.

Measure: provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
Measurement: the act of determining a measure.
Metric (IEEE): a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
Indicator: a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.

The Challenge of Product Metrics: metrics are used to:

1. Characterize: to gain understanding of processes, products, resources, and environments.
2. Evaluate: to determine status with respect to plans.
3. Predict: to plan.
4. Improve.
Measurement Principles:-
Formulation - The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
Collection-The mechanism used to accumulate data required to derive the formulated metrics.
Analysis-The computation of metrics and the application of mathematical tools.
Interpretation- The evaluation of metrics resulting in insight into the quality of the
representation.
Feedback-Recommendations derived from the interpretation of product metrics transmitted to
the software team.
Characterizing and Validating Metrics:-
A metric should have desirable mathematical properties
It should have a meaningful range (e.g., zero to ten)
It should not be set on a ratio scale if it is composed of components measured on an ordinal scale.
If a metric represents a software characteristic that increases when positive traits occur or
decreases when undesirable traits are encountered, the value of the metric should increase or
decrease in the same manner.
Each metric should be validated empirically in a wide variety of contexts before being
published or used to make decisions
It should measure the factor of interest independently of other factors
It should scale up to large systems
It should work in a variety of programming languages and system domains.
Goal-Oriented Software Measurement:-
The Goal/Question/Metric (GQM) paradigm as a technique for identifying meaningful
metrics for any part of the software Process.
GQM emphasizes the need to
(1) establish an explicit measurement goal that is specific to the process activity or product
characteristic that is to be assessed,
(2) define a set of questions that must be answered in order to achieve the goal, and
(3) identify well-formulated metrics that help to answer these questions.
A goal definition template can be used to define each measurement goal. The template takes
the form:
Analyze {the name of activity or attribute to be measured}
For the purpose of {the overall objective of the analysis}
With respect to {the aspect of the activity or attribute that is considered}
From the viewpoint of {the people who have an interest in the measurement}
In the context of {the environment in which the measurement takes place}.


The Attributes of Effective Software Metrics:-


Simple and computable:- It should be relatively easy to learn how to derive the metric, and its
computation should not demand inordinate effort or time.
Empirically and intuitively persuasive:- The value of the metric should satisfy the engineer's intuitive notions about the product attribute under consideration.
Consistent and objective: The metric should always yield results that are unambiguous.
Consistent in its use of units and dimensions:- The mathematical computation of the metric should
use measures that do not lead to bizarre combinations of units.
Programming language independent:- Metrics should be based on the requirements model, the
design model, or the structure of the program itself.
An effective mechanism for high-quality feedback:- That is, the metric should provide you with
information that can lead to a higher-quality end product.
product metrics landscape:
Various metrics formulated for products in the development process are listed below.
Metrics for analysis model: These address various aspects of the analysis model such as
system functionality, system size, and so on.
Metrics for design model: These allow software engineers to assess the quality of design and
include architectural design metrics, component-level design metrics, and so on.
Metrics for source code: These assess source code complexity, maintainability, and other
characteristics.
Metrics for testing: These help to design efficient and effective test cases and also evaluate the
effectiveness of testing.
Metrics for maintenance: These assess the stability of the software product.


6 a) Compare black box testing with white box testing.


6 b) Write a long note on system testing?

System Testing

Its primary purpose is to test the complete software. System tests include:

1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
5) Deployment Testing
In system testing the software and other system elements are tested as a whole.
To test computer software, you spiral out in a clockwise direction along streamlines that
increase the scope of testing with each turn.
System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires
human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it
is within acceptable limits.
Security testing attempts to verify that protection mechanisms built into a system will, in
fact, protect it from improper penetration.
During security testing, the tester plays the role(s) of the individual who desires to penetrate the
system.
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume.
A variation of stress testing is a technique called sensitivity testing.
Performance testing is designed to test the run-time performance of software within the
context of an integrated system.
Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be assessed as tests are
conducted.
Deployment testing, sometimes called configuration testing, exercises the software in
each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software that will be used by customers, and all documentation that will be
used to introduce the software to end users.

7. Write short note on

a) Software failure
b) Black box testing, White box testing and Stress Testing

a) Software Failure
A software failure occurs when the user perceives that the software has ceased to deliver the expected result with respect to the specified input values.
Major factors that lead to software project failure are:
i) Application or coding errors
ii) Environmental factors
iii) Infrastructure failures
iv) Viruses
v) Hackers, etc.
Software failures or incorrect software requirements can have severe consequences including
customer dissatisfaction, the loss of financial assets and even loss of human lives.

b) BLACK-BOX TESTING

Black-Box Testing alludes to tests that are conducted at the software interface. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software. It attempts to answer questions such as: How is functional validity tested? What classes of input will make good test cases? Is the system particularly sensitive to certain input values?

WHITE-BOX TESTING OR GLASS-BOX TESTING

White-Box Testing of software is predicated on close examination of procedural detail.


Logical paths through the software and collaborations between components are tested by
providing test cases that exercise specific sets of conditions and/or loops.


Why Cover?

Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
We often believe that a path is not likely to be executed when, in fact, it may be executed regularly; the logical flow of a program is sometimes counter-intuitive.
Typographical errors are random, and some untested paths will contain some.

Stress testing

Stress testing is a type of software testing that verifies the stability and reliability of the system. It tests the system beyond normal operational capacity by subjecting it to extremely heavy load conditions. Stress testing is done to make sure that the system will not crash under crunch situations.
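As a minimal in-code sketch of the idea — the Server class below is a hypothetical in-process stand-in, not a real networked server — many concurrent clients hammer one shared component to check that it stays consistent under heavy load:

```python
# Minimal stress-test sketch: several concurrent "clients" send many
# requests to one shared component and we verify nothing is lost.
import threading

class Server:
    """Hypothetical stand-in for a server; counts handled requests."""
    def __init__(self):
        self._lock = threading.Lock()
        self.handled = 0

    def handle_request(self):
        with self._lock:   # without the lock, lost updates appear under load
            self.handled += 1

def stress(server, clients=8, requests_per_client=1000):
    """Run `clients` threads, each issuing many requests, and return
    the total number of requests the server actually handled."""
    def client():
        for _ in range(requests_per_client):
            server.handle_request()
    threads = [threading.Thread(target=client) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return server.handled
```

If the server dropped or double-counted requests under contention, the returned total would differ from clients × requests_per_client, which is exactly the kind of crunch-situation defect stress testing aims to expose.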

(Figure: four clients — Client 1 to Client 4 — simultaneously sending requests to a single server under stress load.)

8.Demonstrate art of debugging. Demonstrate metrics for analysis model


THE ART OF DEBUGGING

Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is an action that results in the removal of the error.

The Debugging Process:


The debugging process will usually have one of two outcomes: (1) the cause will be found and corrected, or (2) the cause will not be found. In the latter case, the person performing debugging may suspect a cause, design one or more test cases to help validate that suspicion, and work toward error correction in an iterative fashion.

Why is debugging so difficult? In all likelihood, human psychology has more to do with an answer than software technology. However, a few characteristics of bugs provide some clues:

The symptom and the cause may be geographically remote: the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled components exacerbate this situation.
The symptom may be caused by human error that is not easily traced.
The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
The symptom may be due to causes that are distributed across a number of tasks running on different processors.
The consequences of an error may range from mildly annoying to catastrophic.

Psychological Considerations

Debugging is one of the more frustrating parts of software engineering. It has elements of problem solving or brain teasers, coupled with the annoying recognition that you have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of error increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected.

Debugging Strategies

In general, debugging has one overriding objective: to find and correct the cause of a software error. The objective is realized by a combination of systematic evaluation, intuition, and luck.

Three debugging strategies are commonly used:

Brute force: the most common, but least efficient, method of isolating the cause of a software error.
Backtracking: a fairly common approach that can be used successfully in small programs; beginning at the site where a symptom has been uncovered, the source code is traced backward until the cause is found.
Cause elimination: based on the concept of binary partitioning; data related to the error occurrence are organized to isolate potential causes.

Each of these approaches can be supplemented with debugging tools that provide semi-automated support for the software engineer as debugging strategies are attempted.
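The binary-partitioning idea behind cause elimination can be sketched as a bisection over the input data (the failing predicate and the assumption of a single culprit are hypothetical illustrations):

```python
# Sketch of cause elimination via binary partitioning: repeatedly halve
# the input until the smallest failing piece is isolated. Assumes,
# hypothetically, that exactly one input item triggers the failure.

def find_culprit(items, fails):
    """Return the single item for which `fails` reports a failure,
    by bisecting the input instead of checking items one by one."""
    lo, hi = 0, len(items)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(items[lo:mid]):   # failure reproduces in the first half
            hi = mid
        else:                      # otherwise it must lie in the second half
            lo = mid
    return items[lo]
```

Each round of partitioning eliminates half the remaining candidates, so a culprit in a million inputs is isolated in about twenty checks rather than a million.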

The people factor

A fresh viewpoint, unclouded by hours of frustration, can do wonders. A final


maxim for debugging might be: when all else fails, get help.

Correcting the error

The correction of a bug can introduce other errors and therefore do more harm than good. Van Vleck suggests three simple questions that every software engineer should ask before making the "correction" that removes the cause of a bug:

1. Is the cause of the bug reproduced in another part of the program?
2. What "next bug" might be introduced by the fix I am about to make?
3. What could we have done to prevent this bug in the first place?

METRICS FOR THE ANALYSIS MODEL

Function-based metrics: use the function point as a normalizing factor or as a measure of the size of the specification.
Specification metrics: used as an indication of quality by measuring the number of requirements by type.

Function-Based Metrics

The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system.

Function points are derived using an empirical relationship based on countable (direct) measures
of software's information domain and assessments of software complexity.

The information domain values include: number of external inputs (EIs), number of external
outputs (EOs), number of external inquiries (EQs), number of internal logical files (ILFs), and
number of external interface files (EIFs).

Function Points

FP = count total * [0.65 + 0.01 * Σ(Fi)]     (1)

Where count total is the sum of all FP entries obtained from the information domain counts, and
Fi (i = 1 to 14) are value adjustment factors based on responses to the following questions:

1. Does the system require reliable backup and recovery?

2. Are specialized data communications required to transfer information to or from the
application?

3. Are there distributed processing functions?

4. Is performance critical?

5. Will the system run in an existing, heavily utilized operational environment?

6. Does the system require on-line data entry?

7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?

8. Are the ILFs updated on-line?

9. Are the inputs, outputs, files or inquiries complex?

10. Is the internal processing complex?

11. Is the code designed to be reusable?

12. Are conversion and installation included in the design?

13. Is the system designed for multiple installations in different organizations?

14. Is the application designed to facilitate change and for ease of use by the user?

Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential).

The constant values in equation-1 and the weighting factors that are applied to information
domain counts are determined empirically.
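As an illustrative sketch of equation (1), the function below computes FP from the (already-weighted) information domain count and the fourteen Fi answers. The example figures are invented for illustration, not taken from any real project.

```python
def function_points(count_total, value_adjustment_factors):
    """Compute FP = count_total * [0.65 + 0.01 * sum(Fi)].

    count_total: the weighted sum of all information domain entries.
    value_adjustment_factors: the fourteen Fi answers, each on the 0..5 scale.
    """
    if len(value_adjustment_factors) != 14:
        raise ValueError("exactly 14 value adjustment factors are expected")
    if not all(0 <= f <= 5 for f in value_adjustment_factors):
        raise ValueError("each Fi must be on the 0..5 scale")
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# A count total of 50 with every factor rated 3:
# FP = 50 * (0.65 + 0.01 * 42) = 50 * 1.07 = 53.5
fp = function_points(50, [3] * 14)
```

Note that when every Fi is 0 the adjustment multiplier bottoms out at 0.65, and when every Fi is 5 it tops out at 1.35, so the adjustment can move the raw count by at most ±35%.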

9. a) Demonstrate metrics for the analysis model.

b) Describe metrics for source code and for testing.

a) Metrics for the analysis model:

There are only a few metrics that have been proposed for the analysis model. However, it is
possible to use metrics for project estimation in the context of the analysis model. These metrics
are used to examine the analysis model with the objective of predicting the size of the resultant
system. Size acts as an indicator of increased coding, integration, and testing effort; sometimes it
also acts as an indicator of complexity involved in the software design. Function point and lines
of code are the commonly used methods for size estimation.
Function Point (FP) Metric
The function point metric, which was proposed by A.J Albrecht, is used to measure the
functionality delivered by the system, estimate the effort, predict the number of errors, and
estimate the number of components in the system. Function point is derived by using a
relationship between the complexity of software and the information domain value. Information
domain values used in function point include the number of external inputs, external outputs,
external inquires, internal logical files, and the number of external interface files.
Lines of Code (LOC)
Lines of code (LOC) is one of the most widely used methods for size estimation. LOC can be
defined as the number of delivered lines of code, excluding comments and blank lines. It is
highly dependent on the programming language used as code writing varies from one
programming language to another. For example, lines of code written (for a large program) in
assembly language are more than lines of code written in C++.
From LOC, simple size-oriented metrics can be derived such as errors per KLOC (thousand lines
of code), defects per KLOC, cost per KLOC, and so on. LOC has also been used to predict
program complexity, development effort, programmer performance, and so on. For example,
Halstead proposed a number of metrics, which are used to calculate program length, program
volume, program difficulty, and development effort.
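The size-oriented metrics derived from LOC are simple ratios; a minimal sketch, with invented project figures:

```python
def per_kloc(count, loc):
    """Normalize a count (errors, defects, cost, ...) per thousand lines of code."""
    return count / (loc / 1000)

# Hypothetical project: 12,400 delivered LOC with 31 recorded defects.
defect_density = per_kloc(31, 12_400)  # 2.5 defects per KLOC
```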
Metrics for Specification Quality
To evaluate the quality of analysis model and requirements specification, a set of characteristics
has been proposed. These characteristics include specificity, completeness, correctness,
understandability, verifiability, internal and external consistency, &achievability, concision,
traceability, modifiability, precision, and reusability.
Most of the characteristics listed above are qualitative in nature. However, each of these
characteristics can be represented by using one or more metrics. For example, if there are
nr requirements in a specification, then nr can be calculated by the following equation.
nr = nf + nnf
Where

nf = number of functional requirements

nnf = number of non-functional requirements.


In order to determine the specificity of requirements, a metric based on the consistency of the
reviewer's understanding of each requirement has been proposed. This metric is represented by
the following equation.
Q1 = nui/nr
Where
nui = number of requirements for which reviewers have same understanding
Q1 = specificity.
Ambiguity of the specification depends on the value of Q1. If the value of Q1 is close to 1, then the
probability of ambiguity is low.
Completeness of the functional requirements can be calculated by the following equation.
Q2 = nu / [ni * ns]
Where
nu = number of unique functional requirements
ni = number of inputs defined by the specification
ns = number of specified states.
Q2 in the above equation considers only functional requirements and ignores non-functional
requirements. In order to consider non-functional requirements, it is necessary to consider the
degree to which requirements have been validated. This can be represented by the following
equation.
Q3 = nc / [nc + nnv]
Where
nc= number of requirements validated as correct
nnv= number of requirements, which are yet to be validated.
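The three specification-quality ratios Q1, Q2, and Q3 can be computed directly from the counts defined above. The review counts in the example are invented for illustration.

```python
def specificity(n_unambiguous, n_requirements):
    """Q1 = nui / nr: fraction of requirements all reviewers interpret identically."""
    return n_unambiguous / n_requirements

def completeness(n_unique_functions, n_inputs, n_states):
    """Q2 = nu / (ni * ns): unique functional requirements per input/state combination."""
    return n_unique_functions / (n_inputs * n_states)

def validation_degree(n_correct, n_not_yet_validated):
    """Q3 = nc / (nc + nnv): share of requirements already validated as correct."""
    return n_correct / (n_correct + n_not_yet_validated)

q1 = specificity(18, 20)        # 0.9  -- close to 1, so low ambiguity
q2 = completeness(24, 8, 5)     # 0.6
q3 = validation_degree(30, 10)  # 0.75
```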
b) Metrics for source code and for testing:

Metrics for Coding


Halstead proposed the first analytic laws for computer science by using a set of primitive
measures, which can be derived once the design phase is complete and code is generated. These
measures are listed below.
n1 = number of distinct operators in a program
n2 = number of distinct operands in a program
N1 = total number of operators
N2 = total number of operands.
By using these measures, Halstead developed an expression for overall program length, program
volume, program difficulty, development effort, and so on.
Program length (N) can be calculated by using the following equation.
N = n1 log2 n1 + n2 log2 n2.
Program volume (V) can be calculated by using the following equation.
V = N log2 (n1+n2).
Note that program volume depends on the programming language used and represents the
volume of information (in bits) required to specify a program. Volume ratio (L) can be calculated
by using the following equation.

L = Volume of the most compact form of a program / Volume of the actual program

Where the value of L must be less than 1. Volume ratio can also be calculated by using the
following equation.
L = (2/n1)* (n2/N2).
Program difficulty level (D) and effort (E) can be calculated by using the following equations.
D = (n1/2)*(N2/n2).
E = D * V.
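The Halstead computations above can be collected into one function. The operator/operand counts in the example are invented; note that this sketch computes V from the actual length N1 + N2 (the conventional reading), while the estimated length N is reported separately.

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead measures from distinct (n1, n2) and total (N1, N2)
    operator/operand counts, following the equations in this section."""
    length = N1 + N2                                      # actual program length
    est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length N
    volume = length * math.log2(n1 + n2)                  # V = N log2(n1 + n2)
    level = (2 / n1) * (n2 / N2)                          # volume ratio / level L
    difficulty = (n1 / 2) * (N2 / n2)                     # D = 1 / L
    effort = difficulty * volume                          # E = D * V
    return {"N": length, "N_est": est_length, "V": volume,
            "L": level, "D": difficulty, "E": effort}

# Invented counts: 10 distinct operators, 20 distinct operands,
# 50 operator occurrences, 60 operand occurrences.
m = halstead(n1=10, n2=20, N1=50, N2=60)  # D = 15.0, V ~ 539.76
```

Since D is defined as 1/L, the product L * D should always come out to 1, which makes a handy sanity check on an implementation.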

Metrics for Software Testing


The majority of the metrics used for testing focus on the testing process rather than the technical
characteristics of the tests. Generally, testers use metrics for analysis, design, and coding to guide
them in the design and execution of test cases.
Function point can be effectively used to estimate testing effort. Various characteristics like
errors discovered, number of test cases needed, testing effort, and so on can be determined by
estimating the number of function points in the current project and comparing them with any
previous project.
Metrics used for architectural design can be used to indicate how integration testing can be
carried out. In addition, cyclomatic complexity can be used effectively as a metric in the basis-
path testing to determine the number of test cases needed.
Halstead measures can be used to derive metrics for testing effort. By using program volume (V)
and program level (PL), Halstead effort (e) can be calculated by the following equations.
e = V / PL
Where
PL = 1 / [(n1/2) * (N2/n2)]
For a particular module (z), the percentage of overall testing effort allocated can be calculated by
the following equation.

percentage of testing effort (z) = e(z) / Σ e(i)

Where e(z) is calculated for module z with the help of equation (1). The summation in the
denominator is the sum of Halstead effort (e) across all modules of the system.
For developing metrics for object-oriented (OO) testing, different types of design metrics that
have a direct impact on the testability of object-oriented system are considered. While developing
metrics for OO testing, inheritance and encapsulation are also considered. A set of metrics
proposed for OO testing is listed below.

Lack of cohesion in methods (LCOM): This indicates the number of states to be tested.
LCOM indicates the number of methods that access one or more of the same attributes. The value of
LCOM is 0 if no methods access the same attributes. As the value of LCOM increases, more
states need to be tested.
Percent public and protected (PAP): This shows the number of class attributes, which are
public or protected. Probability of adverse effects among classes increases with increase in
value of PAP as public and protected attributes lead to potentially higher coupling.
Public access to data members (PAD): This shows the number of classes that can access
attributes of another class. Adverse effects among classes increase as the value of PAD
increases.
Number of root classes (NOR): This specifies the number of different class hierarchies, which
are described in the design model. Testing effort increases with increase in NOR.
Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than 1, it indicates
that the class inherits its attributes and operations from more than one root class. Note that this
situation (FIN > 1) should be avoided.
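Several variants of LCOM exist in the literature; the sketch below uses the Chidamber-Kemerer form (method pairs sharing no attribute minus pairs sharing at least one, floored at zero), which is one concrete reading of the informal description above.

```python
from itertools import combinations

def lcom(method_attrs):
    """Chidamber-Kemerer LCOM: |method pairs sharing no attribute| minus
    |pairs sharing at least one|, floored at zero.
    method_attrs maps method name -> set of attributes it accesses."""
    disjoint = shared = 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:          # the two methods access at least one common attribute
            shared += 1
        else:
            disjoint += 1
    return max(disjoint - shared, 0)

# m1 and m2 share attribute "a"; m3 touches only "c":
# disjoint pairs = 2, sharing pairs = 1, so LCOM = 1
value = lcom({"m1": {"a"}, "m2": {"a", "b"}, "m3": {"c"}})
```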


10. a) List the metrics for the design model.

b) Write long notes on system testing.

a) Metrics for the design model:

The success of a software project depends largely on the quality and effectiveness of the
software design. Hence, it is important to develop software metrics from which
meaningful indicators can be derived.
With the help of these indicators, necessary steps are taken to design the software
according to the user requirements.
Various design metrics such as architectural design metrics, component-level design
metrics, user-interface design metrics, and metrics for object-oriented design are used to
indicate the complexity, quality, and so on of the software design.

Architectural Design Metrics

These metrics focus on the features of the program architecture with stress on architectural
structure and effectiveness of components (or modules) within the architecture. In architectural
design metrics, three software design complexity measures are defined, namely, structural
complexity, data complexity, and system complexity.
Structural complexity of a module j is
calculated by the following equation.

S(j) = fout(j)^2

Where
fout(j) = fan-out of module j [fan-out means the number of modules immediately subordinate to
module j].

The complexity of a module's internal interface is indicated with the help of data complexity,
which is calculated by the following equation.

D(j) = V(j) / [fout(j) + 1]

Where
V(j) = number of input and output variables that are passed to and from module j.
System complexity is the sum of structural complexity and data complexity and is calculated by
the following equation.
C(j) = S(j) + D(j)
The complexity of a system increases with increases in structural complexity and data
complexity, which in turn increases the integration and testing effort in the later
stages.
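A minimal sketch of the three complexity measures; the fan-out and variable count below are invented for illustration.

```python
def structural_complexity(fan_out):
    """S(j) = fout(j)^2"""
    return fan_out ** 2

def data_complexity(num_io_vars, fan_out):
    """D(j) = V(j) / (fout(j) + 1)"""
    return num_io_vars / (fan_out + 1)

def system_complexity(num_io_vars, fan_out):
    """C(j) = S(j) + D(j)"""
    return structural_complexity(fan_out) + data_complexity(num_io_vars, fan_out)

# A module with fan-out 3 that passes 8 input/output variables:
# S = 9, D = 8 / 4 = 2.0, so C = 11.0
c = system_complexity(num_io_vars=8, fan_out=3)
```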


Morphology metrics

In addition, various other metrics like simple morphology metrics are also used. These metrics
allow comparison of different program architecture using a set of straightforward dimensions. A
metric can be developed by referring to call and return architecture. This metric can be defined
by the following equation.

Size = n+a

Where

n = number of nodes

a= number of arcs.

For example, consider an architecture with 17 nodes and 18 arcs. Here, size can be calculated by the
following equation.

Size = n + a = 17 + 18 = 35.

Depth is defined as the longest path from the top node (root) to the leaf node and width is defined
as the maximum number of nodes at any one level.

Coupling of the architecture is indicated by arc-to-node ratio. This ratio also measures the
connectivity density of the architecture and is calculated by the following equation.

r = a/n

r = 18/17 = 1.06
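The morphology measures for a call-and-return architecture can be checked with a few lines; the node and arc counts follow the 17-node, 18-arc figure used for r above.

```python
def morphology(nodes, arcs):
    """Size and arc-to-node ratio (connectivity density) for a
    call-and-return architecture."""
    return {"size": nodes + arcs, "r": arcs / nodes}

m = morphology(nodes=17, arcs=18)  # size = 35, r = 18/17, roughly 1.06
```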

Quality of software design also plays an important role in determining the overall quality of the
software. Many software quality indicators that are based on measurable design characteristics of
a computer program have been proposed. One of them is Design Structural Quality Index
(DSQI), which is derived from the information obtained from data and architectural design. To
calculate DSQI, a number of steps are followed, which are listed below.

1. To calculate DSQI, the following values must be determined.

Number of components in program architecture (S1)


Number of components whose correct function is determined by the source of input data
(S2)
Number of components whose correct function depends on previous processing (S3)
Number of database items (S4)
Number of different database items (S5)

Number of database segments (S6)
Number of components having a single entry and exit (S7).
2. Once all the values from S1 to S7 are known, some intermediate values are calculated, which
are listed below.

Program structure (D1): If discrete methods are used for developing the architectural design, then
D1 = 1, else D1 = 0

Module independence (D2): D2 = 1-(S2/S1)

Modules not dependent on prior processing (D3): D3 = 1-(S3/S1)

Database size (D4): D4 = 1-(S5/S4)

Database compartmentalization (D5): D5 = 1-(S6/S4)

Module entrance/exit characteristic (D6): D6 = 1-(S7/S1)

3. Once all the intermediate values are calculated, DSQI is calculated by the following equation.

DSQI = Σ WiDi

Where

i = 1 to 6

Σ Wi = 1 (Wi is the weighting of the importance of each intermediate value).
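Putting the three DSQI steps together, a sketch with invented S-counts and equal weights:

```python
def dsqi(s, d1=1.0, weights=None):
    """DSQI = sum(Wi * Di), computed from the counts S1..S7 (dict keys 1..7).
    d1 is 1.0 if discrete methods were used for the architectural design, else 0.0.
    Weights default to 1/6 each and should sum to 1."""
    d = [
        d1,                # D1: program structure
        1 - s[2] / s[1],   # D2: module independence
        1 - s[3] / s[1],   # D3: modules not dependent on prior processing
        1 - s[5] / s[4],   # D4: database size
        1 - s[6] / s[4],   # D5: database compartmentalization
        1 - s[7] / s[1],   # D6: module entrance/exit characteristic
    ]
    weights = weights or [1 / 6] * 6
    return sum(w * di for w, di in zip(weights, d))

# Invented counts for illustration:
counts = {1: 100, 2: 20, 3: 10, 4: 50, 5: 40, 6: 5, 7: 80}
quality = dsqi(counts)  # (1 + 0.8 + 0.9 + 0.2 + 0.9 + 0.2) / 6, roughly 0.667
```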

Metrics for Object-oriented Design

In order to develop metrics for object-oriented (OO) design, nine distinct and measurable
characteristics of OO design are considered, which are listed below.

Complexity: Determined by assessing how classes are related to each other


Coupling: Defined as the physical connection between OO design elements
Sufficiency: Defined as the degree to which an abstraction possesses the features required
of it
Cohesion: Determined by analyzing the degree to which a set of properties that the class
possesses is part of the problem domain or design domain
Primitiveness: Indicates the degree to which the operation is atomic
Similarity: Indicates similarity between two or more classes in terms of their structure,
function, behavior, or purpose
Volatility: Defined as the probability of occurrence of change in the OO design
Size: Defined with the help of four different views, namely, population, volume, length,
and functionality. Population is measured by calculating the total number of OO entities,
which can be in the form of classes or operations. Volume measures are collected
dynamically at any given point of time. Length is a measure of interconnected designs
such as depth of inheritance tree. Functionality indicates the value rendered to the user by
the OO application.

b) System testing:

Its primary purpose is to test the complete software.


1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
5) Deployment Testing
In system testing the software and other system elements are tested as a whole.
To test computer software, you spiral out in a clockwise direction along streamlines
that increase the scope of testing with each turn.
System testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
Recovery testing is a system test that forces the software to fail in a variety of ways
and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness. If
recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated
to determine whether it is within acceptable limits.
Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
During security testing, the tester plays the role(s) of the individual who desires to penetrate
the system.
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume.
A variation of stress testing is a technique called sensitivity testing.
Performance testing is designed to test the run-time performance of software within
the context of an integrated system.
Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be assessed as tests are
conducted.
Deployment testing, sometimes called configuration testing, exercises the software
in each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software that will be used by customers, and all documentation that will
be used to introduce the software to end users.
