
SOFTWARE TESTING

UNIT – 1
PART – A

1.List out the levels of the testing maturity model.

LEVELS OF TESTING MATURITY MODEL (TMM):


Level 1: Initial
Level 2: Phase Definition
Level 3: Integration
Level 4: Management and Measurement
Level 5: Optimization/Defect prevention and Quality Control

2. Define fault and failure.

FAULT:
A fault (defect) is introduced into the software as the result of an error. It
is an anomaly in the software that may cause it to behave incorrectly, and not
according to its specification.

FAILURE:
A failure is the inability of a software system or component to perform its
required functions within specified performance requirements.

3. What are the sources of defects?

SOURCES OF DEFECTS:
 Lack of Education
 Poor Communication
 Oversight
 Transcription
 Immature Process

4. Mention the objective of software testing.

OBJECTIVE OF SOFTWARE TESTING:


 Finding defects which may get created by the programmer while
developing the software.
 Gaining confidence in and providing information about the level of
quality.
 To prevent defects
 To make sure that the end result meets the business and user requirements.
 To ensure that it satisfies the BRS that is Business Requirement
Specification and SRS that is System Requirement Specification.
 To gain the confidence of the customers by providing them a quality
product.

5. Differentiate verification and validation.

VERIFICATION vs. VALIDATION:
1. Verification is a static practice of verifying documents, design, code,
and program. Validation is a dynamic mechanism of validating and testing
the actual product.
2. Verification does not involve executing the code. Validation always
involves executing the code.
3. Verification is human-based checking of documents and files.
Validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and
desk-checking. Validation uses methods like black box (functional)
testing, gray box testing, and white box (structural) testing.
5. Verification is to check whether the software conforms to
specifications. Validation is to check whether the software meets the
customer's expectations and requirements.
6. Verification can catch errors that validation cannot catch; it is a
low-level exercise. Validation can catch errors that verification cannot
catch; it is a high-level exercise.
7. Verification generally comes first and is done before validation.
Validation generally follows after verification.
8. Verification is done by the QA team to ensure that the software is as
per the specifications in the SRS document. Validation is carried out
with the involvement of the testing team.

6. Mention the role of process in software quality.


ROLE OF PROCESS IN SOFTWARE QUALITY:
Process, in the software engineering domain, is the set of methods,
practices, standards, documents, activities, policies, and procedures that
software engineers use to develop and maintain a software system and its
associated artifacts, such as project and test plans, design documents, code,
and manuals.

7. Point out the role of defect Repository.

To increase the effectiveness of the testing and debugging processes,


software organizations need to initiate the creation of a defect database, or
defect repository. The defect repository concept supports storage and retrieval
of defect data from all projects in a centrally accessible location.

8. How would you classify the types in defect classes?

Defects are assigned to four major classes reflecting their point of origin in
the software life cycle, that is, the development phases in which they were
injected. These classes are:
 Requirements/Specifications
 Design
 Code
 Testing

9. Tell about test, test Oracle and Test Bed.


TEST:
A test is a group of related test cases, or a group of related test cases and
test procedures (steps needed to carry out a test).

TEST ORACLE:
A test oracle is a document, or piece of software that allows testers to
determine whether a test has been passed or failed.

TEST BED:
A test bed is an environment that contains all the hardware and software
needed to test a software component or a software system.

10. List the members of the critical groups in testing process.

THE MEMBERS OF THE CRITICAL GROUPS IN TESTING PROCESS.


 Manager
 Tester/Developer
 User/Client

11. List the element of the engineering disciplines.

THE ELEMENT OF THE ENGINEERING DISCIPLINES:


 Basic Principles
 Processes
 Standards
 Measurements
 Tools
 Methods
 Best Practices
 Code of Ethics
 Body of knowledge

12. Compare the process of testing and debugging.

Testing is a dual-purpose process: it reveals defects and evaluates quality
attributes. Debugging, or fault localization, is the process of locating the
fault or defect, repairing the code, and retesting the code.

13. What is meant by feature defects?

Features may be described as distinguishing characteristics of a software
component or system. Features refer to functional aspects of the software that
map to the functional requirements described by the user and the client; they
also map to quality requirements such as performance and reliability.

14. Why test cases should be developed for both valid and invalid inputs?

A tester must not assume that the software under test will always be
provided with valid inputs. Inputs may be incorrect for several reasons. For
example, software users may have misunderstandings, or lack information about
the nature of the inputs.

15. Mention the role of test engineer in software development organization.

Testing is sometimes erroneously viewed as a destructive activity. The


tester’s job is to reveal defects, find weak points, inconsistent behaviour, and
circumstances where the software does not work as expected.

16. How would you formulate the cost of a defect?

An organization incurs extra expenses for:
 Performing a wrong design based on the wrong requirements;
 Transforming the wrong design into wrong code during the coding phase;
 Testing to make sure the product complies with the (wrong) requirements;
 Releasing the product with the wrong functionality.

17. Explain some of the quality metric attributes

SOME OF THE QUALITY METRIC ATTRIBUTES:


 Correctness
 Reliability
 Usability
 Integrity
 Portability
 Maintainability
 Interoperability

18. What is a defect? Give example?

A defect (fault) is an anomaly introduced into the software as the result of
an error; it may cause the software to behave incorrectly and not according to
its specification. A simple example is a variable name that is misspelled when
entering the code, or an incorrect loop condition. Defects can be classified in
many ways, and it is important for an organization to adopt a single
classification scheme and apply it to all projects. Developers, testers, and SQA
staff should try to be as consistent as possible when recording defect data.

19. Summarize the major components in software development process

THE MAJOR COMPONENTS IN SOFTWARE DEVELOPMENT


PROCESS :
 Requirement Gathering
 Analysis
 Implementation
 Testing
 Maintenance

20. Error Vs Defect Vs Failure. Discuss

ERROR: An error is a mistake, misconception, or misunderstanding on the
part of a software developer. The developers and automation test engineers
raise the error in the development phase or stage.

DEFECT: A fault (defect) is introduced into the software as the result of an
error. The testers identify the defect, and it is solved by the developer during
the development cycle.

FAILURE: A failure is the inability of a software system or component to
perform its required functions within specified performance requirements. The
failure is found by the manual test engineer.

PART – B

1.Elaborate on the principles of software testing

 Testing principles are important to test specialists because they provide the
foundation for developing testing knowledge and acquiring testing skills.
 They also provide guidance for defining testing activities as performed in the
practice of a test specialist. A principle can be defined as:
 A general or fundamental law, doctrine, or assumption;
 A rule or code for conduct;
 The laws or facts of nature underlying the working of an artificial
device.
The principles stated below relate only to execution-based testing.

Principle1:
Testing is the process of exercising a software component using a
selected set of test cases, with the intent of:
 Revealing defects, and
 Evaluating quality.
 Software engineers have made great progress in developing methods to
prevent and eliminate defects. However, defects do occur, and they have
a negative impact on software quality. This principle supports testing as
an execution-based activity to detect defects.
 The term defect as used in this and in subsequent principles represents any
deviation in the software that has a negative impact on its functionality,
performance, reliability, security, or other specified quality
attributes.

Principle-2:
When the test objective is to detect defects, then a good test case is
one that has a high probability of revealing an as yet undetected defect.
 The goal of the test is to prove/disprove a hypothesis, that is, to
determine whether the specific defect is present or absent.
 A tester can justify the expenditure of the resources by careful test design
so that principle two is supported.

Principle-3:
Test result should be inspected meticulously.

 Testers need to carefully inspect and interpret test results. Several
erroneous and costly scenarios may occur if care is not taken.

Example:
A failure may be overlooked, and the test may be granted a pass status
when in reality the software has failed the test. Testing may continue based on
erroneous test results. The defect may be revealed at some later stage of testing,
but in that case it may be more costly and difficult to locate and repair.

Principle-4:
A test case must contain the expected output or result.

 The test case is of no value unless there is an explicit statement of the


expected outputs or results.

Example:
A specific variable value must be observed, or a certain panel button
must light up.
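As a small illustration of this principle (the function square() and its test
value are hypothetical, not taken from the text above), a test case can record
its expected output as an explicit assertion in C:

/* Principle 4 sketch: the test case pairs the input (4) with an explicit
   expected output (16); a test without the expected value has no way to
   decide pass or fail. */
#include <assert.h>

int square(int x) { return x * x; }

int main(void) {
    assert(square(4) == 16);   /* input 4, expected output 16 */
    return 0;
}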

Principle-5:
Test cases should be developed for both valid and invalid input
conditions.

 The tester must not assume that the software under test will always be
provided with valid inputs.
 Inputs may be incorrect for several reasons.

Example:
Software users may have misunderstandings, or lack information about
the nature of the inputs. They often make typographical errors even when
complete/correct information is available. Devices may also provide invalid
inputs due to erroneous conditions and malfunctions.

Principle-6:
The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in
that component.

Example:
If there are two components A and B and testers have found 20 defects in
A and 3 defects in B, then the probability of the existence of additional defects
in A is higher than B.

Principle-7:
Testing should be carried out by a group that is independent of the
development group.
Tester must realize that
1. Developers have a great deal of pride in their work and
2. On practical level it may be difficult for them to conceptualize
where defects could be found.

Principle-8:
Tests must be repeatable and reusable
This principle calls for experiments in the testing domain to require
recording of the exact condition of the test, any special events that occurred,
equipment used, and a careful accounting of the results.
This information is invaluable to the developers when the code is
returned for debugging so that they can duplicate test conditions.

Principle-9:
Testing should be planned.
Test plans should be developed for each level of testing, and the objectives
for each level should be described in the associated plan.
The objectives should be stated as quantitatively as possible, so that each
plan has precisely specified objectives.

Principle-10:
Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code
has been written.
Test planning activities should be integrated into the software life cycle,
starting as early as the requirements analysis phase, and should continue
throughout the software life cycle in parallel with development activities.

Principle-11:
Testing is a creative and challenging task.

Difficulties and challenges for the tester include:

 A tester needs to have comprehensive knowledge of the software
engineering discipline.
 A tester needs to have knowledge from both experience and education
as to how software is specified, designed, and developed.

2.(a) Describe about the components of software development process


(b) List and discuss the technological developments that are causing
organizations to revise their approach to testing

3. Write short notes on the list given below


(a) Cost of defect

Cost of Defect
An organization incurs extra expenses for:
 Performing a wrong design based on the wrong requirements;
 Transforming the wrong design into wrong code during the coding phase;
 Testing to make sure the product complies with the (wrong) requirements;
 Releasing the product with the wrong functionality.

Defects from early phases add to the costs: there is a compounding effect of
defects on software costs. The cost of building a product and the number of
defects in it increase steeply with the number of defects allowed to seep into
the later phases.

(b) Elements of Engineering disciplines

4.(a) Discuss in detail about the testing axioms

1. It’s Impossible to Test a Program Completely


Due to four key reasons:
 The number of possible inputs is very large.
 The number of possible outputs is very large.
 The number of paths through the software is very large.
 The software specification is subjective.
Ex: Microsoft Windows Calculator
Assume that you are assigned to test the Windows Calculator. You decide
to start with addition. You try 1+0=. You get an answer of 1. That’s correct.
Then you try 1+1=. You get 2. How far do you go? The calculator accepts a 32-
digit number, so you must try all the possibilities up to
1+99999999999999999999999999999999=
Once you complete that series, you can move on to 2+0=, 2+1=, 2+2=, and so
on. Eventually you’ll get to
99999999999999999999999999999999+9999999999999999999999999999999
9=
Next you should try all the decimal values: 1.0+0.1, 1.0+0.2, and so on.
It's impossible to completely test a program, even software as simple as a
calculator. If you decide to eliminate any of the test conditions because you feel
they're redundant or unnecessary, or just to save time, you've decided not to
test the program completely.
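As a rough, hedged sketch of the combinatorics involved (the 32-digit limit is
taken from the calculator description above; the counting ignores negative and
decimal operands), the following C snippet estimates how many addition-only
input pairs exhaustive testing would need:

/* Rough estimate only: non-negative integer operands of up to 32 digits
   give about 10^32 values each, so roughly 10^64 a+b pairs to test. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double operands = pow(10.0, 32);     /* values 0 .. 10^32 - 1   */
    double pairs = operands * operands;  /* every a+b combination   */
    printf("approx. addition test pairs: %.3g\n", pairs);  /* ~1e+64 */
    return 0;
}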

2. Software Testing Is a Risk-Based Exercise


One key concept that software testers need to learn is how to reduce the
huge domain of possible tests into a manageable set, and how to make wise
risk-based decisions on what’s important to test and what’s not.
A graph of the amount of testing performed against the number of bugs
found shows that if you attempt to test everything, the costs go up dramatically
and the number of missed bugs declines to the point that it's no longer cost
effective to continue.
If you cut the testing short or make poor decisions of what to test, the costs are
low but you’ll miss a lot of bugs. The goal is to hit that optimal amount of
testing so that you don’t test too much or too little.

3. Testing Can’t Show That Bugs Don’t Exist


You’re an exterminator charged with examining a house for bugs. You
inspect the house and find evidence of bugs

House 1:

Findings: maybe live bugs, dead bugs, or nests.

Conclusion: You can safely say that the house has bugs.

House 2:
Findings: no evidence of bugs, no signs of an infestation.
Maybe you find a few dead bugs or old nests, but you see nothing that tells you
that live bugs exist.
Conclusion: In your search you didn't find any live bugs. Unless you completely
dismantled the house down to the foundation, you can't be sure that you didn't
simply miss them.

Software testing works exactly as the exterminator does. It can show that
bugs exist, but it can’t show that bugs don’t exist. You can perform your tests,
find and report bugs, but at no point can you guarantee that there are no longer
any bugs to find.

4. The More Bugs You Find, the More Bugs There Are
Reasons:
 Programmers have bad days. Like all of us, programmers can have off
days. Code written one day may be perfect; code written another may be sloppy.
 Programmers often make the same mistake. Everyone has habits. A programmer
who is prone to a certain error will often repeat it.
 Some bugs are really just the tip of the iceberg. Very often the software's design
or architecture has a fundamental problem. A tester will find several bugs that at
first may seem unrelated but eventually are discovered to have one primary
serious cause.

5. The Pesticide Paradox


The test process repeats each time around the loop. With each iteration,
the software testers receive the software for testing and run their tests.
Eventually, after several passes, all the bugs that those tests would find are
exposed. Continuing to run them won’t reveal anything new.
To overcome the pesticide paradox, software testers must continually write new
and different tests to exercise different parts of the program and find more bugs.

6. Not All the Bugs You Find Will Be Fixed


reasons why you might choose not to fix a bug:
• There’s not enough time. In every project there are always too many
software features, too few people to code and test them, and not enough room
left in the schedule to finish. If you’re working on a tax preparation program,
April 15 isn’t going to move— you must have your software ready in time.
• It’s really not a bug. Maybe you’ve heard the phrase, “It’s not a bug, it’s
a feature!” It’s not uncommon for misunderstandings, test errors, or spec
changes to result in would-be bugs being dismissed as features.
• It’s too risky to fix. You might make a bug fix that causes other bugs to
appear. Under the pressure to release a product under a tight schedule, it might
be too risky to change the software. It may be better to leave in the known bug
to avoid the risk of creating new, unknown ones.
• It’s just not worth it. This may sound harsh, but it’s reality. Bugs that
would occur infrequently or bugs that appear in little-used features may be
dismissed.

7. When a Bug’s a Bug Is Difficult to Say


Rules to define a bug:
1. The software doesn’t do something that the product specification says
it should do.
2. The software does something that the product specification says it
shouldn’t do.
3. The software does something that the product specification doesn’t
mention.
4. The software doesn’t do something that the product specification
doesn’t mention but should.
5. The software is difficult to understand, hard to use, slow, or—in the
software tester’s eyes—will be viewed by the end user as just plain not right.

8. Product Specifications Are Never Final


You’re halfway through the planned two year development cycle, and
your main competitor releases a product very similar to yours but with several
desirable features that your product doesn’t have.
Do you continue with your spec as is and release an inferior product in another
year?
Or, does your team regroup, rethink the product’s features, rewrite the product
spec, and work on a revised product?

9. Software Testers Aren’t the Most Popular Members of a Project Team


The goal of a software tester is to find bugs, find them as early as possible,
and make sure they get fixed.
 Find bugs early.
 Temper your enthusiasm.
 Don't always report bad news.

10. Software Testing Is a Disciplined Technical Profession


If software testers were used, they were frequently untrained and brought
into the project late to do some “ad-hoc banging on the code to see what they
might find.” Times have changed.
The software industry has progressed to the point where professional
software testers are mandatory. It’s now too costly to build bad software.

(b) Explain defect classification in detail.

5.(a)Write short notes on Origins of defects.

The term defect is related to the terms error and failure in the context of
the software development domain. Defects have the following origins:

1. Education:
The software engineer did not have the proper educational background to
prepare the software artifact. She did not understand how to do something.
For example,
a software engineer who did not understand the precedence order of
operators in a particular programming language could inject a defect in an
equation that uses the operators for a calculation.
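A minimal, hypothetical C illustration of such a precedence defect (the
variable names and the averaging formula are invented for this example, not
taken from the text):

/* Education-related defect sketch: the engineer assumed '+' binds tighter
   than '/', so the intended average (a + b) / 2 is coded as a + b / 2,
   which evaluates as a + (b / 2). */
#include <stdio.h>

int main(void) {
    int a = 4, b = 6;
    int wrong_average = a + b / 2;     /* defect: evaluates to 4 + 3 = 7 */
    int right_average = (a + b) / 2;   /* intended result: 5             */
    printf("wrong=%d right=%d\n", wrong_average, right_average);
    return 0;
}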

2. Communication:
The software engineer was not informed about something by a colleague.
For example,
if engineer 1 and engineer 2 are working on interfacing modules, and
engineer 1 does not inform engineer 2 that no error-checking code will appear
in the interfacing module he is developing, engineer 2 might make an incorrect
assumption relating to the presence/absence of an error check, and a defect
will result.

3. Oversight:
The software engineer omitted to do something. For example, a software
engineer might omit an initialization statement.

4. Transcription:
The software engineer knows what to do, but makes a mistake in doing it.
A simple example is a variable name being misspelled when entering the code.

5. Process:
The process used by the software engineer misdirected her actions.
For example,
A development process that did not allow sufficient time for a detailed
specification to be developed and reviewed could lead to specification defects.

(b) Explain the various origins of defects. Explain the major classes of
defects in the software artifacts

Completing the analogy of a doctor and an ill patient, one can view
defective software as the ill patient: a successful test reveals the problem so
that the doctor can begin treatment. Testers, as doctors, need to have knowledge
about possible defects (illnesses) in order to develop defect hypotheses. They
use the hypotheses to:
 design test cases;
 design test procedures;
 assemble test sets;
 select the testing levels (unit, integration, etc.) appropriate
for the tests;
 evaluate the results of the tests.

Physical defects in the digital world may be due to manufacturing errors,


component wear-out, and/or environmental effects.

6. Short notes on
(a) Precision and accuracy.

(b) Verification and validation

Validation is the process of evaluating a software system or component
during, or at the end of, the development cycle in order to determine whether it
satisfies specified requirements.
Validation is usually associated with traditional execution-based testing, that
is, exercising the code with test cases.
Verification is the process of evaluating a software system or component
to determine whether the products of a given development phase satisfy the
conditions imposed at the start of that phase. Verification is usually associated
with activities such as inspections and reviews of software deliverables.

VERIFICATION vs. VALIDATION:
1. Verification is a static practice of verifying documents, design, code,
and program. Validation is a dynamic mechanism of validating and testing
the actual product.
2. Verification does not involve executing the code. Validation always
involves executing the code.
3. Verification is human-based checking of documents and files.
Validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and
desk-checking. Validation uses methods like black box (functional)
testing, gray box testing, and white box (structural) testing.
5. Verification is to check whether the software conforms to
specifications. Validation is to check whether the software meets the
customer's expectations and requirements.
6. Verification can catch errors that validation cannot catch; it is a
low-level exercise. Validation can catch errors that verification cannot
catch; it is a high-level exercise.
7. Verification generally comes first and is done before validation.
Validation generally follows after verification.
8. Verification is done by the QA team to ensure that the software is as
per the specifications in the SRS document. Validation is carried out
with the involvement of the testing team.

7.(a) Explain in detail about defect repository.

A defect repository can help to support achievements and continuous


implementation of several TMM maturity goals including controlling and
monitoring of test, software quality evaluation and control ,test measurements,
and test process improvement.

It is important, if you are a member of a test organization, to illustrate to
management and colleagues the benefit of developing a defect repository to
store defect information. As software engineers and test specialists, we should
follow the example of engineers in other disciplines who realized the usefulness
of defect data. Defect monitoring should continue for each ongoing project.
The distribution of defects will change as you make changes in your processes.
The defect data is useful for test planning, a TMM level 2 maturity goal. It
helps you to select applicable testing techniques, design the test cases you
need, and allocate the amount of resources you will need to devote to detecting
and removing these defects.
(b) Analyze the Role of process in Software quality

Process in this context is defined below, and is illustrated in Figure 1.2.

Process, in the software engineering domain, is the set of methods, practices,


standards, documents, activities, policies, and procedures that software
engineers use to develop and maintain a software system and its associated
artifacts, such as project and test plans, design documents, code, and manuals.
A software process is a set of activities and associated results which
produces a software product. These activities are,
1. Software specification: The functionality of the software and constraints
on its operation must be defined.
2. Software development: The software to meet the specification must
be produced.
3. Software validation: The software must be validated to ensure that it
does what the customer wants.
4. Software evolution: The software must evolve to meet changing
customer needs.

The software development process, like most engineering artifacts, must


be engineered. That is, it must be designed, implemented, evaluated, and
maintained.

Engineering is the application of scientific, economic, social, and


practical knowledge in order to design, build, and maintain structures, machines,
devices, systems, materials and processes.
One who practices engineering is called an engineer, and those licensed
to do so may have more formal designations such as Professional Engineer.

All the software process improvement models that have had wide
acceptance in industry are high-level models, in the sense that they focus on the
software process as a whole and do not offer adequate support to evaluate and
improve specific software development sub processes such as design and testing.
In spite of its vital role in the production of quality software, existing
process evaluation and improvement models such as the CMM, Bootstrap, and
ISO-9000 have not adequately addressed testing process issues. The Testing
Maturity Model (TMM) has been developed at the Illinois Institute of
Technology by a research group to address deficiencies in these areas.

8. Why is it important to meticulously inspect test results, and what are the
drawbacks in case you fail to inspect? Illustrate with an example. (13)

Human errors can cause a defect or failure at any stage of the software
development lifecycle. The results are classified as trivial or catastrophic,
depending on the consequences of the error.
The requirement of rigorous testing and their associated documentation during
the software development life cycle arises because of the below reasons:
 To identify defects
 To reduce flaws in the component or system
 Increase the overall quality of the system
There can also be a requirement to perform software testing to comply with
legal requirements or industry-specific standards. These standards and rules can
specify what kind of techniques should we use for product development. For
example, the motor, avionics, medical, and pharmaceutical industries, etc., all
have standards covering the testing of the product.
The points below shows the significance of testing for a reliable and easy to use
software product:
 The testing is important since it discovers defects/bugs before the
delivery to the client, which guarantees the quality of the software.
 It makes the software more reliable and easy to use.
 Thoroughly tested software ensures reliable and high-performance
software operation.
Testers need to carefully inspect and interpret test results. Several erroneous
and costly scenarios may occur if care is not taken.
Example:
A failure may be overlooked, and the test may be granted a "pass" status
when in reality the software has failed the test. Testing may continue based on
erroneous test results. The defect may be revealed at some later stage of testing,
but in that case it may be more costly and difficult to locate and repair. A failure
may be suspected when in reality none exists. In this case the test may be
granted a "fail" status. Much time and effort may be spent on trying to find the
defect that does not exist. A careful re-examination of the test results could
finally indicate that no failure has occurred.

9. Give an Overview of the Testing Maturity Model (TMM) & the test
related activities that should be done for V-model architecture.

The internal structure of the TMM is rich in testing practices that can be
learned and applied in a systematic way to support a quality testing process that
improves in incremental steps. There are five levels in the TMM that prescribe a
maturity hierarchy and an evolutionary path to test
process improvement.

Each level with the exception of level 1 has a structure that consists of the
following:
 A set of maturity goals. The maturity goals identify testing improvement
goals that must be addressed in order to achieve maturity at that level. To
be placed at a level, an organization must satisfy the maturity goals at
that level. The TMM levels and associated maturity goals.
 Supporting maturity sub goals. They define the scope, boundaries and
needed accomplishments for a particular level.
 Activities, tasks and responsibilities (ATR). The ATRs address
implementation and organizational adaptation issues at each TMM level.
Supporting activities and tasks are identified, and responsibilities are
assigned to appropriate groups.
Level 1—Initial: (No maturity goals)
At TMM level 1, testing is a chaotic process; it is ill-defined, and not
distinguished from debugging. A documented set of specifications for software
behavior often does not exist. Tests are developed in an ad hoc way after coding
is completed. Testing and debugging are interleaved to get the bugs out of the
software.

Level 2—Phase Definition:

Goal 1: Develop testing and debugging goals;


Goal 2: Initiate a testing planning process;
Goal 3: Institutionalize basic testing techniques and methods
At level 2 of the TMM testing is separated from debugging and is defined
as a phase that follows coding. It is a planned activity; however, test planning at
level 2 may occur after coding for reasons related to the immaturity of the
testing process. For example, there may be the perception at level 2, that all
testing is execution based and dependent on the code; therefore, it should be
planned only when the code is complete.

Level 3—Integration:

Goal 1: Establish a software test organization;


Goal 2: Establish a technical training program;
Goal 3: Integrate testing into the software life cycle;
Goal 4: Control and monitor testing

At TMM level 3, testing is no longer a phase that follows coding, but is


integrated into the entire software life cycle. Organizations can build on the test
planning skills they have acquired at level 2. Unlike level 2, planning for testing
at TMM level 3 begins at the requirements phase and continues throughout the
life cycle supported by a version of the V-model.

Level 4—Management and Measurement:

Goal 1: Establish an organization wide review program;


Goal 2: Establish a test measurement program;
Goal 3: Software quality evaluation

Testing at level 4 becomes a process that is measured and quantified.


Reviews at all phases of the development process are now recognized as
testing/quality control activities. They are a complement to execution-based tests
to detect defects and to evaluate and improve software quality.

Level 5—Optimization/Defect Prevention/Quality Control:

Goal 1: Defect prevention;


Goal 2: Quality control;
Goal 3: Test process optimization

Because of the infrastructure that is in place through achievement of the


maturity goals at levels 1–4 of the TMM, the testing process is now said to be
defined and managed; its cost and effectiveness can be monitored. At level 5,
mechanisms are in place so that testing can be fine-tuned and continuously
improved. Defect prevention and quality control are practiced. Statistical
sampling, measurements of confidence levels, trustworthiness, and reliability
drive the testing process. Automated tools totally support the running and
rerunning of test cases.

(Figure: Extension of the V-model.)

10.(a) Describe the various software testing activities

(b) Define correctness, reliability, integrity, interoperability. Discuss how


these are related to testing

The various factors which influence the software are termed software
factors. They can be broadly divided into two categories. The first category
contains the factors that can be measured directly, such as the number of
logical errors, and the second category groups those factors which can be
measured only indirectly, for example, maintainability. Each of the factors is
to be measured to check for content and quality control. Several models of
software quality factors and their categorization have been suggested over the
years.
Quality relates to the degree to which a system, system component, or process
meets
 specified requirements.
 customer or user needs, or expectations.
We can measure the degree to which the software possess a given quality
attribute with quality metrics.
 A metric is a quantitative measure of the degree to which a system,
system component, or process possesses a given attribute
 A quality metric is a quantitative measurement of the degree to
which an item possesses a given quality attribute

Correctness
These requirements deal with the correctness of the output of the software
system. They include
 Output mission
 The required accuracy of output that can be negatively affected by
inaccurate data or inaccurate calculations.
 The completeness of the output information, which can be affected by
incomplete data.
 The up-to-dateness of the information defined as the time between the
event and the response by the software system.
 The availability of the information.
 The standards for coding and documenting the software system.

Reliability
Reliability requirements deal with service failure. They determine the
maximum allowed failure rate of the software system, and can refer to the entire
system or to one or more of its separate functions.

Integrity
This factor deals with the software system security, that is, preventing
access by unauthorized persons and distinguishing between the groups of
people to be given read as well as write permission.

Interoperability
Interoperability requirements focus on creating interfaces with other
software systems or with other equipment firmware. For example, the firmware
of the production machinery and testing equipment interfaces with the
production control software.
11(a) Why it is necessary to develop test cases for both valid and invalid
input condition?

Test cases should be developed for both valid and invalid input conditions.
 The tester must not assume that the software under test will always be
provided with valid inputs.
 Inputs may be incorrect for several reasons.
 Use of test cases that are based on invalid inputs is very useful for
revealing defects since they may exercise the code in unexpected ways
and identify unexpected software behavior.
 Invalid inputs also help developers and Software Test Engineers to
evaluate the robustness of the software, that is, its ability to recover when
unexpected events occur (in this case an erroneous input).
Example:
Software users may have misunderstandings, or lack information about
the nature of the inputs. They often make typographical errors even when
complete/correct information is available. Devices may also provide invalid
inputs due to erroneous conditions and malfunctions.

Principle 5 (Test cases should be developed for both valid and invalid
input conditions) supports the need for the independent test group called for in
Principle 7 (Testing should be carried out by a group that is independent of the
development group) for the following reason. The developer of a software
component may be biased in the selection of test inputs for the component and
specify only valid inputs in the test cases to demonstrate that the software works
correctly. An independent tester is more apt to select invalid inputs as well.

(b) How important to document a product? How will you test requirement
and design document?

12. Compare and contrast terms errors faults and failures using suitable
examples
ERROR:
MAIN DIFFERENCE:
13. Write the major needs of testing and model of testing in details

14. Explain in detail processing and monitoring of the defects with defect
repository?

PART – C

1. Explain in detail how developer / tester support to develop a defect


repository?

 Testing is sometimes erroneously viewed as a destructive activity. The


tester’s job is to reveal
defects, find weak points, inconsistent behavior, and circumstances where the
software does
not work as expected.
 As a tester you need to be comfortable with this role. Given the nature of
the tester’s tasks,
you can see that it is difficult for developers to effectively test their own code.
Teams of
testers and developers are very common in industry, and projects should have
an appropriate
developer/tester ratio.
 The ratio will vary depending on available resources, type of project, and
TMM level. For
example, an embedded real-time system needs to have a lower developer/tester
ratio (for
example, 2/1) than a simple data base application (4/1 may be suitable).
 At higher TMM levels where there is a well-defined testing group, the
developer/ tester ratio
would tend to be on the lower end (for example 2/1 versus 4/1) because of the
availability of tester resources. In addition to cooperating with code developers,
testers also need to work alongside with requirements engineers to ensure that
requirements are testable, and to plan for system and acceptance test (clients are
also involved in the latter).
 Testers also need to work with designers to plan for integration and unit
test. In addition, test
managers will need to cooperate with project managers in order to develop
reasonable test plans, and with upper management to provide input for the
development and maintenance of organizational testing standards, policies, and
goals.
 Finally, testers also need to cooperate with software quality assurance
staff and software
engineering process group members. In view of these requirements for multiple
working relationships, communication and team working skills are necessary
for a successful career as
a tester.
 Testers are specialists, their main function is to plan, execute, record, and
analyze tests. They
do not debug software.
 When defects are detected during testing, software should be returned to
the developers who
locate the defect and repair the code. The developers have a detailed
understanding of the code, and are the best qualified staff to perform debugging.
 Finally, testers need the support of management. Developers, analysts,
and marketing staff
need to realize that testers add value to a software product in that they detect
defects and evaluate quality as early as possible in the software life cycle.
 This ensures that developers release code with few or no defects, and that
marketers can
deliver software that satisfies the customers’ requirements, and is reliable,
usable, and correct.

Test Engineers are usually responsible for:


 Developing test cases and procedures
 Software testers need to develop test matrices to control the design
of test cases.
 Software Testers need to design test cases based on effective
testing techniques.
 Software testers need to design procedures based on the project
needs.

 Test data planning, capture, and conditioning


 Software testers need to plan test data to be used during test
execution.
 Reviewing analysis and design artifacts
 Software testers need to review and analyze:
 Requirement documents.
 Functional Documents.
 Design Documents.

 Test execution
 Software testers are responsible for test execution based on testing
milestones.

 Utilizing automated test tools for regression testing


 Software testers are responsible to learn automated testing tools to
simplify regression testing.

 Preparing test documentation


 Software testers need to prepare any necessary testware during the
project:
1. Procedures.
2. Guidelines.
 Defect tracking and reporting
 Software testers are responsible to:
1. Find Defects.
2. Report defects.
3. Verify and validate defect fixes.
4. Other testers joining the team will focus on:
4.1 Test execution.
4.2 Defect reporting.
4.3 Regression testing.
The test team should be represented in all key requirements and design
meetings, including:
 JAD or requirements definition sessions.
 Risk analysis sessions.
 Prototype review sessions.

2. Discuss the tester role in software development organization


3. Suppose you are testing defect coin problem artifacts, Identify the causes
of various defects. What steps could have been taken to prevent the
various classes of defects?
Design defects could propagate to the code. Here additional defects have been
introduced in the coding phase.

Control, logic, and sequence defects. These include the loop variable
increment step which is out of the scope of the loop. Note that the incorrect
loop condition (i <= 6) is carried over from design and should be counted as a
design defect.

Algorithmic and processing defects. The division operator may cause
problems if negative values are divided, although this problem could be
eliminated with an input check.
Data Flow defects. The variable total_coin_value is not initialized. It is used
before it is defined. (This might also be considered a data defect.)
Data Defects. The error in initializing the array coin_values is carried over
from design and should be counted as a design defect.
External Hardware, Software Interface Defects. The call to the external
function "scanf" is incorrect. The address of the variable must be provided
(&number_of_coins).
Code Documentation Defects. The documentation that accompanies this code
is incomplete and ambiguous. It reflects the deficiencies in the external interface
description and other defects that occurred during specification and design.
Vital information is missing for anyone who will need to repair, maintain, or
reuse this code.
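The following short C sketch is a reconstruction for illustration only (it is
not the original coin-problem artifact) and shows two of the defects named
above, the data-flow defect and the interface defect, alongside corrected forms:

/* Reconstructed illustration of two coin-problem defects and their fixes. */
#include <stdio.h>

int main(void) {
    int number_of_coins;
    int total_coin_value = 0;           /* fix: initialize before use        */
                                        /* defect was: int total_coin_value; */

    scanf("%d", &number_of_coins);      /* fix: pass the variable's address  */
                                        /* defect was: scanf("%d", number_of_coins); */

    total_coin_value += 25 * number_of_coins;   /* e.g., counting quarters */
    printf("total value = %d\n", total_coin_value);
    return 0;
}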

The poor quality of this small program is due to defects injected during several
of the life cycle phases with probable causes ranging from lack of education, a
poor process, to oversight on the part of the designers and developers. Even
though it implements a simple function the program is unusable because of the
nature of the defects it contains. Such software is not acceptable to users; as
testers we must make use of all our static and dynamic testing tools as described
in subsequent chapters to ensure that such poor-quality software is not delivered
to our user/client group. We must work with analysts, designers and code
developers to ensure that quality issues are addressed early in the software life
cycle. We must also catalog defects and try to eliminate them by improving
education, training, communication, and process.
4. Give the internal structure of TMM and explain about its maturity goals
at each level.

Testing Maturity Model - Definition


TMM is a learning tool, or framework, to learn about testing. It
introduces both the technical and managerial aspects of testing. It evolves the
testing process at both the personal and organizational levels. It follows a
staged architecture for process improvement models.
It has five levels that prescribe a maturity hierarchy and an evolutionary path to
test process improvement. Each level has (except Level 1):
A set of maturity goals - The maturity goals identify testing improvement goals
that must be addressed in order to achieve maturity at that level.
Supporting maturity subgoals - They define the scope, boundaries and needed
accomplishments for a particular level.
Activities, tasks and responsibilities (ATR) - address implementation and
organizational adaptation issues at each TMM level. Supporting activities and
tasks are identified, and responsibilities are assigned to appropriate groups.

Internal Structure of TMM maturity model

Testing Maturity Model - 5-level structure


Level 1—Initial: (No maturity goals)
 testing is a chaotic process; it is ill-defined
 Not distinguished from debugging.
 The objective of testing is to show the software works
 Software products are often released without quality assurance.
 lack of resources, tools and properly trained staff.

Level 2—Phase Definition:

Goal 1: Develop testing and debugging goals;


Goal 2: Initiate a testing planning process;
Goal 3: Institutionalize basic testing techniques and methods
 testing is separated from debugging and is defined as a phase that follows
coding.
 It is a planned activity; however, test planning at level 2 may occur after
coding for reasons related to the immaturity of the testing process.
 use of black box and white box testing strategies, and a validation
cross-reference matrix
 Testing is multileveled - unit, integration, system, and acceptance levels.

Level 3—Integration

Goal 1: Establish a software test organization;


Goal 2: Establish a technical training program;
Goal 3: Integrate testing into the software life cycle;
Goal 4: Control and monitor testing
 testing is integrated into the entire software life cycle
 There is a test organization, and testing is recognized as a professional
activity.
 There is a technical training organization with a testing focus
 Testing is monitored to ensure it is going according to plan and actions
can be taken if deviations occur

Level 4—Management and Measurement

Goal 1: Establish an organization wide review program;


Goal 2: Establish a test measurement program;
Goal 3: Software quality evaluation
 process that is measured and quantified. Reviews at all phases of the
development process are now recognized as testing/quality control
activities.
 Software products are tested for quality attributes such as reliability,
usability, and maintainability.
 Test cases from all projects are collected and recorded in a test case
database for the purpose of test case reuse and regression testing. Defects
are logged and given a severity level.
 Some of the deficiencies occurring in the test process are due to the lack
of a defect prevention philosophy. An extension of the V-model as shown
in Figure can be used to support the implementation of this goal

Level 5—Optimization/Defect Prevention/Quality Control

Goal 1: Defect prevention;


Goal 2: Quality control;
Goal 3: Test process optimization
 the testing process is now said to be defined and managed; its cost and
effectiveness can be monitored. Defect prevention and quality control are
practiced. Automated tools totally support the running and rerunning of
test cases
UNIT II - TEST CASE DESIGN STRATEGIES
Part-A
1.List the advantages of Equivalence class partitioning.
(i).It is process-oriented
(ii).We can achieve the Minimum test coverage
(iii).It helps to decrease the general test execution time and also reduce the
set of test data

2. Show the need of code functional testing in test case design.

o It produces a defect-free product.


o It ensures that the customer is satisfied.
o It ensures that all requirements met.
o It ensures the proper working of all the functionality of an
application/software/product.
o It ensures that the software/ product work as expected.
o It ensures security and safety.
o It improves the quality of the product.

3. Create the equivalence classes in testing the program for quadratic equation

Solution.
4.Write the two basic testing strategies used to design test cases.
(i).Black box testing
(ii).White box testing

5.Define COTS components.


The reusable component may come from a code reuse
library within the organization or, as is most likely, from an outside vendor
who specializes in the development of specific types of software
components. Components produced by vendor organizations are known as
commercial off-the-shelf, or COTS, components.

6.List some of the advantages of documentation testing and domain testing.


Documentation testing:
(i). User documentation testing aids in highlighting problems overlooked
during reviews.
(ii). High quality user documentation ensures consistency of documentation
and product, thus minimizing possible defects reported by customers.
(iii). New programmers and testers who join a project group can use the
documentation to learn the external functionality of the product.
7.Compare black box and white box testing.

Definition: Black box testing is a software testing method in which the
internal structure/design/implementation of the item being tested is NOT
known to the tester. White box testing is a software testing method in which
the internal structure/design/implementation of the item being tested is known
to the tester.

Levels mainly applicable to: Black box testing applies to higher levels of
testing (acceptance testing and system testing). White box testing applies to
lower levels of testing (unit testing and integration testing).

Responsibility: Black box testing is generally done by independent software
testers. White box testing is generally done by software developers.

Programming knowledge: Not required for black box testing; required for
white box testing.

Implementation knowledge: Not required for black box testing; required for
white box testing.

Basis for test cases: Requirement specifications for black box testing; detail
design for white box testing.
8. Tell the steps involved in developing test cases with a cause- and- effect graph.
 Identify and describe the input conditions (causes) and actions
(effect).
 Build up a cause-effect graph.
 Convert cause-effect graph into a decision table.
 Convert decision table rules to test cases where each column of the
decision table represents a test case.
9. Tabulate the black box methods and knowledge sources.
(i).Equivalence Class Testing
(ii).Boundary Value Testing
(iii).Decision Table Testing
(iv).Cause-Effect Testing
(v).Use case Testing
10.Can you classify the compatibility testing and explain?
11.How mutation testing helpful in testing the software?
Mutation Testing is a type of software testing in which certain
statements of the source code are changed/mutated to check if the
test cases are able to find errors in source code. The goal of Mutation
Testing is to ensure the quality of test cases in terms of robustness, that is,
the test cases should fail when run against the mutated source code.
Mutation Testing is also called Fault-based testing strategy as it
involves creating a fault in the program and it is a type of White Box
Testing which is mainly used for Unit Testing.
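A minimal sketch of the idea, assuming a hypothetical function is_adult() and
a single seeded mutant that changes '>=' to '>': the boundary test case
age == 18 is the one that distinguishes (kills) the mutant, which is exactly the
kind of robustness the technique checks for.

/* Mutation testing sketch: a test suite is judged by whether it can tell
   the original function apart from the mutated one. */
#include <assert.h>

int is_adult(int age)        { return age >= 18; }
int is_adult_mutant(int age) { return age > 18; }   /* seeded fault */

int main(void) {
    assert(is_adult(25) == 1);                    /* does not distinguish the mutant */
    assert(is_adult(10) == 0);                    /* does not distinguish the mutant */
    assert(is_adult(18) != is_adult_mutant(18));  /* this boundary case kills it     */
    return 0;
}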

12.Define code complexity testing .How it is related to testing?

Cyclomatic complexity is a software metric used to indicate the


complexity of a program. It is a quantitative measure of the
number of linearly independent paths through a program's source
code. Broadly speaking, cyclomatic complexity is derived by
counting the number of potential paths through the system
(typically at the method level). Originally designed to estimate
the number of unit tests a method needs, cyclomatic complexity is
built into a lot of metric tools and static analysis tools.

13.Point out the difference of static testing from structural testing

14.What do you meant by test adequacy criteria?.

A test adequacy criterion is a predicate that is true (satisfied) or false


(not satisfied) of a 〈program, test suite〉 pair. Usually a test adequacy
criterion is expressed in the form of a rule for deriving a set of test
obligations from another artifact, such as a program or specification.

15.List white box knowledge source and testing methods


Source
 Control flow testing.
 Data flow testing.
 Branch testing.
 Statement coverage.
 Decision coverage.
 Modified condition/decision coverage.
 Prime path testing.
 Path testing.
Testing methods
 Unit Testing.
 Static Analysis.
 Dynamic Analysis.
 Statement Coverage.
 Branch testing Coverage.
 Security Testing.
 Mutation Testing.
16.What is boundary value analysis?
Boundary-value analysis is a software testing technique in
which tests are designed to include representatives of
boundary values in a range. The idea comes from the observation that
input values at or near the boundaries of a range have a higher chance of
causing errors.

17.Discuss about Desk checking


A desk check is an informal non-computerized or manual
process for verifying the programming and logic of an
algorithm before the program is launched. A desk check helps
programmers to find bugs and errors which would prevent the
application from functioning properly.

18.Sketch the control flow graph for an ATM withdrawal system.


19.How would you calculate cyclomatic complexity?
Cyclomatic complexity can be calculated in three ways:
Method 1: The total number of regions in the flow graph gives the
cyclomatic complexity.
Method 2: The cyclomatic complexity V(G) for a flow graph G is
V(G) = E - N + 2, where E is the number of edges and N is the number of
nodes in the flow graph.
Method 3: The cyclomatic complexity V(G) for a flow graph G is
V(G) = P + 1, where P is the number of predicate (decision) nodes in the
flow graph.
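As a hedged worked example (the function and the numbers are illustrative,
not taken from the question bank), both Method 2 and Method 3 give V(G) = 3
for the small C function below, suggesting three linearly independent paths and
roughly three unit tests:

/* Two decision points (the for condition and the if) => V(G) = 2 + 1 = 3.
   Counting nodes and edges of the flow graph gives E = 7, N = 6, so
   E - N + 2 = 3 as well. */
#include <stdio.h>

int count_positive(const int *a, int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {   /* decision 1 */
        if (a[i] > 0) {             /* decision 2 */
            count++;
        }
    }
    return count;
}

int main(void) {
    int data[] = { -1, 2, 0, 5 };
    printf("%d\n", count_positive(data, 4));   /* prints 2 */
    return 0;
}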
20.What are the factors affecting less than 100% degree of coverage?
 The nature of the unit: some statements/branches may not be reachable;
the unit may be simple, and not mission or safety critical, so complete
coverage is thought to be unnecessary.
 The lack of resources: the time set aside for testing is not adequate to
achieve complete coverage for all of the units; there is a lack of tools to
support complete coverage.
 Other project-related issues such as timing, scheduling, and marketing
constraints.

PART-B
1.Explain about the following methods of black box testing with example
(a) Equivalence class partitioning(6)
(b) Boundary value analysis(7)

(a).Equivalence Partitioning Technique


Equivalence partitioning is a technique of software testing in which the
input data is divided into partitions of valid and invalid values, and all the
values within a partition are expected to exhibit the same behavior. If a
condition holds for one value of a partition, it must hold for the other values
of that partition, and if it fails for one value of a partition, it must fail for
the others. The principle of equivalence partitioning is that test cases should
be designed to cover each partition at least once, since each value of an
equal partition is expected to behave the same as the others.

The equivalence partitions are derived from requirements and


specifications of the software. The advantage of this approach is, it
helps to reduce the time of testing due to a smaller number of test
cases from infinite to finite. It is applicable at all levels of the testing
process.

Examples of Equivalence Partitioning technique


Assume that there is a function of a software application that accepts
a particular number of digits, not greater and less than that particular
number. For example, an OTP number which contains only six digits,
less or more than six digits will not be accepted, and the application
will redirect the user to the error page.

1. OTP number = 6 digits.

2. A function of the software application accepts a 10-digit mobile number.
Mobile number = 10 digits.

In both examples, we can see that the input splits into valid and
invalid partitions. On applying a valid value, such as an OTP of six
digits in the first example or a mobile number of 10 digits in the
second example, both valid partitions behave the same way, i.e. the
user is redirected to the next page.

The other partitions contain invalid values, such as 5 or fewer and 7
or more digits in the first example, and 9 or fewer and 11 or more
digits in the second example. On applying these invalid values, both
invalid partitions behave the same way, i.e. the user is redirected to
the error page.
We can see that each example needs only three test cases, and that is
also the principle of equivalence partitioning, which states that this
method is intended to reduce the number of test cases.
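
The partitions above can be turned directly into executable checks. Below is a
minimal sketch in C++, assuming a hypothetical validator isValidOtp() that
accepts only six-digit OTP strings; one representative value is taken from each
partition.

#include <cassert>
#include <string>

// Hypothetical validator: accepts an OTP only if it is exactly six digits long.
bool isValidOtp(const std::string &otp) {
    if (otp.size() != 6) return false;
    for (char c : otp) {
        if (c < '0' || c > '9') return false;
    }
    return true;
}

int main() {
    // One representative value per equivalence partition:
    assert(isValidOtp("123456"));   // valid partition: exactly 6 digits
    assert(!isValidOtp("12345"));   // invalid partition: fewer than 6 digits
    assert(!isValidOtp("1234567")); // invalid partition: more than 6 digits
    return 0;
}

Any other value chosen from the same partition (for example "654321" instead of
"123456") is expected to behave identically, which is why one test per partition
is enough.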

(b).Boundary Value Analysis


Boundary value analysis is one of the most widely used test case design
techniques for black box testing. It is used to test boundary values
because the input values near a boundary have higher chances of
error.

Whenever we test using boundary value analysis, the tester focuses on
whether the software produces the correct output when a boundary value
is entered.

Boundary values are those that contain the upper and lower limit of a
variable. Assume that, age is a variable of any function, and its
minimum value is 18 and the maximum value is 30, both 18 and 30
will be considered as boundary values.

The basic assumption of boundary value analysis is, the test cases
that are created using boundary values are most likely to cause an
error.

Here, 18 and 30 are the boundary values, which is why the tester pays
more attention to them, but this does not mean that values in the middle
such as 19, 20, 21, 27 and 29 are ignored; representative test cases are
also developed for values within the range.
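
As a rough illustration, the test values selected by boundary value analysis for
the age range above can be written as simple checks. This is only a sketch,
assuming a hypothetical function isAgeAccepted() that accepts ages from 18 to 30
inclusive.

#include <cassert>

// Hypothetical check: age is accepted only when it lies in the range 18..30.
bool isAgeAccepted(int age) {
    return age >= 18 && age <= 30;
}

int main() {
    // Boundary value analysis around the lower boundary (18) and upper boundary (30):
    assert(!isAgeAccepted(17)); // just below the minimum
    assert(isAgeAccepted(18));  // minimum
    assert(isAgeAccepted(19));  // just above the minimum
    assert(isAgeAccepted(24));  // a nominal value from the middle of the range
    assert(isAgeAccepted(29));  // just below the maximum
    assert(isAgeAccepted(30));  // maximum
    assert(!isAgeAccepted(31)); // just above the maximum
    return 0;
}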
2.Write a note on the following
(a) Positive and Negative Testing (6)
(b) Decision Tables.(7)
(a).Positive testing:
Positive testing tries to prove that a given product does what it is
supposed to do. When a test case verifies the requirements of the
product against a set of expected outputs, it is called a positive test
case. The purpose of positive testing is to prove that the product works
as per its specification and expectations. A product delivering an error
when it is expected to give an error is also a part of positive testing.
Positive testing can thus be said to check the product's behaviour for
the positive and negative conditions stated in the requirements.

Reg.No.   Input 1       Input 2              Current state   Expected output
BR-01     Key 123-456   Turn clockwise       Unlocked        Locked
BR-02     Key 123-456   Turn clockwise       Locked          No change
BR-03     Key 123-456   Turn anticlockwise   Unlocked        No change
BR-04     Key 123-456   Turn anticlockwise   Locked          Unlock
BR-05     Hairpin       Turn clockwise       Locked          No change
Negative testing:

Negative testing tries to show that the product does not fail when it is given
invalid or unexpected inputs, i.e. it checks the behaviour of the product for
conditions outside those stated in the requirements. Software testing is all
about checking whether the application is working according to the given
requirements or not. We may have to use various software testing types like
functional testing, unit testing, integration testing, system testing, smoke
testing, regression testing, and sanity testing to complete the process.

Software development is not an easy task to complete, because it involves
writing extensive and complex code and then testing that code to guarantee
faultless and consistent performance.

(b).Decision Tables:

A Decision Table is a tabular representation of inputs versus


rules/cases/test conditions. It is a very effective tool used for both
complex software testing and requirements management. Decision
table helps to check all possible combinations of conditions for testing
and testers can also identify missed conditions easily. The conditions
are indicated as True(T) and False(F) values.

Example:

The condition is simple if the user provides correct username and


password the user will be redirected to the homepage. If any of the
input is wrong, an error message will be displayed.
Conditions       Rule 1   Rule 2   Rule 3   Rule 4
Username (T/F)   F        T        F        T
Password (T/F)   F        F        T        T
Output (E/H)     E        E        E        H

Legend:
 T – Correct username/password
 F – Wrong username/password
 E – Error message is displayed
 H – Home screen is displayed

Interpretation:

 Case 1 – Username and password both were wrong. The user is shown
an error message.
 Case 2 – Username was correct, but the password was wrong. The user
is shown an error message.
 Case 3 – Username was wrong, but the password was correct. The user
is shown an error message.
 Case 4 – Username and password both were correct, and the user
navigated to homepage

While converting this to test cases, we can create 2 scenarios:

 Enter correct username and correct password and click on login, and
the expected result will be the user should be navigated to homepage

And one from the below scenario

 Enter wrong username and wrong password and click on login, and the
expected result will be the user should get an error message
 Enter correct username and wrong password and click on login, and the
expected result will be the user should get an error message
 Enter wrong username and correct password and click on login, and the
expected result will be the user should get an error message
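
The four rules of the decision table map one-to-one onto executable test cases.
The sketch below assumes a hypothetical login() helper that returns "H" for the
home page and "E" for the error message.

#include <cassert>
#include <string>

// Hypothetical login check: returns "H" (home page) only when both the
// username and the password are correct, otherwise "E" (error message).
std::string login(bool usernameCorrect, bool passwordCorrect) {
    return (usernameCorrect && passwordCorrect) ? "H" : "E";
}

int main() {
    // One test case per rule of the decision table:
    assert(login(false, false) == "E"); // Rule 1: both wrong
    assert(login(true,  false) == "E"); // Rule 2: username correct, password wrong
    assert(login(false, true)  == "E"); // Rule 3: username wrong, password correct
    assert(login(true,  true)  == "H"); // Rule 4: both correct, user reaches home page
    return 0;
}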

3.Write short notes on the list given below


(a) Compatibility testing.(6)
(b) Documentation testing.(7)
(a).Compatibility testing:
Checking the functionality of an application on different
software, hardware platforms, network, and browsers is known as
compatibility testing.
Types:

o Software
o Hardware
o Network
o Mobile

Software

Here, software means different operating systems (Linux, Windows,
and Mac), and we also check the software's compatibility on the various
versions of the operating systems, like Win98, Windows 7, Windows 10,
Vista, Windows XP, Windows 8, UNIX, Ubuntu, and Mac.

And, we have two types of version compatibility testing, which are as


follows:

o Forward Compatibility Testing: Test the software or


application on the new or latest versions.
For example: Latest Version of the platforms (software)
Win 7 → Win 8 → Win 8.1 → Win 10
o Backward Compatibility Testing: Test the software or
application on the old or previous versions.
For example:
Window XP → Vista → Win 7 → Win 8 → Win 8.1
And different browsers like Google Chrome, Firefox, and Internet
Explorer, etc.

Hardware

Here we check that the application is compatible with different hardware
configurations, such as RAM and hard disk sizes, processors, graphics
cards, etc.

Mobile

Check that the application is compatible with mobile platforms such


as iOS, Android, etc.

Network

Checking the compatibility of the software in the different network


parameters such as operating speed, bandwidth, and capacity.

(b) Documentation testing.(7)

Test documentation is documentation of artifacts created before or


during the testing of software. It helps the testing team to estimate
testing effort needed, test coverage, resource tracking, execution
progress, etc. It is a complete suite of documents that allows you to
describe and document test planning, test design, test execution, test
results that are drawn from the testing activity.

Types of Testing Documents and their descriptions:

 Test policy - A high-level document which describes the principles, methods
   and testing goals of the organization.
 Test strategy - A high-level document which identifies the test levels (types)
   to be executed for the project.
 Test plan - A complete planning document which contains the scope, approach,
   resources, schedule, etc. of the testing activities.
 Requirements Traceability Matrix - A document which connects the requirements
   to the test cases.
 Test Scenario - An item or event of a software system which could be verified
   by one or more test cases.
 Test case - A group of input values, execution preconditions, expected
   execution postconditions and results. It is developed for a test scenario.
 Test Data - Data which exists before a test is executed; it is used to
   execute the test cases.
 Defect Report - A documented report of any flaw in a software system which
   fails to perform its expected function.
 Test summary report - A high-level document which summarizes the testing
   activities conducted as well as the test results.

4.With suitable example describe how cause-and–effect graphing and state


transition testing is done. (13)

Cause and Effect Graph in Black Box Testing
Cause-effect graph comes under the black box testing technique which
underlines the relationship between a given result and all the factors affecting
the result. It is used to write dynamic test cases.

The dynamic test cases are used when code works dynamically based on user
input. For example, while using email account, on entering valid email, the
system accepts it but, when you enter invalid email, it throws an error message.
In this technique, the input conditions are assigned with causes and the result of
these input conditions with effects.

Cause-Effect graph technique is based on a collection of requirements and used


to determine minimum possible test cases which can cover a maximum test area
of the software.

The main advantage of cause-effect graph testing is, it reduces the time of test
execution and cost.

This technique aims to reduce the number of test cases but still covers all
necessary test cases with maximum coverage to achieve the desired application
quality.

Cause-Effect graph technique converts the requirements specification into a


logical relationship between the input and output conditions by using logical
operators like AND, OR and NOT.

Notations used in the Cause-Effect Graph


AND - E1 is an effect and C1 and C2 are the causes. If both C1 and C2 are true,
then effect E1 will be true.

OR - If any cause from C1 and C2 is true, then effect E1 will be true.

NOT - If cause C1 is false, then effect E1 will be true.

Mutually Exclusive - When only one cause is true.

For example, consider a requirement that the character in column 1 must be 'A'
or 'B' and the character in column 2 must be a digit; if both conditions hold,
the file update is made, otherwise the appropriate message is displayed.

Causes are:

o C1 - Character in column 1 is 'A'
o C2 - Character in column 1 is 'B'
o C3 - Character in column 2 is a digit

Effects:

o E1 - Update made: (C1 OR C2) AND C3
o E2 - Displays Message X: (NOT C1 AND NOT C2)
o E3 - Displays Message Y: (NOT C3)

Where AND, OR, NOT are the logical gates.
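
The logical relationships in the graph can be checked mechanically. The sketch
below is only an illustration: the causes are modelled as hypothetical boolean
inputs, and the three effects are expressed with the same AND, OR and NOT
operators.

#include <cassert>

// Causes (hypothetical boolean inputs):
//   c1 - character in column 1 is 'A'
//   c2 - character in column 1 is 'B'
//   c3 - character in column 2 is a digit
// Effects, expressed with the logical operators from the graph:
bool effectUpdate(bool c1, bool c2, bool c3) { return (c1 || c2) && c3; } // E1
bool effectMessageX(bool c1, bool c2)        { return !c1 && !c2; }       // E2
bool effectMessageY(bool c3)                 { return !c3; }              // E3

int main() {
    // A few combinations derived from the cause-effect graph:
    assert(effectUpdate(true,  false, true));  // 'A' followed by a digit -> update made
    assert(effectUpdate(false, true,  true));  // 'B' followed by a digit -> update made
    assert(!effectUpdate(true, false, false)); // 'A' but no digit -> no update
    assert(effectMessageX(false, false));      // neither 'A' nor 'B' -> message X
    assert(effectMessageY(false));             // second character not a digit -> message Y
    return 0;
}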

State Transition Technique


The general meaning of state transition is, different forms of the same situation,
and according to the meaning, the state transition method does the same. It is
used to capture the behavior of the software application when different input
values are given to the same function.

We all use the ATMs, when we withdraw money from it, it displays account
details at last. Now we again do another transaction, then it again displays
account details, but the details displayed after the second transaction are
different from the first transaction, but both details are displayed by using the
same function of the ATM. So the same function was used here but each time
the output was different, this is called state transition. In the case of testing of a
software application, this method tests whether the function is following state
transition specifications on entering different inputs.

This applies to those types of applications that provide the specific number of
attempts to access the application such as the login function of an application
which gets locked after the specified number of incorrect attempts. Let's see in
detail, in the login function we use email and password, it gives a specific
number of attempts to access the application, after crossing the maximum
number of attempts it gets locked with an error message.

Let's see this in the diagram:


There is a login function of an application which allows a maximum of three
attempts; after exceeding three attempts, the user will be directed to an
error page.
State transition table

STATE   LOGIN            VALIDATION   REDIRECTED TO
S1      First Attempt    Invalid      S2
S2      Second Attempt   Invalid      S3
S3      Third Attempt    Invalid      S5
S4      Home Page
S5      Error Page

In the above state transition table, we see that state S1 denotes first login
attempt. When the first attempt is invalid, the user will be directed to the
second attempt (state S2). If the second attempt is also invalid, then the user
will be directed to the third attempt (state S3). Now if the third and last attempt
is invalid, then the user will be directed to the error page (state S5).

But if the third attempt is valid, then it will be directed to the homepage (state
S4).

Let's see state transition table if third attempt is valid:

STATE   LOGIN            VALIDATION   REDIRECTED TO
S1      First Attempt    Invalid      S2
S2      Second Attempt   Invalid      S3
S3      Third Attempt    Valid        S4
S4      Home Page
S5      Error Page
By using the above state transition table we can perform testing of any software
application. We can make a state transition table by determining desired output,
and then exercise the software system to examine whether it is giving desired
output or not.
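
One way to exercise such a table is to encode the transitions and drive the
system through them. The sketch below assumes the transition rules described
above, including the assumption that a valid attempt at any stage leads to the
home page.

#include <cassert>

// States of the login function described above.
enum State { S1_FIRST, S2_SECOND, S3_THIRD, S4_HOME, S5_ERROR };

// Transition function: an invalid attempt moves to the next attempt (or the
// error page after the third attempt); a valid attempt goes to the home page.
State next(State current, bool validLogin) {
    switch (current) {
        case S1_FIRST:  return validLogin ? S4_HOME : S2_SECOND;
        case S2_SECOND: return validLogin ? S4_HOME : S3_THIRD;
        case S3_THIRD:  return validLogin ? S4_HOME : S5_ERROR;
        default:        return current;   // S4 and S5 are treated as terminal here
    }
}

int main() {
    // First table: every attempt invalid, ending on the error page.
    assert(next(S1_FIRST, false) == S2_SECOND);
    assert(next(S2_SECOND, false) == S3_THIRD);
    assert(next(S3_THIRD, false) == S5_ERROR);

    // Second table: the third attempt is valid, ending on the home page.
    assert(next(S3_THIRD, true) == S4_HOME);
    return 0;
}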

5.What approach would you use for testing strategies? Explain in detail.
Show how black box testing is performed in COTS components?

Testing strategies:

 Black box testing


 White box testing

Black box testing


Black box testing is a technique of software testing which examines the
functionality of software without peering into its internal structure or coding.
The primary source of black box testing is a specification of requirements that is
stated by the customer.

In this method, tester selects a function and gives input value to examine its
functionality, and checks whether the function is giving expected output or not.
If the function produces correct output, then it is passed in testing, otherwise
failed. The test team reports the result to the development team and then tests
the next function. After completing testing of all functions if there are severe
problems, then it is given back to the development team for correction.

Generic steps of black box testing


o The black box test is based on the specification of requirements, so it is
examined in the beginning.
o In the second step, the tester creates a positive test scenario and an adverse
test scenario by selecting valid and invalid input values to check that the
software is processing them correctly or incorrectly.
o In the third step, the tester develops various test cases such as decision table, all
pairs test, equivalent division, error estimation, cause-effect graph, etc.
o The fourth phase includes the execution of all test cases.
o In the fifth step, the tester compares the expected output against the actual
output.
o In the sixth and final step, if there is any flaw in the software, then it is fixed and
tested again.

Test procedure
The test procedure of black box testing is a process in which the tester has
specific knowledge of how the software is supposed to work and develops test
cases to check the accuracy of the software's functionality.

It does not require programming knowledge of the software. All test cases are
designed by considering the input and output of a particular function. A tester
knows the expected output for a particular input, but not how that result is
produced internally. There are various techniques used in black box testing,
such as the decision table technique, boundary value analysis, state transition,
all-pair testing, the cause-effect graph technique, equivalence partitioning,
error guessing, the use case technique and the user story technique. These
techniques are summarized under "Techniques Used in Black Box Testing" below.

Test cases
Test cases are created considering the specification of the requirements. These
test cases are generally created from working descriptions of the software
including requirements, design parameters, and other specifications. For the
testing, the test designer selects both positive test scenario by taking valid input
values and adverse test scenario by taking invalid input values to determine the
correct output. Test cases are mainly designed for functional testing but can
also be used for non-functional testing. Test cases are designed by the testing
team, there is not any involvement of the development team of software.
Techniques Used in Black Box Testing
 Decision Table Technique - a systematic approach in which the various input
   combinations and the corresponding system behavior are captured in a tabular
   form. It is appropriate for functions that have a logical relationship
   between two or more inputs.
 Boundary Value Technique - used to test boundary values; boundary values are
   those that contain the upper and lower limit of a variable. It tests whether
   the software produces the correct output when a boundary value is entered.
 State Transition Technique - used to capture the behavior of the software
   application when different input values are given to the same function. This
   applies to those types of applications that provide a specific number of
   attempts to access the application.
 All-pair Testing Technique - used to test all possible discrete combinations
   of values. This combinational method is used for testing applications that
   use checkbox, radio button and list box inputs.
 Cause-Effect Technique - underlines the relationship between a given result
   and all the factors affecting the result. It is based on a collection of
   requirements.
 Equivalence Partitioning Technique - a technique of software testing in which
   input data is divided into partitions of valid and invalid values, and all
   values within a partition are expected to exhibit the same behavior.
 Error Guessing Technique - a technique in which there is no specific method
   for identifying errors. It is based on the experience of the test analyst,
   who uses that experience to guess the problematic areas of the software.
 Use Case Technique - used to identify test cases from the beginning to the
   end of the system as per the usage of the system. Using this technique, the
   test team creates test scenarios that can exercise the whole software based
   on the functionality of each feature from start to end.

6.Describe the following (a) State based testing(6) (b) Domain testing(7)
(a).State-based Testing:
• Natural representation with finite state machines:
  – States correspond to certain values of the attributes.
  – Transitions correspond to methods.
• The FSM can be used as a basis for testing, e.g. "drive" the class
  through all transitions and verify the response and the resulting
  state.

Four Parts Of State Transition Diagram


There are 4 main components of the state transition model, as below:

1) States that the software might get into

2) Transitions from one state to another

3) Events that cause a transition, like closing a file or withdrawing money

4) Actions that result from a transition (an error message, or being given the
cash)

STATE BASED TESTING DIAGRAM:


State based testing Table:
State                 Correct PIN   Incorrect PIN
S1) Start             S5            S2
S2) 1st attempt       S5            S3
S3) 2nd attempt       S5            S4
S4) 3rd attempt       S5            S6
S5) Access Granted    –             –
S6) Account blocked   –             –

(b) Domain testing(7):

Domain testing is a kind of software testing process in which the software is
tested by giving a minimum number of inputs and evaluating whether it produces
the proper outputs; it is specific to a particular domain. In domain testing, we
test the software by giving appropriate inputs and checking for the expected
outputs from the domain perspective.
Domain testing differs for every specific domain, so we need domain-specific
knowledge in order to test the software.

Example:

Consider the below multiple inputs and proper output scenario

Consider a Halloween games event for kids in which 6 competitions are laid out
and tickets are issued according to age and gender. The ticketing module is to
be tested for the entire functionality of the games exhibition.

Based on the age and the competitions, we have the six scenarios below; a short
test sketch covering them follows the list.

1. Age > 5 and < 10: a boy should participate in Halloween costumes.
2. Age > 5 and < 10: a girl should participate in musical chairs.
3. Age > 10 and < 15: a boy should participate in volleyball.
4. Age > 10 and < 15: a girl should participate in social projects.
5. Age < 15: both boys and girls should participate in making Halloween
cookies.
6. Age > 15: both boys and girls should participate in Halloween decorations.
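
A minimal sketch of domain tests for these scenarios is given below. The helper
competition() is hypothetical, and the overlapping age ranges are resolved in
the order the scenarios are listed, which is an assumption.

#include <cassert>
#include <string>

// Hypothetical ticketing rule built from the six scenarios above; the rules
// are checked in the listed order (an assumption for this sketch).
std::string competition(int age, bool isBoy) {
    if (age > 5 && age < 10)  return isBoy ? "Halloween costumes" : "Musical chairs";
    if (age > 10 && age < 15) return isBoy ? "Volleyball" : "Social projects";
    if (age < 15)             return "Halloween cookies";    // both boys and girls
    return "Halloween decorations";                          // age > 15, both
}

int main() {
    // One domain-specific test per scenario:
    assert(competition(7, true)   == "Halloween costumes");
    assert(competition(7, false)  == "Musical chairs");
    assert(competition(12, true)  == "Volleyball");
    assert(competition(12, false) == "Social projects");
    assert(competition(4, true)   == "Halloween cookies");
    assert(competition(16, false) == "Halloween decorations");
    return 0;
}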

7.What inference can you make from random testing, requirement based testing and
domain testing explains? (13)
8.Explain the various white box techniques with suitable test cases. (13)

White Box Testing


The box testing approach of software testing consists of black box testing and
white box testing. We are discussing here white box testing, which is also known
as glass box testing, structural testing, clear box testing, open box testing
and transparent box testing.

The white box testing contains various tests, which are as follows:

o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program

Path testing
In path testing, we write the flow graphs and test all independent paths.
Writing the flow graph means representing the flow of control through the
program and showing how each part of the program connects to the others, as we
can see in the below image:
Loop testing
In loop testing, we test loops such as while, for, and do-while, etc.,
and also check whether the terminating condition works correctly and
whether the loop iterates the intended number of times.

Condition testing
In this, we will test all logical conditions for both true and false values; that is,
we will verify for both if and else condition.

For example:

if (condition)   // true
{
   .....
   .....
}
else             // false
{
   .....
   .....
}

Test cases must exercise both outcomes of the condition: when the condition is
true the if block should execute, and when it is false the else block should
execute.

Testing based on the memory (size) perspective


The size of the code is increasing for the following reasons:

o Code is not reused: for example, suppose we have four programs in the same
  application and the first ten lines of each program are similar. We could write
  these ten lines as a separate function accessible by all four programs; then,
  if there is a bug, we only need to modify the code in that function rather than
  in every program.
o The developers use logic that could be improved: one programmer may write code
  whose file size is 250 KB, while another programmer could write similar code
  using different logic with a file size of only 100 KB.
o The developer declares many functions and variables that are never used in any
  portion of the code, so the size of the program increases.

For example,

int a = 15;
int b = 20;
String s = "Welcome";
....
.....
....
.....
int p = b;
CreateUser()
{
   ......
   ......
   .....   // 200 lines of code
}

In the above code, we can see that the integer a is never used anywhere in the
program, and the function CreateUser() is never called anywhere in the code.
This leads to unnecessary memory consumption.

We cannot catch this type of mistake by manually reviewing the code, because
the code base is large. So, we use a tool that helps us detect needless
variables and functions; one such tool is Rational Purify.

Test the performance (Speed, response time) of the


program
The application could be slow for the following reasons:

o When inefficient logic is used.

o For conditional cases, when the 'or' and 'and' operators are not used
  appropriately.
o When deeply nested if statements are used where a switch case would be more
  appropriate.
9.Summarize the role of paths in white box testing and explain any two white box
testing designs. (13)
10 Explain the various axioms that allow testers to evaluate Test Adequacy Criteria. (13)

11 (a) Outline the steps in constructing a control flow graph and computing Cyclomatic
complexity with an example. (6)

A Control Flow Graph (CFG) is the graphical representation of control flow or computation
during the execution of programs or applications. Control flow graphs are mostly used in static
analysis as well as compiler applications, as they can accurately represent the flow inside of a
program unit.

General Control Flow Graphs:


Control Flow Graph is represented differently for all statements and loops. Following images
describe it:

1. If-else:
2. while:

3. do-while:

4. for:
Cyclomatic complexity of a code section is the quantitative measure of the number of linearly
independent paths in it. It is a software metric used to indicate the complexity of a program,
and it is computed using the Control Flow Graph of the program. The nodes in the graph indicate
the smallest groups of commands of a program, and a directed edge connects two nodes if the
second command might immediately follow the first command.

For example, if the source code contains no control flow statement, then its cyclomatic
complexity will be 1 and the source code contains a single path. Similarly, if the source code
contains one if condition, then the cyclomatic complexity will be 2, because there will be two
paths: one for true and the other for false.
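
To make the single-if case concrete, here is a small sketch (invented for
illustration) whose control flow graph has exactly two linearly independent
paths, so its cyclomatic complexity is 2 and two test cases cover both paths.

#include <cassert>

// A function with a single if condition: cyclomatic complexity = 2.
int absoluteValue(int x) {
    if (x < 0) {   // the decision node adds one extra path
        x = -x;
    }
    return x;
}

int main() {
    assert(absoluteValue(-5) == 5); // exercises the true branch
    assert(absoluteValue(3) == 3);  // exercises the false branch
    return 0;
}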

(b) Explain about state transition testing.(7)

State Transition Testing is a type of software testing which is performed to check the change in
the state of the application under varying input. The condition of input passed is changed and
the change in state is observed.

State Transition Testing is basically a black box testing technique that is carried out to observe
the behavior of the system or application for different input conditions passed in a sequence.
In this type of testing, both positive and negative input values are provided and the behavior
of the system is observed.

State Transition Testing is basically used where different system transitions are needed to be
tested.
Objectives of State Transition Testing:
The objective of State Transition testing is:

 To test the behavior of the system under varying input.


 To test the dependency on the values in the past.
 To test the change in transition state of the application.
 To test the performance of the system.

Transition States:

Change Mode:
When this mode is activated then the display mode moves from TIME to DATE.

Reset:
When the display mode is TIME or DATE, then reset mode sets them to ALTER TIME or ALTER
DATE respectively.

Time Set:
When this mode is activated, display mode changes from ALTER TIME to TIME.

Date Set:
When this mode is activated, display mode changes from ALTER DATE to DATE.

State Transition Diagram:


State Transition Diagram shows how the state of the system changes on certain inputs.
It has four main components:

 States
 Transition
 Events
 Actions

Advantages of State Transition Testing:


 State transition testing helps in understanding the behavior of the system.
 State transition testing gives the proper representation of the system behavior.
 State transition testing covers all the conditions.

Disadvantages of State Transition Testing:

 State transition testing can not be performed everywhere.


 State transition testing is not always reliable.

12 (a) Discuss in detail about code coverage testing. (6)

Code Coverage :
Code coverage is a software testing metric, also termed code coverage testing, which helps in
determining how much of the source code is tested. It helps in assessing the quality of the test
suite and analyzing how comprehensively the software is verified. In simple terms, code coverage
refers to the degree to which the source code of the software has been tested. Code coverage is
considered a form of white box testing.

At the end of development, every client wants a quality software product, and the development
team is responsible for delivering one. Quality here refers to the product's performance,
functionality, behavior, correctness, reliability, effectiveness, security, and maintainability.
The code coverage metric helps in determining the performance and quality aspects of any
software.

Code Coverage Criteria :


To perform code coverage analysis various criteria are taken into consideration. These are the
major methods/criteria which are considered.

1. Statement Coverage/Block coverage :


The number of statements that have been successfully executed in the program source code.

Statement Coverage = (Number of statements executed)/(Total Number of statements)*100.

2. Decision Coverage/Branch Coverage :


The number of decision control structures that have been successfully executed in the
program source code.

Decision Coverage = (Number of decision/branch outcomes exercised)/(Total number of


decision outcomes in the source code)*100.

3. Function coverage :
The number of functions that are called and executed at least once in the source code.

Function Coverage = (Number of functions called)/(Total number of functions)*100.

4. Condition Coverage/Expression Coverage :


The number of Boolean condition/expression statements executed in the conditional
statement.

Condition Coverage =(Number of executed operands)/(Total Number of Operands)*100.
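
The difference between statement and decision coverage is easiest to see on a tiny example. The
grade() function below is a sketch invented for illustration; the coverage figures follow from
the formulas above.

#include <cassert>
#include <string>

// Example function used to illustrate the coverage formulas above.
std::string grade(int marks) {
    std::string result = "fail";   // statement
    if (marks >= 50) {             // decision with two outcomes (branches)
        result = "pass";           // statement executed only on the true branch
    }
    return result;                 // statement
}

int main() {
    // grade(60) alone executes every statement (100% statement coverage) but
    // exercises only the true outcome of the decision: 1/2 * 100 = 50% branch coverage.
    assert(grade(60) == "pass");

    // Adding grade(40) exercises the false outcome too, raising branch coverage to 100%.
    assert(grade(40) == "fail");
    return 0;
}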

Advantages of Using Code Coverage :


 It helps in determining the performance and quality aspects of any software.
 It helps in evaluating quantitative measure of code coverage.
 It helps in easy maintenance of code base.
 It helps in accessing quality of test suite and analyzing how comprehensively a
software is verified.
 It helps in exposure of bad, dead, and unused code.
 It helps in creating extra test cases to increase coverage.
 It helps in developing the software product faster by increasing its productivity and
efficiency.
 It helps in measuring the efficiency of test implementation.
 It helps in finding new test cases which are uncovered.

Disadvantages of Using Code Coverage :

 Sometimes it fails to cover the code completely and correctly.
 It cannot guarantee that all possible values of a feature are tested with the help of code
coverage.
 High coverage does not by itself ensure how well the covered code has been tested.

(b) Explain mutation testing with an example.(7)

Mutation Testing is a type of software testing in which certain statements of the source code
are changed/mutated to check if the test cases are able to find errors in source code. The goal
of Mutation Testing is ensuring the quality of test cases in terms of robustness that it should
fail the mutated source code.
The changes made in the mutant program should be kept extremely small so that they do not
affect the overall objective of the program. Mutation testing is also called a fault-based
testing strategy, as it involves creating a fault in the program, and it is a type of White Box
Testing which is mainly used for Unit Testing.

Types of Mutation Testing


In Software Engineering, Mutation testing could be fundamentally categorized into 3 types–
statement mutation, decision mutation, and value mutation.

1. Statement Mutation - a developer cuts and pastes a part of the code, the
   outcome of which may be the removal of some lines.
2. Value Mutation - the values of primary parameters are modified.
3. Decision Mutation - control statements (for example, relational or logical
   operators) are changed; a code sketch of such a mutant follows below.
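
As an illustration (not taken from the question paper), consider the sketch below: the
relational operator of a small function is mutated, and a single well-chosen test case kills
the mutant.

#include <cassert>

// Original function under test.
int maxOfTwo(int a, int b) {
    return (a > b) ? a : b;
}

// Decision mutant: the relational operator '>' has been changed to '<'.
int maxOfTwoMutant(int a, int b) {
    return (a < b) ? a : b;
}

int main() {
    // The test case passes on the original program...
    assert(maxOfTwo(2, 5) == 5);

    // ...and it kills the mutant, because the mutated code returns 2 instead of 5.
    // A mutant that survives every test case would indicate a gap in the test suite.
    assert(maxOfTwoMutant(2, 5) != 5);
    return 0;
}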

Advantages of Mutation Testing:


Following are the advantages of Mutation Testing:

 It is a powerful approach to attain high coverage of the source program.


 This testing is capable of comprehensively testing the mutant program.
 Mutation testing brings a good level of error detection to the software developer.
 This method uncovers ambiguities in the source code and has the capacity to detect all
the faults in the program.
 Customers are benefited from this testing by getting a most reliable and stable system.

Disadvantages of Mutation Testing:


On the other side, the following are the disadvantages of Mutant testing:

 Mutation testing is extremely costly and time-consuming, since many mutant
programs need to be generated.
 Since it is time-consuming, it is fair to say that this testing cannot practically be done
without an automation tool.
 Each mutant is run against the same test cases as the original program, so a large number
of mutant programs may need to be tested against the original test suite.
 As this method involves source code changes, it is not at all applicable for Black Box
Testing.
 As this method involves source code changes, it is not at all applicable for Black Box
Testing.

13 Explain the significance of Control flow graph & Cyclomatic complexity in white box
testing with a pseudo code for sum of positive numbers. Also mention the independent
paths with test cases.(13)
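
No worked answer is given for this question above; the following is only a sketch. The pseudo
code for summing the positive numbers is realized as a small C++ function; its two decision
points (the loop condition and the if condition) give V(G) = P + 1 = 3, and one test case is
listed per independent path.

#include <cassert>

// Sum only the positive numbers in an array of length n.
int sumOfPositives(const int numbers[], int n) {
    int sum = 0;                      // node 1
    for (int i = 0; i < n; ++i) {     // node 2: loop decision
        if (numbers[i] > 0) {         // node 3: sign decision
            sum += numbers[i];        // node 4
        }
    }
    return sum;                       // node 5
}

int main() {
    // Independent paths and one test case each:
    //   Path 1: loop body never entered          -> empty input
    //   Path 2: loop entered, if condition false -> only non-positive values
    //   Path 3: loop entered, if condition true  -> at least one positive value
    const int none[1] = {0};
    const int negatives[] = {-2, -7};
    const int mixed[] = {3, -1, 4};

    assert(sumOfPositives(none, 0) == 0);      // Path 1
    assert(sumOfPositives(negatives, 2) == 0); // Path 2
    assert(sumOfPositives(mixed, 3) == 7);     // Path 3
    return 0;
}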

14 Discuss in detail about static testing and structural testing. Also write the difference
between these testing concepts.(13)

Structural testing is a type of software testing which uses the internal design of the software
for testing; in other words, software testing which is performed by a team that knows how the
software was developed is known as structural testing.

Structural testing is closely related to the internal design and implementation of the software,
i.e. it involves development team members in the testing team. It tests different aspects of the
software according to its type. Structural testing is essentially the opposite of behavioral
testing.

Types of Structural Testing:


There are 4 types of Structural Testing:
Advantages of Structural Testing:

 It provides thorough testing of the software.


 It helps in finding out defects at an early stage.
 It helps in elimination of dead code.
 It is not time consuming as it is mostly automated.

Disadvantages of Structural Testing:

 It requires knowledge of the code to perform test.


 It requires training in the tool used for testing.
 Sometimes it is expensive.

Static Testing is a type of software testing method which is performed to check for defects in
software without actually executing the code of the software application, whereas in Dynamic
Testing the code is executed to detect defects.
Static testing is performed in the early stages of development to avoid errors, as it is easier
to find the sources of failures and they can be fixed easily. Errors that cannot be found using
dynamic testing can often be found easily by static testing.
Static Testing Techniques:
There are mainly two type techniques used in Static Testing:

1. Review:
In static testing, review is a process or technique that is performed to find potential defects
in the design of the software. It is a process to detect and remove errors and defects in the
various supporting documents, such as the software requirements specification. People examine
the documents and sort out errors, redundancies and ambiguities.
 Informal:
In informal review the creator of the documents put the contents in front of
audience and everyone gives their opinion and thus defects are identified in the
early stage.
 Walkthrough:
It is basically performed by experienced person or expert to check the defects so
that there might not be problem further in the development or testing phase.
 Peer review:
Peer review means checking documents of one-another to detect and fix the
defects. It is basically done in a team of colleagues.
 Inspection:
Inspection is basically the verification of document the higher authority like the
verification of software requirement specifications (SRS).
2. Static Analysis:
Static Analysis includes the evaluation of the code quality that is written by developers.
Different tools are used to do the analysis of the code and comparison of the same with the
standard.
It also helps in following identification of following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
Static Analysis is of three types:
 Data Flow:
Data flow is related to the stream processing.
 Control Flow:
Control flow is basically how the statements or instructions are executed.
 Cyclomatic Complexity:
Cyclomatic complexity is the measurement of the complexity of the program that
is basically related to the number of independent paths in the control flow graph
of the program.
PART-C

1.Demonstrate the various black box test cases using Equivalence class partitioning and
boundary values analysis to test a module for payroll System. (15)

2.Explain how the covering code logic and paths are used in the role of white box design with
suitable example. (15)

White Box Testing:

 White box testing is also known as glass box testing, structural testing, clear box
testing, open box testing and transparent box testing.
 It tests internal coding and infrastructure of a software focus on checking of
predefined inputs against expected and desired outputs. It is based on inner workings
of an application and revolves around internal structure testing.
 In this type of testing programming skills are required to design test cases. The
primary goal of white box testing is to focus on the flow of inputs and outputs through
the software and strengthening the security of the software.

Code Coverage:
 This is an important unit testing metric.
 In simple terms, the extent to which the source code of a software program or an
application will get executed during testing is what is termed as Code Coverage.
 If the tests execute the entire piece of code including all branches, conditions, or loops,
then we would say that there is complete coverage of all the possible scenarios and
thus the Code Coverage is 100%. To understand this even better, let’s take up an
example.

Given below is a simple code that is used to add two numbers and display the result
depending on the value of the result.

Input a, b
Let c = a + b

If c < 10, print c

Else, print ‘Sorry’

 The above program takes in two inputs i.e. ‘a’ & ‘b’. The sum of both is stored in
variable c. If the value of c is less than 10, then the value of ‘c’ is printed else ‘Sorry’ is
printed.
 Now, if we have some tests to validate the above program with the values of a & b such
that the sum is always less than 10, then the else part of the code never gets executed.
In such a scenario, we would say that the coverage is not complete.

Various reasons make Code Coverage essential and some of those are
listed below:
 It helps to ascertain that the software has lesser bugs when compared to the software
that does not have a good Code Coverage.
 By aiding in improving the code quality, it indirectly helps in delivering a better ‘quality’
software.
 It is a measure that can be used to know the test effectiveness (effectiveness of the unit
tests that are written to test the code).
 Helps to identify those parts of the source code that would go untested.
 It helps to determine if the current testing (unit testing) is sufficient or not and if some
more tests are needed in place as well.

Path testing:
In path testing, we write the flow graphs and test all independent paths. Testing all the
independent paths means that, for example, for a path from main() to a function G, we first set
the parameters and test whether the program is correct along that particular path, and in the
same way we test all the other paths and fix the bugs.
way test all other paths and fix the bugs.
Here we will take a simple example to get a better idea of what basis path testing includes.
In the example, there are a few conditional statements that are executed depending on which
condition is satisfied. There are 3 paths, or conditions, that need to be tested to get the
output:

Path 1: 1,2,3,5,6, 7

Path 2: 1,2,4,5,6, 7

Path 3: 1, 6, 7

Steps for Basis Path testing:

The basic steps involved in basis path testing include

 Draw a control graph (to determine different program paths)


 Calculate Cyclomatic complexity (metrics to determine the number of independent
paths)
 Find a basis set of paths
 Generate test cases to exercise each path

Advantages of Basic Path Testing

 It helps to reduce the redundant tests


 It focuses attention on program logic
 It helps facilitates analytical versus arbitrary case design
 Test cases which exercise basis set will execute every statement in a program at least
once
 Basis path testing helps to determine all faults lying within a piece of code.

3.Demonstrate the various black box test cases using Equivalence class partitioning and
boundary values analysis to test a module for ATM system. (15)

Boundary Testing:
Boundary testing is the process of testing between extreme ends or boundaries
between partitions of the input values.
 So these extreme ends like Start- End, Lower- Upper, Maximum-Minimum, Just
Inside-Just Outside values are called boundary values and the testing is called
“boundary testing”.
 The basic idea in normal boundary value testing is to select input variable
values at their:

1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum

In Boundary Testing, Equivalence Class Partitioning plays a good role.

Boundary Testing comes after the Equivalence Class Partitioning.

Example:
 Imagine, there is a function that accepts a number between 18 to 30, where 18
is the minimum and 30 is the maximum value of valid partition, the other values of this
partition are 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 and 29. The invalid partition consists
of the numbers which are less than 18 such as 12, 14, 15, 16 and 17, and more than 30
such as 31, 32, 34, 36 and 40. Tester develops test cases for both valid and invalid
partitions to capture the behavior of the system on different input conditions.

 The software system passes the test if it accepts a valid number and gives the desired
output; if it does not, the test is unsuccessful. In the other scenario, the software system
should not accept invalid numbers, and if the entered number is invalid, it should display an
error message.
 If the software which is under test, follows all the testing guidelines and specifications
then it is sent to the releasing team otherwise to the development team to fix the
defects.

Equivalence Partitioning

Equivalence Partitioning, or Equivalence Class Partitioning, is a type of black box
testing technique which can be applied to all levels of software testing like unit, integration,
system, etc. In this technique, input data units are divided into equivalent partitions that can
be used to derive test cases, which reduces the time required for testing because of the smaller
number of test cases.

 It divides the input data of software into different equivalence data classes.
 You can apply this technique, where there is a range in the input field.
 ATM Simulation System - Specification (simplified):
o The customer will be required to enter the account number and a PIN; there is no need of an
  ATM card.
o The ATM must provide the following transactions to the customer: cash withdrawals, deposits,
  transfers and balance inquiries. Only one transaction will be allowed in each session.
o The ATM will communicate the transaction to the bank and obtain verification that it was
  allowed by the bank. If the bank determines that the account number or PIN is invalid, the
  transaction is canceled.
o The ATM will have an operator panel that will allow an operator to start and stop the
  servicing of customers. When the machine is shut down, the operator may remove deposit
  envelopes and reload the machine with cash. The operator will be required to enter the total
  cash on hand before starting the system from this panel.
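
For concrete EP and BVA test cases on this system, the withdrawal amount is a natural input to
partition. The limits used below (a minimum of 100 and a maximum of 10000 per withdrawal) are
assumptions made purely for illustration, since the simplified specification does not state
them.

#include <cassert>

// Hypothetical rule assumed for this sketch: a withdrawal amount is accepted
// only when it lies between 100 and 10000 (inclusive).
bool isWithdrawalAccepted(int amount) {
    return amount >= 100 && amount <= 10000;
}

int main() {
    // Equivalence partitions: below the range, inside the range, above the range.
    assert(!isWithdrawalAccepted(50));     // invalid partition: below the range
    assert(isWithdrawalAccepted(5000));    // valid partition: nominal value
    assert(!isWithdrawalAccepted(20000));  // invalid partition: above the range

    // Boundary values around the two limits.
    assert(!isWithdrawalAccepted(99));
    assert(isWithdrawalAccepted(100));
    assert(isWithdrawalAccepted(10000));
    assert(!isWithdrawalAccepted(10001));
    return 0;
}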

4.Explain the basis path testing. State the principles of control flow graph and cyclomatic
complexity. What are the formulas used in cyclomatic complexity? (15)

Basis Path Testing :


Basis Path Testing is a white-box testing technique based on a program's or
module's control structure. A control flow graph is created using this structure, and the
many possible paths in the graph are tested using this structure.
The approach of identifying pathways in the control flow graph that give a foundation
set of execution paths through the program or module is known as basis path testing.
Because this testing is dependent on the program's control structure, it necessitates a
thorough understanding of the program's structure. Four stages are followed to create
test cases using this technique −
 Create a Control Flow Graph.
 Calculate the Graph's Cyclomatic Complexity
 Identify the Paths That Aren't Connected
 Create test cases based on independent paths.
Cyclomatic Complexity:
Cyclomatic complexity is a source code complexity measurement that is
being correlated to a number of coding errors. It is calculated by developing a Control
Flow Graph of the code that measures the number of linearly-independent paths
through a program module.
Lower the Program's cyclomatic complexity, lower the risk to modify and easier to
understand. It can be represented using the below formula:
Cyclomatic complexity = E - N + 2*P
where,
E = number of edges in the flow graph.
N = number of nodes in the flow graph.
P = number of nodes that have exit points
Control flow diagram:

 The control flow graph is a graphical representation of a program's control


structure.

 It uses the elements named process blocks, decisions, and junctions.

 The flow graph is similar to the earlier flowchart, with which it is not to be
confused.

 Flow Graph Elements:A flow graph contains four different types of elements.

(1) Process Block (2) Decisions (3) Junctions (4) Case Statements

There are three methods of computing Cyclomatic complexities.

Method 1: Total number of regions in the flow graph is a Cyclomatic complexity.

Method 2: The Cyclomatic complexity, V (G) for a flow graph G can be defined as

V (G) = E - N + 2

Where: E is total number of edges in the flow graph. N is the total number of nodes in
the flow graph.

Method 3: The Cyclomatic complexity V (G) for a flow graph G can be defined as

V (G) = P + 1

Where: P is the total number of predicate nodes contained in the flow G

Example
Consider the code snippet below, for which we will conduct basis path testing:

#include <iostream>
using namespace std;

int main() {
    int num1 = 6;
    int num2 = 9;
    if (num2 == 0) {
        cout << "num1/num2 is undefined" << endl;
    } else {
        if (num1 > num2) {
            cout << "num1 is greater" << endl;
        } else {
            cout << "num2 is greater" << endl;
        }
    }
    return 0;
}

Step 1: Draw the control flow graph

The control flow graph of the code above will be as follows

Step 2: Calculate cyclomatic complexity

The cyclomatic complexity of the control flow graph above will be V (G) = E - N + 2P = 3,
matching the three independent paths identified in Step 3 below,
where,

 E = The number of edges in the control flow graph.


 N = The number of nodes in the control flow graph.
 P = The number of connected components in the control flow graph.

Step 3: Identify independent paths

 Path 1: 1A-2B-3C-4D-5F-9
 Path 2: 1A-2B-3C-4E-6G-7I-9
 Path 3: 1A-2B-3C-4E-6H-8J-9

Step 4: Design test cases

The test cases to execute all the paths above will be as follows, one per independent path
(num1 and num2 are treated as the inputs):

 num1 = 6, num2 = 0  ->  expected output: "num1/num2 is undefined"
 num1 = 9, num2 = 6  ->  expected output: "num1 is greater"
 num1 = 6, num2 = 9  ->  expected output: "num2 is greater"

PART – A
1. Give the most effective ad hoc testing techniques.
Ad hoc testing technique is the free form testing technique which does not require the
pre-planned documentation and test cases. This is an informal and creative software testing
technique which requires the prior knowledge about the system functionality.
2. What is security testing? Give some examples.
Security testing is a type of software testing that uncovers vulnerabilities, threats and
risks in a software application and checks that the data and resources of the system are
protected from possible intruders. Examples include vulnerability scanning, penetration
testing, security auditing and risk assessment.
3. Show the approaches you use to do website testing.
Web application testing is a software testing technique exclusively adopted to test
applications that are hosted on the web, in which the application's interfaces and other
functionalities are tested.
4. Can you judge on the reason for system testing?
System testing is directly associated with the System design phase. System tests check
the entire system functionality and the communication of the system under development with
external systems. Most of the software and hardware compatibility issues can be uncovered
during system test execution.
5. List out the objectives of configuration testing.
• Show that all the configuration changing commands and menus work properly
• Show that all interchangeable devices are really interchangeable, and that they each
enter the proper states for the specified conditions.
• Show that the systems’ performance level is maintained when devices are interchanged,
or when they fail.
6. Analyze on when to do the regression testing and smoke testing?
Regression testing is not a level of testing; it is the retesting of software that occurs
when changes are made, to ensure that the new version of the software has retained the
capabilities of the old version and that no new defects have been introduced due to the changes.
SMOKE TESTING, also known as "Build Verification Testing", is a type of software
testing that comprises a non-exhaustive set of tests aiming to ensure that the most
important functions work. The result of this testing is used to decide whether a build is stable
enough to proceed with further testing.
7. Compare functional Testing from non-functional Testing.
Functional tests are black box in nature. The focus is on the inputs and the proper outputs
for each function. Improper and illegal inputs must also be handled by the system, and the
system behavior under such circumstances must be observed. All functions must be tested.
Non-functional testing checks aspects of the system other than its functions, such as
performance, usability and compatibility. For example, compatibility testing is a type of
non-functional testing that checks whether the software is capable of running on different
hardware, operating systems, applications, network environments or mobile devices.
8. Define unit Test. Give example.
Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually and independently scrutinized for proper operation.
Unit testing can be done manually but is often automated. For example, if you are testing
whether a function, loop or statement in a program is working properly, that is called unit
testing.
9. Show the test cases applied for acceptance testing.
During acceptance test the development organization must show that the software meets
all of the client’s requirements. Very often final payments for system development depend on the
quality of the software as observed during the acceptance test.
10. List out the types of system Testing.
1. Performance Testing
2. Load Testing
3. Stress Testing
4. Scalability Testing.
11. List the element of the engineering disciplines.
Electrical Engineering
Mechanical Engineering
Civil Engineering
Chemical Engineering
Biomedical Engineering
12. Compare the process of testing and debugging.

Testing vs Debugging:

Testing: Testing is the process to find bugs and errors.
Debugging: Debugging is the process to correct the bugs found during testing.

Testing: It is the process to identify the failure of implemented code.
Debugging: It is the process to give the absolution to code failure.

Testing: Testing is the display of errors.
Debugging: Debugging is a deductive process.

Testing: Testing is done by the tester.
Debugging: Debugging is done by either the programmer or the developer.

Testing: There is no need of design knowledge in the testing process.
Debugging: Debugging can't be done without proper design knowledge.

Testing: Testing can be done by an insider as well as an outsider.
Debugging: Debugging is done only by an insider; an outsider can't do debugging.

Testing: Testing can be manual or automated.
Debugging: Debugging is always manual; debugging can't be automated.

Testing: It is based on different testing levels, i.e. unit testing, integration testing,
system testing, etc.
Debugging: Debugging is based on different types of bugs.

Testing: Testing is a stage of the software development life cycle (SDLC).
Debugging: Debugging is not an aspect of the software development life cycle; it occurs as a
consequence of testing.

13. What is meant by feature defects?


Think of a defect as a deviation from expected software behavior. In other words, if a
website or app is functioning differently from what users would expect from it, that particular
variation would be considered a defect. In software testing circles, the term defect is often used
interchangeably with a bug.

14. Why test cases should be developed for both valid and invalid inputs?
Test cases should be developed for both valid and invalid input conditions. Using test
cases that are based on invalid inputs is very useful for revealing defects, since they may
exercise the code in unexpected ways and identify unexpected software behavior. A related
testing principle states that the probability of the existence of additional defects in a
software component is proportional to the number of defects already detected in that component.

15. Mention the role of test engineer in software development organization


Software test engineers are responsible for designing and implementing test
procedures to ensure that software programs work as intended. They are mostly hired by
software development companies to ensure that products perform to specifications before being
released to the public.

16. How would formulate the cost of defect?


The cost of defects can be measured by the impact of the defects and when we find
them. Earlier the defect is found lesser is the cost of defect. For example if error is found in the
requirement specifications during requirements gathering and analysis, then it is somewhat cheap
to fix it

17. Explain some of the quality metric attributes


Product metrics − Describes the characteristics of the product such as size, complexity,
design features, performance, and quality level.
Process metrics − These characteristics can be used to improve the development and
maintenance activities of the software.

18. What is a defect? Give example?


The definition of a defect is an imperfection or lacking that causes the person or thing
with the defect to fall short of perfection. An example of a defect is a genetic condition that
causes weakness or death. An example of a defect is faulty wiring that results in a product not
working.

19. Summarize the major components in software development process.


Software development is the process in which designing, programming, documenting, testing,
and bug fixing are done. There are three components of software: the program, the
documentation, and the operating procedures. A computer program is a list of instructions
that tell a computer what to do.

20. Error Vs Defect Vs Failure. Discuss


Testing is the process of identifying defects, where a defect is any variance between
actual and expected results. A mistake in coding is called an error; an error found by the
tester is called a defect; a defect accepted by the development team is called a bug; and when
the build does not meet the requirements, it is a failure.
PART – B

1. Explain the different integration testing strategies for procedures and


functions with suitable diagrams.
2. How would you identify the hardware and software for configuration testing and
Explain what testing techniques applied for website testing?
3. State unit test and describe about planning and designing of unit test.
4. Explain the various units in a program considered for unit testing.
REFER 3RD QUESTION.

5. Differentiate alpha testing from beta testing and discuss in detail about the phases in
which alpha and beta testing is done, In what way it is related to milestone and deliverable.
6. Summarize the issues that arise in class testing and explain about compatibility and
documentation testing.

User compatibility and documentation testing:

7. Determine and prepare the test cases for acceptance, usability and accessibility testing.
Acceptance Testing is a method of software testing where a system is tested for
acceptability. The major aim of this test is to evaluate the compliance of the system with the
business requirements and assess whether it is acceptable for delivery or not.
Usability Testing is a testing technique used to evaluate how easily the user can use the software.

In simple words, it checks the user-friendliness of the software. It is also called UX testing (user
experience testing) because it observes the experience a user has while interacting with a
software.

Usually, the customers perform this usability testing, and the organization which creates the
software collects feedback and metrics from these tests and makes changes in the software
application.

Usability Testing Test Cases


Some sample test cases for usability testing for a website :
8(a). Describe in detail about the internationalization testing and its designing and planning
Internationalization testing is the process of verifying that the application under test works
uniformly across multiple regions and cultures.
The main purpose of internationalization testing is to check whether the code can handle all international
support without breaking functionality, which might otherwise cause data loss or data integrity issues.
Globalization testing verifies that the product functions properly with any locale settings.
Internationalization (i18n) Testing at Front end:
To perform internationalization testing, the testers have to focus on language, culture and
region, dates, and important events.
 First testing should be done on a user interface such as alignment of texts, menus, buttons,
dialog boxes, images, toolbar, prompt and alert messages.
 Second testing on content localization and feature should be performed on language
specific properties files and on the region where the particular feature is enabled or disabled.
 Third locale/culture awareness testing should be performed on dates & number formats
such as currencies, calendars, time, telephone number, zip code, etc.
 Finally, testing on file transfer and rendering should be performed to make sure that
the file transfers are successful and to check whether the scripts used by the website are
correctly displayed.
The above steps, at a high level, covers front end testing.
Internationalization (i18n) Testing at Back end:
This process involves enabling the back end of a website to handle character encoding, form data
submission, different languages and currencies and site search.
Testing at the back end requires an understanding of Content Management System (CMS) that is
used to store, author and publish content on the websites. Thorough understanding of the
database is highly important for testing at the back end.
Recent version of databases and Content Management System are already internationalized.

Internationalization typically entails:


1. Designing and developing the application such that it simplifies the deployment of
localization and internationalization of the application. This includes taking care of
proper rendering of characters in various languages, string concatenation etc. which can
be done by using Unicode during development
2. Taking care of the big picture while developing the application in order to support
bidirectional text; for identifying languages we need to add markup in our DTD. Also,
we use CSS to support vertical text or other non-Latin typographic features.
3. The code should be able to support local and regional language and also other cultural
preferences. This involves using predefined localization data and features from existing
libraries. Date time formats, local calendar holidays, numeric formats, data presentation,
sorting, data alignment, name and address displaying format etc.
4. Making localizable elements separate from the source code so that code is independent.
And then as per the user’s requirement, localized content can be loaded based on their
preferences.
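As a rough illustration of how some of these checks can be automated, the Python sketch below verifies that every supported locale provides every message key and flags untranslated copies. The MESSAGES resource bundles, the locale codes and the key names are hypothetical examples, not taken from any specific product.

# A minimal sketch of an automated i18n completeness check (assumed data).
import unittest

SUPPORTED_LOCALES = ["en_US", "de_DE", "ja_JP"]

# Hypothetical resource bundles: every key must exist for every locale.
MESSAGES = {
    "en_US": {"login.title": "Sign in", "login.submit": "Submit"},
    "de_DE": {"login.title": "Anmelden", "login.submit": "Absenden"},
    "ja_JP": {"login.title": "サインイン", "login.submit": "送信"},
}


class I18nCompletenessTest(unittest.TestCase):
    def test_every_locale_has_every_key(self):
        # Use the union of all keys as the reference set.
        all_keys = set()
        for bundle in MESSAGES.values():
            all_keys.update(bundle)
        for locale in SUPPORTED_LOCALES:
            missing = all_keys - set(MESSAGES.get(locale, {}))
            self.assertFalse(missing, f"{locale} is missing keys: {missing}")

    def test_no_hard_coded_english_in_other_bundles(self):
        # Weak heuristic: translated values should differ from English
        # for at least some keys (catches untranslated copy-paste).
        english = MESSAGES["en_US"]
        for locale in ("de_DE", "ja_JP"):
            identical = [k for k, v in MESSAGES[locale].items() if v == english.get(k)]
            self.assertNotEqual(len(identical), len(english),
                                f"{locale} appears to be an untranslated copy")


if __name__ == "__main__":
    unittest.main()

A completeness check of this kind is often the cheapest way to catch missing translations before any UI-level locale testing begins.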
Advantages of Globalization Testing

Following are the benefits of globalization testing:

o This testing helps deliver software applications or other content to multiple locales.
o This testing ensures that the application can be used in various languages without the need
to rewrite the entire software code.
o It improves the code design and the quality of the product.
o It enhances the customer base around the world.
o This testing helps us to decrease the cost and time needed for localization testing.
o This testing provides more scalability and flexibility.

Disadvantages of Globalization Testing

The disadvantages of globalization testing are as follows:

o The test engineer might face schedule challenges.
o We need a domain expert to perform globalization testing.
o We need to hire a local translator, which makes this procedure costlier.

(b) Present an outline of testing object oriented systems


Once a program code is written, it must be tested to detect and subsequently handle all errors in
it. A number of schemes are used for testing purposes.
Another important aspect is the fitness of purpose of a program that ascertains whether the
program serves the purpose which it aims for. The fitness defines the software quality.

Testing Object-Oriented Systems

Testing is a continuous activity during software development. In object-oriented systems,


testing encompasses three levels, namely, unit testing, subsystem testing, and system testing.
Unit Testing
In unit testing, the individual classes are tested. It is seen whether the class attributes are
implemented as per design and whether the methods and the interfaces are error-free. Unit
testing is the responsibility of the application engineer who implements the structure.
Subsystem Testing
This involves testing a particular module or a subsystem and is the responsibility of the
subsystem lead. It involves testing the associations within the subsystem as well as the
interaction of the subsystem with the outside. Subsystem tests can be used as regression tests for
each newly released version of the subsystem.
System Testing
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.

Object-Oriented Testing Techniques

Grey Box Testing


The different types of test cases that can be designed for testing object-oriented programs are
called grey box test cases. Some of the important types of grey box testing are −
 State model based testing − This encompasses state coverage, state transition coverage,
and state transition path coverage.
 Use case based testing − Each scenario in each use case is tested.
 Class diagram based testing − Each class, derived class, associations, and aggregations
are tested.
 Sequence diagram based testing − The methods in the messages in the sequence
diagrams are tested.
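As an illustration of state model based testing, the sketch below uses a hypothetical Account class with OPEN, FROZEN and CLOSED states (not from the original text); one test covers a valid transition path and another covers an invalid transition.

# A minimal state model based test sketch for an assumed Account class.
import unittest


class Account:
    def __init__(self):
        self.state = "OPEN"

    def freeze(self):
        if self.state != "OPEN":
            raise ValueError("can only freeze an open account")
        self.state = "FROZEN"

    def close(self):
        if self.state not in ("OPEN", "FROZEN"):
            raise ValueError("account already closed")
        self.state = "CLOSED"


class AccountStateTransitionTest(unittest.TestCase):
    def test_valid_transition_path(self):
        # Covers the transition path OPEN -> FROZEN -> CLOSED.
        acc = Account()
        acc.freeze()
        self.assertEqual(acc.state, "FROZEN")
        acc.close()
        self.assertEqual(acc.state, "CLOSED")

    def test_invalid_transition_is_rejected(self):
        # Covers an illegal transition: CLOSED -> FROZEN must raise.
        acc = Account()
        acc.close()
        with self.assertRaises(ValueError):
            acc.freeze()


if __name__ == "__main__":
    unittest.main()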
Techniques for Subsystem Testing
The two main approaches of subsystem testing are −
 Thread based testing − All classes that are needed to realize a single use case in a
subsystem are integrated and tested.
 Use based testing − The interfaces and services of the modules at each level of
hierarchy are tested. Testing starts from the individual classes to the small modules
comprising of classes, gradually to larger modules, and finally all the major subsystems.
Categories of System Testing
 Alpha testing − This is carried out by the testing team within the organization that
develops software.
 Beta testing − This is carried out by select group of co-operating customers.
 Acceptance testing − This is carried out by the customer before accepting the
deliverables
9. Discuss the need for various levels of testing.
Tests are grouped together based on where they are added in the SDLC or by the level of
detail they contain. In general, there are four levels of testing: unit testing, integration testing,
system testing, and acceptance testing. The purpose of levels of testing is to make software
testing systematic and to easily identify all possible test cases at a particular level.

There are many different testing levels which help to check behavior and performance for
software testing. These testing levels are designed to recognize missing areas and reconciliation
between the development lifecycle states. In SDLC models there are characterized phases such
as requirement gathering, analysis, design, coding or execution, testing, and deployment. All
these phases go through the process of software testing levels.

Levels of Testing

There are mainly four Levels of Testing in software testing :

1. Unit Testing : checks if software components are fulfilling functionalities or not.


2. Integration Testing : checks the data flow from one module to other modules.
3. System Testing : evaluates both functional and non-functional needs for the testing.
4. Acceptance Testing : checks the requirements of a specification or contract are met as
per its delivery.

Each of these testing levels has a specific purpose. These testing level provide value to the
software development lifecycle.

1) Unit testing:

A Unit is the smallest testable portion of a system or application which can be compiled, linked,
loaded, and executed. This kind of testing helps to test each module separately.

The aim is to test each part of the software by separating it. It checks whether each component
fulfils its functionality or not. This kind of testing is performed by developers.

2) Integration testing:

Integration means combining. For Example, In this testing phase, different software modules are
combined and tested as a group to make sure that integrated system is ready for system testing.

Integrating testing checks the data flow from one module to other modules. This kind of testing
is performed by testers.

3) System testing:
System testing is performed on a complete, integrated system. It allows checking system’s
compliance as per the requirements. It tests the overall interaction of components. It involves
load, performance, reliability and security testing.

System testing is most often the final test to verify that the system meets the specification. It
evaluates both functional and non-functional needs for the testing.

4) Acceptance testing:

Acceptance testing is a test conducted to find if the requirements of a specification or contract


are met as per its delivery. Acceptance testing is basically done by the user or customer.
However, other stakeholders can be involved in this process.

Other Types of Testing:

 Regression Testing
 Buddy Testing
 Alpha Testing
 Beta Testing

10. How would you classify integration testing and system testing?
System Testing:
While developing a software or application product, it is tested at the final stage as a whole by
combining all the product modules, and this is called System Testing. The primary aim of
conducting this test is to verify that it fulfils the customer/user requirement specification. It is also
called an end-to-end test, as it is performed at the end of the development. This testing does
not depend on system implementation; in simple words, the system tester doesn't know which
technique among procedural and object-oriented was used to implement the system.
This testing is classified into functional and non-functional requirements of the system. In
functional testing, the testing is similar to black-box testing which is based on specifications
instead of code and syntax of the programming language used. On the other hand, in non-
functional testing, it checks for performance and reliability through generating test cases in the
corresponding programming language.
Integration Testing:
This testing is performed on the collection of the modules of the software, where the relationships and the
interfaces between the different components are also tested. It needs coordination between the
project-level activities of integrating the constituent components together.
Integration and integration testing must adhere to a build plan for the defined
integration, so that bugs are identified in the early stages. An integrator or
integration tester must have programming knowledge, unlike the system tester.
Difference between System Testing and Integration Testing :

1. Basic: System Testing tests the finished product as a whole; Integration Testing validates the collection of modules and the interfaces between them.

2. Performed: System Testing is performed after integration testing; Integration Testing is performed after unit testing.

3. Requires: System Testing needs knowledge of the overall system behaviour and requirements rather than of the internal structure or programming language; Integration Testing needs knowledge of the interlinked modules, their interaction, and the programming language.

4. Emphasis: System Testing emphasises the behaviour of all the modules (the system functionalities) as a whole; Integration Testing emphasises the interfaces between individual modules.

5. Covers: System Testing covers functional as well as non-functional tests; Integration Testing covers only functional testing.

6. Test cases: System test cases are created to imitate real-life scenarios; integration test cases are built to simulate the interaction between two modules.

7. Approaches: System Testing uses sanity, regression, usability, retesting, maintenance and performance tests; Integration Testing uses big-bang, incremental and functional approaches.

8. Executed: System Testing is executed only by test engineers; Integration Testing is executed by test engineers as well as developers.

11. Describe in detail about scenario testing and performance testing.

Scenario Testing

Scenario testing is a software testing technique that makes the best use of scenarios. Scenarios help
to test a complex system better; the scenarios should be credible and easy to evaluate.

Methods in Scenario Testing:

 System scenarios
 Use-case and role-based scenarios

Strategies to Create Good Scenarios:

 Enumerate possible users, their actions and objectives.
 Evaluate users with a hacker's mindset and list possible scenarios of system abuse.
 List the system events and how the system handles such requests.
 List benefits and create end-to-end tasks to check them.
 Read about similar systems and their behaviour.
 Study complaints about competitors' products and their predecessors.

Scenario Testing Risks:

 When the product is unstable, scenario testing becomes complicated.
 Scenario tests are not designed for test coverage.
 Scenario tests are often heavily documented and used time and again.

Performance Testing

Performance testing is a non-functional testing technique performed to determine the system
parameters in terms of responsiveness and stability under various workloads. Performance
testing measures the quality attributes of the system, such as scalability, reliability and resource
usage.

Performance Testing Techniques:

 Load testing - It is the simplest form of testing, conducted to understand the behaviour of
the system under a specific load. Load testing measures important business-critical
transactions, and the load on the database, application server, etc. is also monitored
(a simple load measurement is sketched after this list).
 Stress testing - It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the expected
maximum.
 Soak testing - Soak Testing also known as endurance testing, is performed to determine
the system parameters under continuous expected load. During soak tests the parameters
such as memory utilization is monitored to detect memory leaks or other performance
issues. The main aim is to discover the system's performance under sustained use.
 Spike testing - Spike testing is performed by increasing the number of users suddenly by
a very large amount and measuring the performance of the system. The main aim is to
determine whether the system will be able to sustain the workload.
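The following Python sketch gives a rough idea of how a simple load or spike measurement can be scripted; process_request is a hypothetical stand-in for the real transaction under test, and the load steps and thresholds are illustrative only.

# A rough load-measurement sketch (assumed workload and stand-in operation).
import time
from concurrent.futures import ThreadPoolExecutor


def process_request(_):
    # Stand-in for the transaction under test (e.g. an HTTP call).
    time.sleep(0.01)
    return True


def run_load(users: int) -> float:
    """Fire `users` concurrent requests and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(process_request, range(users)))
    assert all(results), "some requests failed under load"
    return time.perf_counter() - start


if __name__ == "__main__":
    for load in (10, 50, 100):          # ramp the load up (spike-style steps)
        elapsed = run_load(load)
        print(f"{load:>4} users -> {elapsed:.3f} s total")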
Performance Testing Process:

Attributes of Performance Testing:

 Speed
 Scalability
 Stability
 Reliability

12 (a) Why is it so important to design a test harness for reusability and show the approach
you used for running the unit test and recording the results?
A test harness, also known as an automated test framework, is mostly used by developers. A test
harness provides stubs and drivers, which are small programs that interact with the software under
test and stand in for the missing components.

Test Harness Features:

 To execute a set of tests within the framework or using the test harness
 To key in inputs to the application under test
 Provide a flexibility and support for debugging
 To capture outputs generated by the software under test
 To record the test results(pass/fail) for each one of the tests
 Helps the developers to measure code coverage at code level.
Test Harness Benefits:

 Increased productivity as automation is in place.


 Improved quality of software as automation helps us to be efficient.
 Provides Tests that can be scheduled.
 Can handle complex conditions that testers find difficult to simulate.
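A minimal harness can be sketched as below. The calculate_discount function, the LoyaltyServiceStub and the test data are hypothetical, but the structure (a stub for a missing dependency, a driver that feeds inputs and records pass/fail) matches the features listed above.

# A small harness sketch: stub + driver + result recording (assumed names).
def calculate_discount(order_total, loyalty_service):
    """Unit under test: 10% off for gold customers on orders over 100."""
    if loyalty_service.get_tier() == "gold" and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


class LoyaltyServiceStub:
    """Stub standing in for the real (not yet integrated) loyalty service."""
    def __init__(self, tier):
        self._tier = tier

    def get_tier(self):
        return self._tier


def driver():
    # (description, inputs, expected) triples make the harness data driven.
    cases = [
        ("gold over threshold", (150, LoyaltyServiceStub("gold")), 135.0),
        ("gold under threshold", (80, LoyaltyServiceStub("gold")), 80),
        ("silver over threshold", (150, LoyaltyServiceStub("silver")), 150),
    ]
    results = []
    for name, (total, stub), expected in cases:
        actual = calculate_discount(total, stub)
        results.append((name, "PASS" if actual == expected else "FAIL"))
    for name, verdict in results:          # record the results
        print(f"{verdict}: {name}")


if __name__ == "__main__":
    driver()

Because the stub and the test data are kept separate from the unit under test, the same harness can be reused unchanged when the real loyalty service becomes available.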
b) Tabulate the key difference in integrating procedural oriented system as compared to
object oriented systems.
Procedural Oriented Programming vs Object Oriented Programming:

1. In procedural programming, the program is divided into small parts called functions; in object oriented programming, the program is divided into small parts called objects.
2. Procedural programming follows a top-down approach; object oriented programming follows a bottom-up approach.
3. There are no access specifiers in procedural programming; object oriented programming has access specifiers like private, public, protected etc.
4. Adding new data and functions is not easy in procedural programming; it is easy in object oriented programming.
5. Procedural programming does not have any proper way of hiding data, so it is less secure; object oriented programming provides data hiding, so it is more secure.
6. In procedural programming, overloading is not possible; overloading is possible in object oriented programming.
7. In procedural programming, functions are more important than data; in object oriented programming, data is more important than functions.
8. Procedural programming is based on the unreal world; object oriented programming is based on the real world.
9. Examples of procedural languages: C, FORTRAN, Pascal, Basic etc.; examples of object oriented languages: C++, Java, Python, C# etc.

13 (a) Describe “The Class as a Testable Unit” in detail.


Unit testing, a testing technique using which individual modules are tested to determine if there
are any issues by the developer himself. It is concerned with functional correctness of the
standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.

Unit Testing - Advantages:

 Reduces Defects in the Newly developed features or reduces bugs when changing the
existing functionality.
 Reduces Cost of Testing as defects are captured in very early phase.
 Improves design and allows better refactoring of code.
 Unit Tests, when integrated with build gives the quality of the build as well.

Unit Testing Lifecycle:


Unit Testing Techniques:

 Black Box Testing - using which the user interface, inputs and outputs are tested.
 White Box Testing - used to test the behaviour of each individual function.
 Gray Box Testing - used to execute test suites and apply risk-based assessment methods.
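As a brief sketch of a class treated as the unit under test, the example below uses Python's unittest with a fresh fixture per test. The ShoppingCart class and its rules are hypothetical and stand in for whatever class is being tested.

# A sketch of class-level unit testing (assumed ShoppingCart class).
import unittest


class ShoppingCart:
    def __init__(self):
        self._items = {}

    def add(self, name, price, qty=1):
        if price < 0 or qty < 1:
            raise ValueError("invalid price or quantity")
        self._items[name] = self._items.get(name, 0) + qty * price

    def total(self):
        return sum(self._items.values())


class ShoppingCartTest(unittest.TestCase):
    def setUp(self):
        # A fresh object per test keeps the tests isolated from each other.
        self.cart = ShoppingCart()

    def test_total_of_added_items(self):
        self.cart.add("pen", 2.5, qty=2)
        self.cart.add("book", 10.0)
        self.assertAlmostEqual(self.cart.total(), 15.0)

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            self.cart.add("pen", -1.0)


if __name__ == "__main__":
    unittest.main()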

(b)Explain the planning, design and execution of unit tests.

14 (a) Explain about the various types of System Testing and its importance with example.
System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements.
In system testing, integration testing passed components are taken as input. The goal of
integration testing is to detect any irregularity between the units that are integrated together.
System testing detects defects within both the integrated units and the whole system. The result
of system testing is the observed behavior of a component or a system when it is tested.
System Testing is carried out on the whole system in the context of either system requirement
specifications or functional requirement specifications or in the context of both. System testing
tests the design and behavior of the system and also the expectations of the customer. It is
performed to test the system beyond the bounds mentioned in the software requirements
specification (SRS).
System Testing is basically performed by a testing team that is independent of the development
team, which helps to test the quality of the system impartially. It covers both functional and
non-functional testing.
System Testing is a black-box testing technique.
System Testing is performed after the integration testing and before the acceptance testing.
System Testing Process:
System Testing is performed in the following steps:
 Test Environment Setup:
Create testing environment for the better quality testing.
 Create Test Case:
Generate test case for the testing process.
 Create Test Data:
Generate the data that is to be tested.
 Execute Test Case:
After the generation of the test case and the test data, test cases are executed.
 Defect Reporting:
Defects in the system are detected.
 Regression Testing:
It is carried out to test the side effects of the testing process.
 Log Defects:
Defects are logged and then fixed in this step.
 Retest:
If the test is not successful then again test is performed.

Types of System Testing:


 Performance Testing:
Performance Testing is a type of software testing that is carried out to test the speed,
scalability, stability and reliability of the software product or application.
 Load Testing:
Load Testing is a type of software Testing which is carried out to determine the behavior of
a system or software product under extreme load.
 Stress Testing:
Stress Testing is a type of software testing performed to check the robustness of the system
under the varying loads.
 Scalability Testing:
Scalability Testing is a type of software testing which is carried out to check the
performance of a software application or system in terms of its capability to scale up or
scale down the number of user request load.

(b) What is regression testing? Outline the issues to be addressed for developing test cases
to perform regression testing.
Regression Testing is defined as a type of software testing to confirm that a recent program or
code change has not adversely affected existing features. Regression Testing is nothing but a full
or partial selection of already executed test cases which are re-executed to ensure existing
functionalities work fine.
This testing is done to make sure that new code changes should not have side effects on the
existing functionalities. It ensures that the old code still works once the latest code changes are
done.

Need of Regression Testing

The need for Regression Testing mainly arises whenever there is a requirement to change the
code and we need to test whether the modified code affects other parts of the software application
or not. Moreover, regression testing is needed when a new feature is added to the software
application, and for defect fixing as well as performance issue fixing.

How to do Regression Testing

In order to do Regression Testing process, we need to first debug the code to identify the bugs.
Once the bugs are identified, required changes are made to fix it, then the regression testing is
done by selecting relevant test cases from the test suite that covers both modified and affected
parts of the code.
Software maintenance is an activity which includes enhancements, error corrections,
optimization and deletion of existing features. These modifications may cause the system to
work incorrectly. Therefore, Regression Testing becomes necessary. Regression Testing can be
carried out using the following techniques:
Retest All

 This is one of the methods for Regression Testing in which all the tests in the existing test
bucket or suite should be re-executed. This is very expensive as it requires huge time and
resources.

Regression Test Selection

Regression Test Selection is a technique in which some selected test cases from test suite are
executed to test whether the modified code affects the software application or not. Test cases are
categorized into two parts, reusable test cases which can be used in further regression cycles and
obsolete test cases which can not be used in succeeding cycles.
Prioritization of Test Cases

 Prioritize the test cases depending on business impact, critical & frequently used
functionalities. Selection of test cases based on priority will greatly reduce the regression
test suite.

Selecting test cases for regression testing

It was found from industry data that a good number of the defects reported by customers were
due to last minute bug fixes creating side effects and hence selecting the Test Case for regression
testing is an art and not that easy. Effective Regression Tests can be done by selecting the
following test cases –

 Test cases which have frequent defects


 Functionalities which are more visible to the users
 Test cases which verify core features of the product
 Test cases of Functionalities which has undergone more and recent changes
 All Integration Test Cases
 All Complex Test Cases
 Boundary value test cases
 A sample of Successful test cases
 A sample of Failure test cases
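A simple way to implement this kind of selection and prioritisation can be sketched as follows; the test case records, module names and priority scheme are hypothetical examples, not a prescribed scheme.

# An illustrative regression test selection sketch (assumed test metadata).
TEST_CASES = [
    {"id": "TC01", "module": "payments", "priority": 1, "defect_prone": True},
    {"id": "TC02", "module": "reports",  "priority": 3, "defect_prone": False},
    {"id": "TC03", "module": "login",    "priority": 1, "defect_prone": False},
    {"id": "TC04", "module": "payments", "priority": 2, "defect_prone": False},
]


def select_regression_suite(changed_modules, max_priority=2):
    """Pick test cases that touch changed modules, have frequent defects,
    or are high priority (lower number = higher priority)."""
    selected = [
        tc for tc in TEST_CASES
        if tc["module"] in changed_modules
        or tc["defect_prone"]
        or tc["priority"] <= max_priority
    ]
    # Run the highest-priority cases first.
    return sorted(selected, key=lambda tc: tc["priority"])


if __name__ == "__main__":
    for tc in select_regression_suite(changed_modules={"payments"}):
        print(tc["id"], tc["module"], "priority", tc["priority"])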

Challenges in Regression Testing:

Following are the major testing problems for doing regression testing:

 With successive regression runs, test suites become fairly large. Due to time and budget
constraints, the entire regression test suite cannot be executed
 Minimizing the test suite while achieving maximum Test coverage remains a challenge
 Determination of frequency of Regression Tests, i.e., after every modification or every
build update or after a bunch of bug fixes, is a challenge.
PART – C

1. (a) Write the importance of security testing and explain the consequences of security
breaches, also write the various areas which have to be focused on during security testing.

(b) State the need for integration testing in procedural code.

Integration tests:

Integration test for procedural code has two major goals:

(i) to detect defects that occur on the interfaces of units;

(ii) to assemble the individual units into working subsystems and finally a
complete system that is ready for system test.
2. Case Study: Several kinds of tests for a web application. Abstract: A UK based company
entrusted us to test this project. It is a web application for the government to collect data and
calculate them to prioritize all the tasks. Description: The client is from Hertfordshire in the UK; the
project is an application for the government. In fact it includes two parts: a web site for data
collection and presentation purposes and, in parallel, a Windows application for administration purposes.
Here the task is ensuring the quality of the web application, which includes many aspects, such as
functional correctness, performance acceptance, UI appropriateness and so on. Moreover, for
testing functionality, we had to use the Windows application to edit users' services and other data.
The client only gave us the software requirement specification and the applications to be tested;
there was not any test plan, test strategy, test cases, or even a test termination criterion. On the one hand,
we had to spend much time communicating with the client to get clarity about some important
points; on the other hand, we had to get familiar with the application by operating it and reading the
requirements. Then, how to improve the efficiency of regression testing?
3 (a) What is security testing? Explain its importance.

Security testing is an integral part of software testing, which is used to discover the weaknesses,
risks, or threats in the software application; it helps us to stop malicious attacks from outsiders
and ensures the security of our software applications.

The primary objective of security testing is to find all the potential ambiguities and
vulnerabilities of the application so that the software does not stop working. Performing
security testing helps us to identify all the possible security threats and also helps the
programmer to fix those errors.

It is a testing procedure which is used to ensure that the data will be safe and that the software
continues to work as intended.

Principle of Security testing

Here, we will discuss the following aspects of security testing:

o Availability
o Integrity
o Authorization
o Confidentiality
o Authentication
o Non-repudiation

Types of Security testing

As per Open Source Security Testing techniques, we have different types of security testing
which as follows:

o Security Scanning
o Risk Assessment
o Vulnerability Scanning
o Penetration testing
o Security Auditing
o Ethical hacking
o Posture Assessment

Security Scanning

Security scanning can be done with both automated testing and manual testing. This scanning is
used to find vulnerabilities or unwanted file modifications in a web-based application, website,
network, or file system. After that, it delivers results which help us to reduce those threats.
Security scanning is needed for such systems, and the type of scanning depends on the
structure they use.

Risk Assessment

To moderate the risk of an application, we go for risk assessment. In this, we explore the
security risks which can be detected in the organization. The risks can be divided into
three levels: high, medium, and low. The primary purpose of the risk assessment
process is to assess the vulnerabilities and control the significant threats.

Vulnerability Scanning

Vulnerability scanning uses an application to discover and generate a list of all the systems
connected to a network, including desktops, servers, laptops, virtual machines, printers, switches,
and firewalls. Vulnerability scanning can be performed with an automated application, and it also
identifies the software and systems which have known security vulnerabilities.
Penetration testing

Penetration testing is a security exercise in which a cyber-security professional tries to identify
and exploit weaknesses in the computer system. The primary objective of this testing is to
simulate attacks, find the loopholes in the system, and thereby protect it from intruders who
could otherwise take advantage of them.

Security Auditing

Security auditing is a structured method for evaluating the security measures of the organization.
In this, we do an internal review of the application and of the control systems, looking
for security faults.

Ethical hacking

Ethical hacking is used to discover weaknesses in the system and helps the organization to fix
those security loopholes before a malicious hacker exposes them. Ethical hacking helps to
improve the security posture of the organization, because ethical hackers use the same tricks,
tools, and techniques that malicious hackers would use, but with the approval of an authorised
person.

The objective of ethical hacking is to enhance security and to protect the systems from malicious
users' attacks.

Posture Assessment

It is a combination of ethical hacking, risk assessment, and security scanning, which together
give a complete picture of the security posture of an organization.

Why security testing is essential for web applications

At present, web applications are growing day by day, and most web applications are at risk.
Here we are going to discuss some common weaknesses of web applications.

o Client-side attacks
o Authentication
o Authorization
o Command execution
o Logical attacks
o Information disclosure

Client-side attacks

A client-side attack means that some illegitimate execution of external code occurs in the web
application. Data spoofing can also take place, where the user believes that the particular data
shown by the web application is valid even though it actually comes from an external source.
Note: Here, Spoofing is a trick to create duplicate websites or emails.
Authentication

Authentication covers attacks that target the web application's methods of authenticating user
identity, through which user account identities can be stolen. Broken or incomplete authentication
allows the attacker to access functionality or sensitive data without performing the correct
authentication.

For example, in a brute force attack the primary purpose is to gain access to a web application:
the attackers repeatedly try large numbers of usernames and passwords until they get in. The most
effective way to block brute-force attacks is to lock the account automatically after a defined
number of incorrect password attempts.
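A hedged sketch of an automated security check for this lockout rule is shown below; LoginService and its three-attempt limit are hypothetical stand-ins for the real authentication component.

# A sketch of a security test for an assumed account-lockout rule.
import unittest


class LoginService:
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self._failures = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return False
        if password == self._password:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True
        return False


class BruteForceLockoutTest(unittest.TestCase):
    def test_account_locks_after_repeated_failures(self):
        svc = LoginService(password="s3cret")
        for _ in range(LoginService.MAX_ATTEMPTS):
            self.assertFalse(svc.login("wrong-guess"))
        # Even the correct password must now be rejected: the account is locked.
        self.assertTrue(svc.locked)
        self.assertFalse(svc.login("s3cret"))


if __name__ == "__main__":
    unittest.main()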

Authorization

Authorization comes into the picture whenever intruders try to retrieve sensitive information
from the web application illegally.

For example, a typical authorization attack is directory scanning (directory traversal). Directory
scanning is a kind of attack that exploits flaws in the web server to gain illegal access to folders
and files which are not meant to be in the public area.

Once the attackers succeed in getting access, they can download sensitive data and install
harmful software on the server.

Command execution

Command execution attacks occur when malicious attackers are able to run arbitrary commands
through, and thereby control, the web application.
Logical attacks

Logical attacks, such as denial of service (DoS) attacks, prevent a web application from serving
regular customer activity and restrict the usage of the application.

Information disclosure

Information disclosure means showing sensitive data to the attackers; it covers attacks aimed at
obtaining specific information about the web application. Information leakage happens when a
web application discloses sensitive data, such as error messages or developer comments, that
might help an attacker to misuse the system.

For example, when a password is passed to the server, it should be encrypted while being
transmitted over the network.

(b) List the tasks that must be performed by the developer or tested during the preparation
fort unit testing.

In order to do Unit Testing, developers write a section of code to test a specific function in
software application. Developers can also isolate this function to test more rigorously which
reveals unnecessary dependencies between function being tested and other units so the
dependencies can be eliminated. Developers generally use UnitTest framework to develop
automated test cases for unit testing.
Unit Testing is of two types

 Manual
 Automated

Unit testing is commonly automated but may still be performed manually. Software Engineering
does not favor one over the other but automation is preferred. A manual approach to unit testing
may employ a step-by-step instructional document.

Under the automated approach-

 A developer writes a section of code in the application just to test the function. They
would later comment out and finally remove the test code when the application is
deployed.
 A developer could also isolate the function to test it more rigorously. This is a more
thorough unit testing practice that involves copy and paste of code to its own testing
environment than its natural environment. Isolating the code helps in revealing
unnecessary dependencies between the code being tested and other units or data
spaces in the product. These dependencies can then be eliminated.
 A coder generally uses a UnitTest Framework to develop automated test cases. Using an
automation framework, the developer codes criteria into the test to verify the correctness
of the code. During execution of the test cases, the framework logs failing test cases.
Many frameworks will also automatically flag and report, in summary, these failed test
cases. Depending on the severity of a failure, the framework may halt subsequent testing.
 The workflow of Unit Testing is 1) Create Test Cases 2) Review/Rework 3) Baseline 4)
Execute Test Cases.

Unit Testing Techniques

The unit testing techniques are mainly categorized into three parts: black box testing, which
involves testing of the user interface along with inputs and outputs; white box testing, which
involves testing the functional behaviour of the software application; and gray box testing, which
is used to execute test suites, test methods and test cases, and to perform risk analysis.
Code coverage techniques used in Unit Testing are listed below:

 Statement Coverage
 Decision Coverage
 Branch Coverage
 Condition Coverage
 Finite State Machine Coverage
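The gap between statement coverage and decision (branch) coverage can be illustrated with a small hypothetical function; the function and its inputs are assumptions made only for this example.

# Statement coverage vs decision (branch) coverage on an assumed function.
def classify(amount, is_member):
    label = "normal"
    if amount > 1000:              # decision D1
        label = "high"
    if is_member:                  # decision D2
        label += "-member"
    return label


# A single test executes every statement (100% statement coverage) ...
assert classify(2000, True) == "high-member"     # D1 true, D2 true

# ... but the false outcomes of D1 and D2 were never exercised.
# Decision (branch) coverage also requires, for example:
assert classify(10, False) == "normal"           # D1 false, D2 false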

4 (a) Describe the top-down and bottom-up approaches in integration testing discuss about
the merits and limitation of these approaches.

Top-down Integration Testing


o In top-down incremental integration testing, we add the modules incrementally, one by one,
and test the data flow between them in the same order.

o This testing technique deals with how higher-level modules are tested with lower-level
modules until all the modules have been tested successfully.
o In the top-down method, we also make sure that the module we are adding is
the child of the previous one; for example, Child C is a child of Child B.

o The purpose of executing top-down integration testing is to detect significant design
flaws and fix them early, because the required modules are tested first.
Advantages:

 Fault Localization is easier.


 Possibility to obtain an early prototype.
 Critical Modules are tested on priority; major design flaws could be found and fixed first.

Disadvantages:

 Needs many Stubs.
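The stub idea behind top-down integration can be sketched as follows; place_order and the inventory module are hypothetical, and the stub for the unfinished lower-level module is built with Python's unittest.mock.

# A top-down integration sketch: high-level module first, lower level stubbed.
from unittest.mock import Mock


def place_order(item, qty, inventory):
    """High-level module under integration: depends on a lower-level
    inventory module that may not be built yet."""
    if inventory.get_stock(item) >= qty:
        inventory.reserve(item, qty)
        return "CONFIRMED"
    return "BACKORDERED"


# Stub standing in for the missing lower-level inventory module.
inventory_stub = Mock()
inventory_stub.get_stock.return_value = 5

assert place_order("widget", 3, inventory_stub) == "CONFIRMED"
inventory_stub.reserve.assert_called_once_with("widget", 3)

inventory_stub.get_stock.return_value = 1
assert place_order("widget", 3, inventory_stub) == "BACKORDERED"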

Bottom Up Integration Testing

o This type of testing method deals with how lower-level modules are tested with higher-
level modules until all the modules have been tested successfully.
o In bottom-up testing, the top-level critical modules are tested last; hence a defect in them
may be found late.
o In simple words, we can say that we add the modules from the bottom to the
top and test the data flow in the same order.
o In the bottom-up method, we ensure that the module we are adding is the parent
of the previous one.

Advantages:

 Fault localization is easier.


 No time is wasted waiting for all modules to be developed unlike Big-bang approach

Disadvantages:
 Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
 An early prototype is not possible

(b) Suppose you are developing an online system for a specific vendor of the electronic
equipment with all the necessary features to run the Shop. Write down a detailed test plan
by including the necessary components

A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product. Test
Plan helps us determine the effort needed to validate the quality of the application under test. The
test plan serves as a blueprint to conduct software testing activities as a defined process, which is
minutely monitored and controlled by the test manager.

As per ISTQB definition: “Test Plan is A document describing the scope, approach, resources,
and schedule of intended test activities.”

How to create/write a good test plan?

At this stage, you are convinced that a test plan drives a successful testing process. Now, you
must be thinking ‘How to write a good test plan?’ To create and write a good test plan you can
use a test plan software. Also, We can write a good software test plan by following the below
steps:

1. Analyze the Product

The first step towards creating a test plan is to analyze the product, its features and
functionalities to gain a deeper understanding. Further, explore the business requirements and
what the client wants to achieve from the end product. Understand the users and use cases to
develop the ability of testing the product from user’s point of view.

2. Develop Test Strategy

Once you have analyzed the product, you are ready to develop the test strategy for different test
levels. Your test strategy can be composed of several testing techniques. Keeping the use cases
and business requirements in mind, you decide which testing techniques will be used.

For example, if you are building a website which has thousands of online users, you will include
‘Load Testing’ in your test plan. Similarly, if you are working on e-commerce website which
includes online monetary transactions, you will emphasize on security and penetration testing.

3. Define Scope
A good test plan clearly defines the testing scope and its boundaries. You can use requirements
specifications document to identify what is included in the scope and what is excluded. Make a
list of ‘Features to be tested’ and ‘Features not to be tested’. This will make your test plan
specific and useful. You might also need to specify the list of deliverables as output of your
testing process.
The term ‘scope’ applies to functionalities as well as on the testing techniques. You might need
to explicitly define if any testing technique, such as security testing, is out of scope for your
product. Similarly, if you are performing load testing on an application, you need to specify the
limit of maximum and minimum load of users to be tested.

4. Develop a Schedule

With the knowledge of testing strategy and scope in hand, you are able to develop schedule for
testing. Divide the work into testing activities and estimate the required effort. You can also
estimate the required resources for each task. Now, you can include test schedule in your testing
plan which helps you to control the progress of testing process.

5. Define Roles and Responsibilities

A good test plan clearly lists down the roles and responsibilities of testing team and team
manager. The section of ‘Roles and Responsibilities’ along with ‘schedule’ tells everyone what
to do and when to do.

6. Anticipate Risks

Your test plan is incomplete without anticipated risks, mitigation techniques and risk responses.
There are several types of risks in software testing such as schedule, budget, expertise,
knowledge. You need to list down the risks for your product along with the risk responses and
mitigation techniques to lessen their intensity.

What to include in test plan?

Different people may come up with different sections to be included in testing plan. But who will
decide what is the right format? How about using IEEE Standard test plan template to assure that
your test plan meets all the necessary requirements?

Usage of standardized templates will bring more confidence and professionalism to your team.
Let’s have a look at the details to know how you can write a test plan according to IEEE 829
standard. Before that, we need to understand what is IEEE 829 standard?

IEEE 829 Standard for Test Plan


IEEE is an international institution that defines standards and template documents which are
globally recognized. IEEE has defined the IEEE 829 standard for system and software
documentation. It specifies the format of a set of documents that are required in each stage of
software and system testing.

IEEE has specified eight stages in the documentation process, producing a separate
document for each stage.

According to IEEE 829 test plan standard, following sections goes into creating a testing plan:

1. Test plan identifier

As the name suggests, ‘Test Plan Identifier’ uniquely identifies the test plan. It identifies the
project and may include version information. In some cases, companies might follow a
convention for a test plan identifier. Test plan identifier also contains information of the test plan
type. There can be the following types of test plans:

 Master Test Plan: A single high level plan for a project or product that combines all
other test plans.
 Testing Level Specific Test Plans: A test plan can be created for each level of testing i.e.
unit level, integration level, system level and acceptance level.
 Testing Type Specific Test Plans: Plans for major types of testing like Performance
Testing Plan and Security Testing Plan.

Example Test Plan Identifier: ‘Master Test plan for Workshop Module TP_1.0’

2. Introduction

The introduction contains the summary of the testing plan. It sets the scope, goals and
objectives of the test plan. It also contains resource and budget constraints, and it specifies
any constraints and limitations of the test plan.

3. Test items

Test items list the artifacts that will be tested. It can be one or more module of the
project/product along with their version.

4. Features to be tested
In this section, all the features and functionalities to be tested are listed in detail. It shall also
contain references to the requirements specifications documents that contain details of features to
be tested.

5. Features not to be tested

This section specifies the features and functionalities that are out of the scope for testing. It shall
contain reasons of why these features will not be tested.

6. Approach

In this section, approach for testing will be defined. It contains details of how testing will be
performed. It contains information of the sources of test data, inputs and outputs, testing
techniques and priorities. The approach will define the guidelines for requirements analysis,
develop scenarios, derive acceptance criteria, construct and execute test cases.

7. Item pass/fail criteria

This section describes a success criteria for evaluating the test results. It describes the success
criteria in detail for each functionality to be tested.

8. Suspension criteria and resumption requirements

It will describe any criteria that may result in suspending the testing activities and subsequently
the requirements to resume the testing process.

9. Test deliverables

Test deliverables are the documents that will be delivered by the testing team at the end of
testing process. This may include test cases, sample data, test report, issue log.

10. Testing tasks

In this section, testing tasks are defined. It will also describe the dependencies between any tasks,
resources required and estimated completion time for tasks. Testing tasks may include creating
test scenarios, creating test cases, creating test scripts, executing test cases, reporting bugs,
creating issue log.

11. Environmental needs


This section describes the requirements for test environment. It includes hardware, software or
any other environmental requirement for testing. The plan should identify what test equipment is
already present and what needs to be procured.

12. Responsibilities

In this section of the test plan, roles and responsibilities are assigned to the testing team.

13. Staffing and training needs

This section describes the training needs of the staff for carrying out the planned testing activities
successfully.

14. Schedule

The schedule is created by assigning dates to testing activities. This schedule shall be in
agreement with the development schedule to make a realistic test plan.

15. Risks and contingencies

It is very important to identify the risks, likelihood and impact of risks. Test plan shall also
contain mitigation techniques for the identified risks. Contingencies shall also be included in the
test plan.

16. Approvals

This section contains the signature of approval from stakeholders.


PART-A

1. Express the framework for test automation?


 Modular Based Testing Framework - mainly built on the concept of abstraction.
 Data Driven Framework
 Keyword Driven Testing Framework
 Linear Automation Framework
 Hybrid Testing Framework

2. Discover the objectives of testing?

 Reducing risks, for bug-free components don't always perform well as a system.
 Preventing as many defects and critical bugs as possible by careful examination.
 Verifying the conformance of design, features, and performance with the
specifications stated in the product requirements.

3. Classify the types of test defect metrics?

 Test case execution productivity metrics.


 Test case preparation productivity metrics.
 Defect metrics.
 Defects by priority.
 Defects by severity.
 Defect slippage ratio.

4. Mention the challenges in automation?

 Effective Communication and Collaboration in the Team - this is perhaps a challenge
not just in test automation but also in manual testing teams.
 Selecting the Right Tool
 Demanding Skilled Resources
 Selecting a Proper Testing Approach
 High Upfront Investment Cost
5. Mention the criteria’s for selecting test tools?

 Flexibility and Ease of Use
 Support for End-to-End Traceability
 Real-time Reports and Dashboards
 Support for Test Automation
 Integration With Other Phases of the Application Lifecycle

6.What are the goals of Reviewers?


The goals of the reviewers are 1) to help improve the work product under review by
pointing out strengths and weaknesses that may not be apparent to the author, and
2) to help improve the review and editing skills of the participants.

7.Outline the need for test metrics &Give any two metrics
Software Testing Metrics are the quantitative measures used to estimate the progress,

quality, productivity and health of the software testing process. The goal of software

testing metrics is to improve the efficiency and effectiveness in the software testing

process and to help make better decisions for further testing process by providing

reliable data about the testing process.

A Metric defines in quantitative terms the degree to which a system, system

component, or process possesses a given attribute. The ideal example to understand

metrics would be a weekly mileage of a car compared to its ideal mileage

recommended by the manufacturer.

8. Define test automation

Test automation is the practice of running tests automatically, managing test data,

and utilizing results to improve software quality. It’s primarily a quality assurance

measure, but its activities involve the commitment of the entire software production

team. From business analysts to developers and DevOps engineers, getting the most out

of test automation takes the inclusion of everyone.

9.Can you show on the reason why metrics in testing?


Metrics are needed in testing to quantify the progress, quality and effectiveness of the testing

process and to support decisions about further testing. A Metric defines in quantitative terms

the degree to which a system, system component, or process possesses a given attribute. The

ideal example to understand metrics would be the weekly mileage of a car compared to the ideal

mileage recommended by the manufacturer.

10.Distinguish between milestone and deliverable

The difference between a milestone and a deliverable is that a milestone signifies project progress towards

obtaining its end objectives, a stepping stone that must be reached in order to continue, whereas a deliverable is a

measurable result of this process.

11.What is walkthrough?

Walkthrough in software testing is used to review documents with peers, managers, and fellow team
members who are guided by the author of the document to gather feedback and reach a consensus. A
walkthrough can be pre-planned or organised based on the needs.

12. Summarize the reasons for selecting the test tool for automation

Step 1: Understand your project requirements thoroughly

Step 2: Consider your existing test automation tool as a benchmark

Step 3: Identify the key criteria suitable for a project

Step 4: Leverage Pugh Matrix Technique for Analysis

13. Classify the skills needed for automation

These skills include scripting, collaboration, source-code management, Kubernetes, security, testing,
observability, monitoring, and network awareness (among others).

14. Can you make the comparison between metrics and measurement?

Metrics and measurements are similar enough that the two terms are commonly used interchangeably.
The key difference is that a metric is based on standardized procedures, calculation methods and
systems for generating a number. A measurement could be taken with a different technique each
time.

15. What is the need of Automated testing?


Automated Testing Saves Time and Money
Manually repeating these tests is costly and time consuming. Once created, automated tests can be run over
and over again at no additional cost and they are much faster than manual tests. Automated software testing
can reduce the time to run repetitive tests from days to hours.

16. Compare product development and automation.


Development of a software results in some business value being generated when the software is put to
use. Automation of this software saves cost associated with the business.

17. Give the formula for defects per 100 hours of testing.

 Defects per 100 hours of testing = (Total number of defects found / Total hours of

testing) X 100

18. Name any two software testing tools.


 TestingWhiz
 HPE Unified Functional Testing (HP UFT, formerly QTP)
 TestComplete
 Ranorex
 Sahi
 Watir

19. What is the main plan of Test framework?


A testing framework is a set of guidelines or rules used for creating and designing test cases. A
framework is comprised of a combination of practices and tools that are designed to help QA
professionals test more efficiently.

20. Define progress Metrics.


Progress measurement involves determining and reporting on task, activity, and project progress. Performance
measurement compares this progress against defined criteria, targets, or benchmarks to assess whether a
project is over- or under-performing. It involves six key metrics: Cost Variance (CV), Schedule Variance (SV),
Cost Performance Index (CPI), Schedule Performance Index (SPI), Variance at Completion (VAC), and
To-Complete Performance Index (TCPI).

PART –B
1.Describe briefly about the various types of test automation and
scope of automation?
TEST AUTOMATION:
Automation testing, or more accurately test automation, refers to the automation of execution of
test cases and comparing their results with the expected results. That’s a standard definition that
you might find everywhere on the internet. So, let's make it more clear with an example. As you
know, manual testing, is performed by humans while writing each test case separately and then
executing them carefully, automation testing is performed with the help of an automation tool to
run the test cases.
It is widely used to automate repetitive tasks and other testing tasks that are unable to execute by
manual testing. Also, it supports both functional and non-functional testing.
But why should you use automation testing rather than manual testing? Well, there are multiple
reasons for that, such as:
 Manual testing for all the workflows and fields is very time-consuming and costly.
 Testing various sites manually is very difficult and complex.
 Manual testing requires repeated human intervention whereas automation doesn’t.
 With automation, the speed of test execution as well as test coverage increases.
These points are enough to give you an idea about why you need automated testing rather than
manual testing. However, that doesn’t mean you have to or should automate every test case; there
is a specific criterion for automating test cases.

Test Cases That Need to Be Automated


You can automate the test cases based on the below conditions, which will also help you increase
the ROI on automation.
 When there is a high risk involved, such as business-critical test cases.
 If you need to execute test cases repeatedly.
 If test cases are tedious, and you are unable to execute them manually.
 When test cases take more than expected time for execution.
In most cases, using automation is only beneficial for the above conditions, otherwise, you should
continue using manual testing.

TYPES OF AUTOMATION TEST:


After knowing the automation frameworks, you might be interested in knowing the types of automation testing.
Depending on your application, there are different types of testing that can be automated. Here, we have mentioned
the most crucial types of automation testing.

1. Unit Testing:
In unit testing, the individual components/units of a web application are tested. In general, unit tests are written by
developers, but automation testers can also write them. Unit testing of a web app is performed during the
development phase. It is also considered as the first level of web app testing.

2. Smoke Testing:
Smoke testing is performed to examine whether the deployed build is stable or not. In short,
verifying the working process of essential features so that testers can proceed with further
testing.

3. Functional Testing:
Functional testing is performed to analyze whether all the functions of your web app works
as expected or not. The sections covered in functional testing involves user interface, APIs,
database, security, client/server applications, and overall functionality of your website.

4. Integration Testing:
In integration testing, the application modules are integrated logically and then tested as a
group. It focuses on verifying the data communication between different modules of your
web app.

5. Regression Testing:
Regression testing is performed to verify that a recent change in code doesn’t affect the
existing features of your web app. In simple terms, it verifies that the old code works in the
same way as they were before making new changes.
Apart from the above testing types, there are some other automated tests as well that need
to be executed, such as data-driven testing, black box testing, keyword testing, etc.
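As an illustration of the data-driven style mentioned above, the following sketch runs one automated check over a table of inputs using unittest's subTest; validate_password and its rules are hypothetical examples.

# A data-driven automated check sketch (assumed validation rules).
import unittest


def validate_password(pwd):
    return len(pwd) >= 8 and any(c.isdigit() for c in pwd)


class DataDrivenPasswordTest(unittest.TestCase):
    CASES = [
        ("abc12345", True),          # long enough and has a digit
        ("short1", False),           # too short
        ("longbutnodigits", False),  # no digit
    ]

    def test_password_rules(self):
        for pwd, expected in self.CASES:
            with self.subTest(pwd=pwd):
                self.assertEqual(validate_password(pwd), expected)


if __name__ == "__main__":
    unittest.main()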

SCOPE OF AUTOMATION TESTING:


The scope of automation is the part of your application that needs to be automated, it can
be determined with the following considerations:
 Common functionalities across the application
 Features that are crucial for your business
 Technical feasibility
 The complexity of test cases
 Capability to use similar test cases for cross browser testing
Based on these points, you can describe the scope of automation.

Ever since technology started progressing at a speedy pace, the demand for getting projects
done quicker has increased more than ever. To get projects done fast, all the procedures
followed during the software life cycle need to be accelerated as well. In
the area of software testing, automation can be implemented to save cost and time; for
large-scale testing, automation is the way to go.

Test automation brings a number of important advantages: it increases software quality,
lessens manual software testing operations, eliminates redundant testing effort, creates more
systematic and repeatable software tests, minimises repetitive work, and generates more
consistent testing outcomes with higher reliability.

2. Discuss in detail about selecting the test tools in test automation?


 The Importance of the Software Testing Tool Selection
 Type of Test Tools
 Open-Source Tools
 Commercial Tools
 Custom Tools
 Automation Feasibility Analysis
 Tool Selection Process
 Step 1 - Identify the Requirement for Tools
 Step 2 - Evaluate the Tools and Vendors
 Step 3 - Estimate Cost and Benefit
 Step 4 - Make the Final Decision
 Things To Consider While Choosing a Test Management Tool
 Test Management Tool should Improve Productivity
 Agile Support
 External Integration
 Mobile
 Support

Type of Test Tools:


There’re many types of test tool, which Test Manager can consider when selecting the
test tools.

Open-Source Tools:
Open-source tools are programs whose source code is openly published for
use and/or modification from its original design, free of charge.
Open-source tools are available for almost any phase of the testing process, from test
case management to defect tracking. Compared to commercial tools, open-source
tools may have fewer features.

Commercial Tools:
Commercial tools are software products that are produced for sale or to serve commercial
purposes.

Commercial tools come with more support and more features from a vendor than open-
source tools.

Custom Tools:
In some testing projects, the testing environment and the testing process have special
characteristics, and no open-source or commercial tool can meet the requirements.
In that case, the Test Manager has to consider developing a custom tool.

Example: You want to find a testing tool for the project Guru99 Bank, and you want this tool
to meet some specific requirements of the project.

Tool Selection Process:


To select the most suitable testing tool for the project, the Test Manager should follow
the below tools selection process

Step 1) Identify the Requirement for Tools:


How can you select a testing tool if you do not know what you are looking for?

You need to precisely identify your test tool requirements. All requirements must
be documented and reviewed by the project team and the management board.
Step 2) Evaluate the Tools and Vendors:
After baselining the tool requirements, the Test Manager should:

 Analyze the commercial and open-source tools that are available in the market,
based on the project requirements.
 Create a tool shortlist which best meets your criteria.
 Consider the vendor as a factor: the vendor's reputation, after-sales support and
tool update frequency should all influence your decision.
 Evaluate the quality of the tool through trial usage and by launching a pilot.
Many vendors make trial versions of their software available for download.

Step 3) Estimate Cost and Benefit:


To ensure the test tool is beneficial for the business, the Test Manager has to balance cost
against benefit:

A cost-benefit analysis should be performed before acquiring or building a tool.

Example: After spending considerable time to investigate testing tools, the project team
found the perfect testing tool for the project Guru99 Bank website. The evaluation
results concluded that this tool could

 Double the current productivity of test execution


 Reduce the management effort by 30%

However, after discussing with the software vendor, you found that the cost of this tool
is too high compared to the value and benefit that it can bring to the team.

In such a case, the balance between cost & benefit of the tool may affect the final
decision.

Step 4) Make the Final Decision:


To make the final decision, the Test Manager must:

 Have a strong awareness of the tool, which means understanding its strong points
and its weak points.

 Balance cost and benefit.

Even with hours spent reading the software manual and vendor information, you may still
need to try the tool in your actual working environment before buying the license.
You should meet with the project team and consultants to get deeper knowledge of the
tool.

Your decision may adversely impact the project, the testing process, and the business
goals, so you should take the time to think hard about it.

3. Developing software to test the software is called test automation.


Test automation can help address several problems, Justify. Draw the
Framework for test automation?

Test Automation Framework:


A framework is defined as a set of rules or best practices that can be followed in a
systematic way that ensures to deliver the desired results.
Typically, a broader description of test automation framework shows that it consists of a
set of processes, tools, and protocols that can be collectively used for automated testing
of software applications.
An automation testing framework is a platform developed by integrating various
hardware and software resources along with various tools for automation testing and
web service automation, based on a qualified set of assumptions. This
framework enables efficient design and development of automated test scripts and
ensures reliable analysis of issues or bugs for the system or application under test
(AUT).
The important functions of software testing automation frameworks are, broadly, to
identify objects and arrange them for reuse in test scripts, to perform actions on these
identified objects, and to evaluate these objects to obtain the expected results.
It can be inferred that a testing framework is an execution environment for automated
tests which revolve around a set of assumptions, concepts, and practices that
successfully support the designated process of automated testing.

The purpose of a Test Automation Framework:


– Enhances efficiency during the design and development of automated test scripts by
enabling the reuse of components or code
– Provides a structured development methodology to ensure uniformity of design across
multiple test scripts and to reduce dependency on individual test-case developers
– Enables reliable issue and bug detection and delivers proper root-cause analysis with
minimum human intervention for the system under test
– Reduces dependence on teams by automatically selecting the test to execute
according to test scenarios
– Refines dynamically test scope according to changes in the test strategy or conditions
of the system under test
– Improves utilization of various resources and enables maximum returns on efforts
– Ensures an uninterrupted automated testing process with little man-power
involvement

Different Types of Framework used in Automation Testing


There has been a significant evolution of these over the years and some of the
important types of these frameworks are:

Linear Scripting Framework:


This framework is based on the concept of record and playback mode that is always
achieved in a linear manner. It is more commonly known as the record-and-playback model.
Typically, in this script-driven framework, test scripts are created and executed
individually, and this framework is an effective way for enterprises to get started.
The automation scripting is done in an incremental manner, where every new interaction
is added to the automation tests.
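
A minimal sketch of what such a linear, record-and-playback style script typically looks like is given below (assuming Selenium WebDriver in Python; the URL, element IDs and credentials are purely illustrative). Note the hard-coded values, which are exactly what makes such scripts costly to maintain:

# Record-and-playback style linear script (sketch; URL, element IDs and login
# details are hard-coded, which is the main maintenance weakness of this style).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                                    # step 1: start browser
driver.get("https://example.com/login")                        # step 2: open the page
driver.find_element(By.ID, "username").send_keys("guru99")     # step 3: type user name
driver.find_element(By.ID, "password").send_keys("secret")     # step 4: type password
driver.find_element(By.ID, "login").click()                    # step 5: submit the form
assert "Dashboard" in driver.title                             # step 6: verify the result
driver.quit()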

Modular Testing Framework:


Abstraction is the concept on which this framework is built. Based on the modules,
independent test scripts are developed to test the software. Specifically, an abstraction
layer is built so that the components are hidden from the application under test.
This abstraction ensures that changes made to other parts of the
application do not affect the underlying components.
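
The sketch below illustrates the modular idea in a page-object style (assuming Selenium WebDriver; the page name, element IDs and URL are illustrative). Locator details live inside one module class, so a change in the login screen touches only LoginPage and not every test script:

# Modular / page-object style sketch: the abstraction layer for the login module
# keeps its locators in one place.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    # Abstraction layer for the login module of the application under test.

    def __init__(self, driver):
        self.driver = driver

    def open(self, base_url):
        self.driver.get(base_url + "/login")

    def login(self, user, password):
        # Locator details stay inside this module; tests never touch raw IDs.
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login").click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    page = LoginPage(driver)
    page.open("https://example.com")
    page.login("guru99", "secret")
    print("Logged in, page title:", driver.title)
    driver.quit()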

Data Driven Testing Framework:


In this testing framework, a separate file in a tabular format is used to store both the
input and the expected output results. In this framework, a single driver script can
execute all the test cases with multiple sets of data.
This driver script contains the navigation through the program and covers both the
reading of data files and the logging of test status information.
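
A minimal data-driven sketch is shown below (assuming pytest; the login data and the do_login stand-in are illustrative). One driver test executes the same steps for every row of the data table; in practice the table would be read from a CSV or Excel file:

# Data-driven sketch: one driver test runs the same steps for every row of a
# data table (inlined here for brevity; normally read from a CSV/Excel file).
import pytest

LOGIN_DATA = [
    # username,  password, expected_result
    ("guru99",   "secret", "success"),
    ("guru99",   "wrong",  "failure"),
    ("unknown",  "secret", "failure"),
]


def do_login(username, password):
    # Stand-in for the real login action against the application under test.
    return "success" if (username, password) == ("guru99", "secret") else "failure"


@pytest.mark.parametrize("username,password,expected", LOGIN_DATA)
def test_login_data_driven(username, password, expected):
    assert do_login(username, password) == expected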

Keyword Driven Testing Framework:


Keyword-driven testing is an application-independent framework that uses
data tables and keywords to describe the actions to be performed on the application
under test. For web-based applications this is often called a keyword-driven test
automation framework, and it can be seen as an extension of the data-driven testing framework.
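
The sketch below shows the keyword-driven idea in miniature (all keyword names and the step table are illustrative). Test steps are written as data, and a small engine maps each keyword to the action it performs on the application under test:

# Keyword-driven sketch: test steps are written as a table of keywords plus
# arguments, and a small engine maps each keyword to an action.
ACTIONS = {}


def keyword(name):
    # Decorator that registers a function as the implementation of a keyword.
    def register(func):
        ACTIONS[name] = func
        return func
    return register


@keyword("open_page")
def open_page(state, url):
    state["page"] = url


@keyword("enter_text")
def enter_text(state, field, value):
    state[field] = value


@keyword("verify")
def verify(state, field, expected):
    assert state.get(field) == expected, f"{field!r} != {expected!r}"


# The test case itself is just data, readable by non-programmers.
TEST_TABLE = [
    ("open_page",  "https://example.com/login"),
    ("enter_text", "username", "guru99"),
    ("verify",     "username", "guru99"),
]


def run(table):
    state = {}
    for name, *args in table:
        ACTIONS[name](state, *args)   # dispatch each keyword to its action


if __name__ == "__main__":
    run(TEST_TABLE)
    print("keyword-driven table executed successfully")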

Hybrid Testing Framework:


This form of hybrid testing framework is the combination of modular, data-driven and
keyword test automation frameworks. As this is a hybrid framework, it has been based
on the combination of many types of end-to-end testing approaches.

Test Driven Development framework (TDD):


Test-driven development is a technique of using automated unit tests to drive the design
of software and to decouple it from its dependencies. With traditional testing, a
successful test could find one or more defects; with TDD, the tests are written first, which
increases the speed of testing and improves the confidence that the system meets the
requirements and is working properly, compared to traditional testing.
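
A minimal TDD-style sketch is given below (assuming Python's unittest; the interest function is illustrative). The tests are written first and fail, the smallest implementation that makes them pass is then added, and the code is refactored with the tests as a safety net:

# TDD sketch: the tests below were written first (and failed), then the
# smallest implementation that makes them pass was added.
import unittest


def simple_interest(principal, rate, years):
    # Minimal implementation, written only after the tests existed.
    return principal * rate * years


class TestSimpleInterest(unittest.TestCase):
    def test_one_year_interest(self):
        self.assertAlmostEqual(simple_interest(1000, 0.05, 1), 50.0)

    def test_zero_years_gives_zero_interest(self):
        self.assertAlmostEqual(simple_interest(1000, 0.05, 0), 0.0)


if __name__ == "__main__":
    unittest.main()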

Behavior Driven Development Framework (BDD):


This framework has been derived from the TDD approach; in this method, tests are
focussed on and based on the system's behavior. In this approach, testers can
create test cases in simple English language, which helps even
non-technical people to easily analyse and understand the tests.
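
The following plain-Python sketch mirrors the Given/When/Then style of BDD (the account scenario is illustrative; real BDD tools such as Cucumber or pytest-bdd bind such steps to feature files written in simple English):

# BDD-style sketch: the scenario reads like the Given/When/Then steps a
# non-technical stakeholder could review.
def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    balance = 100

    # When the user withdraws 30
    balance -= 30

    # Then the remaining balance should be 70
    assert balance == 70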

4. (a) List the generic requirements for test tool. Explain with
suitable examples?

Testing Tools:
Tools from a software testing context can be defined as a product that supports one or
more test activities right from planning, requirements, creating a build, test execution,
defect logging and test analysis.

Classification of Tools
Tools can be classified based on several parameters. They include:
 The purpose of the tool
 The Activities that are supported within the tool
 The Type/level of testing it supports
 The Kind of licensing (open source, freeware, commercial)
 The technology used

Types of Tools:

1. Test Management Tool: used for managing, scheduling, defect logging, tracking and analysis; used by testers.
2. Configuration Management Tool: used for implementation, execution and tracking changes; used by all team members.
3. Static Analysis Tools: used for static testing; used by developers.
4. Test Data Preparation Tools: used for analysis and design, test data generation; used by testers.
5. Test Execution Tools: used for implementation and execution; used by testers.
6. Test Comparators: used for comparing expected and actual results; used by all team members.
7. Coverage Measurement Tools: used for providing structural coverage; used by developers.
8. Performance Testing Tools: used for monitoring performance and response time; used by testers.
9. Project Planning and Tracking Tools: used for planning; used by project managers.
10. Incident Management Tools: used for managing incidents raised during tests; used by testers.

Tools Implementation - process


 Analyse the problem carefully to identify strengths, weaknesses and
opportunities
 Note the constraints such as budget, time and other requirements
 Evaluate the options and shortlist the ones that meet the requirements
 Develop a proof of concept which captures the pros and cons
 Create a pilot project using the selected tool within a specified team
 Roll out the tool phase-wise across the organization

4. (b) Why are testing metrics needed? Analyze the productivity metrics.

Testing Metrics:
Testing Metrics are the quantitative measures used to estimate the progress, quality,
productivity and health of the software testing process. The goal of software testing
metrics is to improve the efficiency and effectiveness in the software testing process
and to help make better decisions for further testing process by providing reliable data
about the testing process.
A Metric defines in quantitative terms the degree to which a system, system component,
or process possesses a given attribute. The ideal example to understand metrics would
be a weekly mileage of a car compared to its ideal mileage recommended by the
manufacturer.
Productivity Metrics:
 Test case execution productivity metrics
 Test case preparation productivity metrics
 Defect metrics
 Defects by priority
 Defects by severity
 Defect slippage ratio
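
The sketch below shows commonly used forms of these productivity metrics (the exact formulas vary between organizations, so treat these definitions as assumptions rather than standards):

# Assumed formulas for the productivity metrics listed above.
def execution_productivity(cases_executed, effort_hours):
    # Test case execution productivity = test cases executed per person-hour.
    return cases_executed / effort_hours


def preparation_productivity(cases_prepared, effort_hours):
    # Test case preparation productivity = test cases prepared per person-hour.
    return cases_prepared / effort_hours


def defect_slippage_ratio(defects_missed, defects_found_in_testing):
    # Defects that slipped past testing relative to those caught by testing.
    return defects_missed / defects_found_in_testing


if __name__ == "__main__":
    print(execution_productivity(120, 40))       # 3.0 test cases per hour
    print(defect_slippage_ratio(5, 95))          # about 0.05, i.e. 5% slipped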

5. What are the challenges faced in test automation? Explain


The challenges faced in test automation are:

1. Effective Communicating and Collaborating in Team:


This is perhaps a challenge not just in test automation but also in manual testing teams. However,
it is more complicated in test automation than in manual testing because it requires more
communication and collaboration within the automation team. Test automation is an
investment. Therefore, like any other investment, to get all team members involved in
identifying test automation objectives and setting targets, we need to spend significant effort on
communication and provide strong evidence and historical data, and we may even do a proof of concept. In
addition, to have clear purposes and goals, we need to keep the entire team on the same page.
Unlike manual testers, automation testers not only talk with developers, business analysts,
and project managers about the plan, scope, and timeframe but also discuss what should and
shouldn't be automated with manual testers, developers, and technical architects.
Moreover, we have to present the cost and benefit analysis along with the Return on Investment
(ROI) analysis to the higher management team. Without management support,
the whole test automation effort is put at risk. Hence, how we communicate and collaborate
effectively among these teams and others is a big challenge. Ineffective communication
and collaboration can easily turn the test automation experience into a nightmare.

2. Selecting a Right Tool:


Nowadays, there are a variety of testing tools, ranging from free and open-source tools
like Katalon and Selenium to commercial tools like TestComplete and supporting
different testing types and technologies. Each tool tends to support particular situations. Vendors
of testing products have a tendency to exaggerate the ability of their products. Vendors often
assume that they have a “secret sauce” for all automation tastes. This causes misconceptions and
confusions for us to select an appropriate testing tool satisfying our needs. Plus, many of us do
not do enough research before making a decision about tool selection, and we tend to buy
popular commercial tools quickly based on an inadequate evaluation. Remember that a sufficient
assessment includes defining a set of tool requirements criteria based on the AUT and the
experience of experts who have already used the tools considerably.
Unfortunately, people do not have enough resources to fulfill this requirement. No matter what
kind of process and testing methodology we have, if a tool does not match our technical and
business expectations, we will give up using it. Eventually, test automation will be failed and not
be applied in testing activities any longer. In my point of view, choosing a test tool is as
complicated as getting married to a person. If you marry with an inappropriate person, you tend
to break up sooner or later. Similarly, without a suitable test tool, we will deadly end up with
failed test automation effort.

3. Demanding Skilled Resources:


Some people claim that test automation can be handled by manual testers or any technical
testers because many test tools already support recording test scripts and playing them back
easily and quickly. This is a huge myth. In fact, test automation requires the necessary technical
skills to accurately design and maintain the test automation framework and test scripts, build
solutions, and resolve technical issues. Automated testing resources need to have strong
knowledge of the framework's design and implementation. To fulfil these job requirements,
these resources need both strong programming skills and solid knowledge of test automation
tools. On the other side, others believe that developers can entirely manage and take on the test
automation responsibilities.
However, whether developers can write test scripts that correctly reflect testers' views and end-users'
needs is a big concern, even though they can easily develop code in accordance with the test automation
framework. Certainly, we can utilize the resources within our test automation process more
effectively; however, skilled resources are always of importance in any test automation effort.

4. Selecting a Proper Testing Approach:


Automation testing not only requires the right tool to create scripts but also needs a correct testing
approach. This is one of the biggest challenges for test automation engineers. Technically,
it is vital for testers to find an appropriate test automation approach. In order to do so, they have
to answer several important questions: How do we reduce effort in both the implementation and
maintenance of test scripts and test suites? Will the automation test suites have a long lifetime?
How do we generate useful test reports and metrics? With the adoption of Agile development in recent
years, the application under test often changes through development cycles.
Therefore, test suites have to be designed and implemented so that they correctly identify these
changes and can be kept up to date quickly with reasonable maintenance effort. It is ideal to have a test
automation solution that can detect these issues and automatically update and re-validate the tests
without any human intervention. Definitely, it is not easy to address these difficult questions.

5. High Upfront Investment Cost:


Talking about test automation, most of us agree that automated regression testing is crucial and
useful in most Agile contexts. But when it comes to cost, we have many concerns. As a
matter of fact, the initial phase of test automation is usually expensive. It is necessary to analyze,
design, and build a test automation framework, libraries, reusable functions, etc. In some cases,
it is required to take into account licensing costs and facilitating and operating costs such as
hardware and software costs.
Moreover, even though we can use free open-source tools to reduce the licensing costs, we might
spend significant effort on learning, training, and maintaining them. Furthermore, we also have to take
hidden costs into consideration, such as the cost of meetings, communication, and collaboration,
and we have to ensure that these costs do not distort our decisions. Although there is a huge payoff
in the long run after running some regression testing cycles, convincing the stakeholders to reach
a consensus about this investment is a big challenge. In fact, simply due to budget constraints,
many people tend to give up test automation even though they agree with its executable goal
and high ROI.
6. (a) Identify what are the key benefits in using metrics in
product development and testing?

Software Testing Metrics are useful for evaluating the health, quality, and progress of a software
testing effort. Without metrics, it would be almost impossible to quantify, explain, or demonstrate
software quality. Metrics also provide a quick insight into the status of software testing efforts,
hence resulting in better control through smart decision making. Traditional software testing metrics
dealt with defect-based measures that were used to measure the team's effectiveness. They usually
revolved around the number of defects that leaked to production, called Defect Leakage, or the
defects that were missed during a release, which reflects the team's ability and product knowledge.
Another team metric was the percentage of valid and invalid defects. These metrics
can also be captured at an individual level, but are generally measured at a team level.

Software Testing Metrics had always been an integral part of software testing projects, but the
nature and type of metrics collected and shared have changed over time. Top benefits of tracking
software testing metrics include the following:

 Helps achieve cost savings by preventing defects


 Helps improve overall project planning
 Helps to understand whether the desired quality is achieved
 Enforces keenness to further improve the processes
 Helps to analyze the risks associated in a deeper way
 Helps to analyze metrics in every phase of testing to improve defect removal efficiency
 Improves Test Automation ROI over a time period
 Enforces better relationship between testing coverage, risks and complexities of the systems

Divisions of Common Software Testing Metrics


Some of the most common software testing metrics that can be used for both automation and for
software testing in general are given below. These are useful in the broader spectrum of software
testing.

The general software testing metrics are divided into the following three
categories:
 Coverage: It refers to the meaningful parameters for measuring test scope and test success

 Progress: Deals with the parameters that help identify test progress to be matched against
success criteria. This metrics is collected iteratively over time and measures metrics like Time
to fix defects, Time to test, etc.

 Quality: used to obtain meaningful measures of excellence, worth, value, etc. of the
product being tested; quality is difficult to measure directly
6 (b) What are the steps involved in a metrics program. Briefly explain
each step?

Step 1- Identify Metrics Customer:


Person/people that will use the metric. Customers may include: functional management,
project management, software engineers/programmers, tests managers/testers,
specialists (marketing, software quality assurance, support, etc.), customers/users.
Step 2 – Target Goals:
This step selects one or more measurable goals (e.g. on-time delivery, delivering the
software with required level of quality/performance, etc.).
Step 3 – Ask Question:
Define questions that need to be answered in order to ensure goals are obtained.
Step 4 – Select Metrics:
Select the metrics that provide the information needed to answer these questions,
answers that give objectives to the selected metrics.
An individual metrics performs one of four functions:

 Understand software process, product, services.


 Evaluate against established standards and goals.
 Control resources and processes.
 Predict attributes of software.

Step 5 – Standardized Definitions:


Either use standardized definitions or create your own (but make them explicit and
clear).
Step 6 – Choose a measurement function:
"If we try to include of all the elements that affect the attribute or characterise the entity,
our model can become so complicated that it's useless. Being pragmatic means not
trying to create the most comprehensive model."
Step 7 – Establish a measurement method:
Define Base Measures, Units, and how they are to be calculated.
Example: SLOC is well-known and widely accepted, but there is no industry-accepted
standard on how to count lines of code.
Once selected, communicate it so others won't misunderstand or misuse it.
Step 8 – Define decision criteria:
According to the ISO 15939 standard, decision criteria are the "thresholds, targets, or
patterns used to determine the need for action or further investigation, or to describe the
level of confidence in a given result."
In other words, you need decision criteria to obtain guidance that will help you interpret
results.

 Decision criteria for control-type metrics usually take the form of thresholds,
variances or control limits.
 Decision criteria for evaluate-type metrics (i.e. how good?) may be: "no more than x% failures, with
2/3 minor and 1/3 major".
 For predict and evaluate metrics, it is the "level of confidence in a given result"
part of the standard that applies.
Step 9 – Define report mechanism:
This includes defining the report format (table, charts, etc.), data extraction and
reporting cycle (how often data are extracted and the report generated), reporting
mechanisms (the way the report is delivered (hard copy, email, published, etc.),
distribution (who receives the report), and availability (restrictions on metrics access).

7. How do you calculate defect density and defect removal rate?


Discuss ways to improve these rates for a better quality product?

Software is tested based on its quality, scalability, features, security, and performance,
including other essential elements. It's common to detect defects and errors in a
software testing process. However, developers must ensure they are taken care of
before launching it to the end-users. This is because fixing an error at an early stage will
cost significantly less than rectifying it at a later stage.
The process of defect detection ensures developers that the end product comprises all
the standards and demands of the client. To ensure the perfection of software, software
engineers follow the defect density formula to determine the quality of the software.

More Defects = Lower Quality


Defect Density in software testing:
Defect density is numerical data that determines the number of defects detected in
software or component during a specific development period. It is then divided by the
size of the software. In short, it is used to ensure whether the software is released or not.

The role of defect density is extremely important in the Software Development Life Cycle
(SDLC). First, it is used to identify the number of defects in the software. Second, it allows
the testing team to decide whether an additional inspection team is needed for re-engineering
and replacements.

Defect density also makes it easier for developers to identify components prone to
defects in the future. As a result, it allows testers to focus on the right areas and give
the best investment return at limited resources.

How to Calculate Defect Density?


The defect density is calculated by dividing the 'total defects' of software by its 'Size.'

Defect Density = Total Defect/Size


According to best practices, one defect per 1000 lines of code (one KLOC) is considered good;
defect density is therefore often quoted per KLOC. The size of the software or code may
alternatively be expressed in Function Points (FP).

Steps to calculate Defect Density:


Collect the total defects detected during the software development process.

Calculate Defect Density = Total Defects / Size (for example, per KLOC).


Let's understand it with an example −

Let's say your software comes with five integrated modules.

 Module 1 = 5 bugs

 Module 2= 10 bugs

 Module 3= 20 bugs

 Module 4= 15 bugs
 Module 5= 5 bugs

 Total bugs = 5+10+20+15+5= 55

Now total line of code for each module is

 Module 1= 500 LOC

 Module 2= 1000 LOC

 Module 3= 1500 LOC

 Module 4= 1500 LOC

 Module 5= 1000 LOC

Total Line of Code = 500 + 1000 + 1500 + 1500 + 1000 = 5500

Defect Density = 55/5500 = 0.01 defects/LOC or 10 defects/KLOC
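
The same calculation can be scripted; the short Python sketch below simply mirrors the worked example above:

# Defect density sketch mirroring the worked example above.
def defect_density(total_defects, total_loc):
    return total_defects / total_loc              # defects per line of code


module_defects = [5, 10, 20, 15, 5]               # bugs found in modules 1 to 5
module_loc = [500, 1000, 1500, 1500, 1000]        # lines of code per module

density = defect_density(sum(module_defects), sum(module_loc))
print(density)           # 0.01 defects/LOC
print(density * 1000)    # 10.0 defects/KLOC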

Uses of Defect Density:


Defect density is considered an industry standard for software and its component
development. It comprises a development process to calculate the number of defects
allowing developers to determine the weak areas that require robust testing.

Organizations also use defect density before releasing a product and to compare releases
in terms of performance, security, quality, scalability, etc. Once defects
are tracked, developers start to make changes to reduce those defects. The defect
density process helps developers to determine how such a reduction affects the software
quality-wise.

The use of defect density is considerable in many ways. Once developers
have established common defect densities, they can use this model to predict the remaining defects. Using
this method, developers can build a database of common defect densities to
determine the productivity and quality of the product.

Factors Affecting Defect Density Metrics:


As we know, defect density is measured by dividing total defects by the
size of the software. The goal is not about detecting the defects but to
detect defects that actually matter. Therefore, it's crucial to understand the
factors that result in an efficient outcome. Developers and the testing team
need to arrange all the necessary conditions before initiating this process.
This helps developers trace the affected areas properly, allowing them to
achieve highly accurate results.
Factors that affect defect density are −
 Types of defects
 Criticality and complexity of the code used
 Skills of the development and testing teams
 Time allocated to calculate the defect density
Above all, the efficiency and performance of the software remain the
biggest factor that affects the defect density process.

8.Explain the different types of Test defect metrics under Progress metrics
based on what they measure and what area they focus on.
The test progress metrics discussed in the previous section capture the progress of defects found with
time. The next set of metrics help us understand how the defects that are found can be used to
improve testing and product quality. Not all defects are equal in impact or importance. Some
organizations classify defects by assigning a defect priority (for example, P1, P2, P3, and so on). The
priority of a defect provides a management perspective for the order of defect fixes. For example, a
defect with priority P1 indicates that it should be fixed before another defect with priority P2. Some
organizations use defect severity levels (for example, S1, S2, S3, and so on). The severity of defects
provides the test team a perspective of the impact of that defect in product functionality. For example,
a defect with severity level S1 means that either the major functionality is not working or the
software is crashing. S2 may mean a failure or functionality not working. A sample of what different
priorities and severities mean is given in Table 17.3. From the above example it is clear that priority
is a management perspective and priority levels are relative. This means that the priority of a defect
can change dynamically once assigned. Severity is absolute and does not change often as they reflect
the state and quality of the product. Some organizations use a combination of priority and severity to
classify the defects.

Priority – What it means
1 – Fix the defect on highest priority; fix it before the next build
2 – Fix the defect on high priority before next test cycle
3 – Fix the defect on moderate priority when time permits, before the release
4 – Postpone this defect for next release or live with this defect

Severity – What it means
1 – The basic product functionality failing or product crashes
2 – Unexpected error condition or a functionality not working
3 – A minor functionality is failing or behaves differently than expected
4 – Cosmetic issue and no impact on the users
Since different organization use different methods of defining priorities and severities, a common set
of defect definitions and classification are provided in Table 17.4 to take care of both priority and
severity levels. We will adhere to this classification consistently in this chapter

Defect classification – What it means

Extreme – Product crashes or unusable; needs to be fixed immediately

Critical – Basic functionality of the product not working; needs to be fixed before the next test cycle starts

Important – Extended functionality of the product not working; does not affect the progress of testing; fix it before the release

Minor – Product behaves differently; no impact on the test team or customers; fix it when time permits

Cosmetic – Minor irritant; need not be fixed for this release

9. Explain the various generations of automation and the required skills for
each.

There are different "Generations of Automation." The skills required for automation depends on what
generation of automation the company is in or desires to be in the near future.
The automation of testing is broadly classified into three generations.
First generation—Record and Playback: Record and playback avoids the repetitive nature of
executing tests. Almost all the test tools available in the market have the record and playback feature.
A test engineer records the sequence of actions by keyboard characters or mouse clicks, and those
recorded scripts are played back later, in the same order as they were recorded. Since a recorded
script can be played back multiple times, it reduces the tedium of the testing function. Besides
avoiding repetitive work, it is also simple to record and save the script. But this generation of tool has
several disadvantages. The scripts may contain hard-coded values, thereby making it difficult to
perform general types of tests. For example, when a report has to use the current date and time, it
becomes difficult to use a recorded script. Handling error conditions is left to the testers, and thus
the played-back scripts may require a lot of manual intervention to detect and correct error conditions.
When the application changes, all the scripts have to be re-recorded, thereby increasing the test
maintenance costs. Thus, when there is frequent change or when there is not much opportunity to
reuse or re-run the tests, the record and playback generation of test automation tools may not be very
effective.
Second generation—Data-driven: This method helps in developing test scripts that generate the set
of input conditions and corresponding expected output. This enables the tests to be repeated for
different input and output conditions. The approach can take as much time and effort as developing
the product itself. However, changes to the application do not require the automated test cases to be
changed as long as the input conditions and expected output are still valid. This generation of
automation focuses on input and output conditions using the black box testing approach.
Automation bridges the gap in skills requirement between testing and development; at times it
demands more skills for test teams.
Third generation—Action-driven: This technique enables a layman to create automated tests. There
are no input and expected output conditions required for running the tests. All actions that appear on
the application are automatically tested, based on a generic set of controls defined for automation.
The set of actions is represented as objects and those objects are reused. The user needs to specify
only the operations (such as
log in, download, and so on) and everything else that is needed for those actions are automatically
generated. The input and output conditions are automatically generated and used. The scenarios for
test execution can be dynamically changed using the test framework that is available in this approach
of automation. Hence, automation in the third generation involves two major aspects—"test case
automation” and “framework design.” We will see the details of framework design in the next
section.
From the above approaches/generations of automation, it is clear that different levels of skills are
needed based on the generation of automation selected. The skills needed for automation are
classified into four levels across the three generations, as the third generation of automation
introduces two levels of skills: one for the development of test cases and one for the design of the framework.

10. What are metrics and measurements? Illustrate the types of product
metrics

Software Measurement: A measurement is a manifestation of the size, quantity, amount or dimension of
a particular attribute of a product or process. Software measurement is a quantified attribute of a characteristic
of a software product or the software process. It is a discipline within software engineering. The software
measurement process is defined and governed by an ISO standard.

Need of Software Measurement:


Software is measured to:

1. Assess the quality of the current product or process.

2. Anticipate future qualities of the product or process.
3. Enhance the quality of a product or process.
4. Regulate the state of the project in relation to budget and schedule.
Classification of Software Measurement:
There are 2 types of software measurement:

1. Direct Measurement:
In direct measurement the product, process or thing is measured directly using standard scale.
2. Indirect Measurement:
In indirect measurement the quantity or quality to be measured is measured using related
parameter i.e. by use of reference.

Metrics:
A metric is a measurement of the degree to which a system, product, or process possesses a given
attribute. There are 4 functions related to software metrics:

1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of software Metrics:

1. Quantitative:
Metrics must possess a quantitative nature. It means metrics can be expressed in values.
2. Understandable:
Metric computation should be easily understood; the method of computing a metric should be
clearly defined.
3. Applicability:
Metrics should be applicable in the initial phases of development of the software.
4. Repeatable:
The metric values should be the same when measured repeatedly and consistent in nature.
5. Economical:
Computation of metrics should be economical.
6. Language Independent:
Metrics should not depend on any programming language.

Classification of Software Metrics:


There are 3 types of software metrics:

1. Product Metrics:
Product metrics are used to evaluate the state of the product, tracing risks and uncovering
prospective problem areas. The ability of the team to control quality is evaluated.
2. Process Metrics:
Process metrics pay particular attention on enhancing the long term process of the team or
organization.
3. Project Metrics:
Project metrics describe the project characteristics and execution process, for example:
 Number of software developers
 Staffing pattern over the life cycle of the software
 Cost and schedule

11. What is the purpose of progress metrics? Describe in detail.


Any project needs to be tracked from two angles. One, how well the project is doing with respect to effort
and schedule. This is the angle we have been looking at so far in this chapter. The other equally important
angle is to find out how well the product is meeting the quality requirements for the release. There is no
point in producing a release on time and within the effort estimate but with a lot of defects, causing the
product to be unusable. One of the main objectives of testing is to find as many defects as possible before
any customer finds them. The number of defects that are found in the product is one of the main
indicators of quality. Hence in this section, we will look at progress metrics that reflect the defects (and
hence the quality) of a product.

Defects get detected by the testing team and get fixed by the development team. In line with this thought,
defect metrics are further classified in to test defect metrics (which help the testing team in analysis of
product quality and testing) and development defect metrics (which help the development team in analysis
of development activities).

How many defects have already been found and how many more defects may get unearthed are two
parameters that determine product quality and its assessment. For this assessment, the progress of testing
has to be understood. If only 50% of testing is complete and if 100 defects are found, then, assuming that
the defects are uniformly distributed over the product (and keeping all other parameters same), another
80–100 defects can be estimated as residual defects. Figure 17.6 shows testing progress by plotting the
test execution status and the outcome.

The progress chart gives the pass rate and fail rate of executed test cases, pending test cases, and test
cases that are waiting for defects to be fixed. Representing testing progress in this manner makes it
easy to understand the status and to carry out further analysis. In Figure 17.6, (coloured figure is available on
Illustrations) the “not run” cases reduce in number as the weeks progress, meaning that more tests are
being run. Another perspective from the chart is that the pass percentage increases and fail percentage
decreases, showing the positive progress of testing and product quality. The defects that are blocking the
execution of certain test cases also get reduced in number as weeks progress in the above chart. Hence, a
scenario represented by such a progress chart shows that not only is testing progressing well, but also that
the product quality is improving (which in turn means that the testing is effective). If, on the other hand,
the chart had shown a trend that as the weeks progress, the “not run” cases are not reducing in number, or
“blocked” cases are increasing in number, or “pass” cases are not increasing, then it would clearly point
to quality problems in the product that prevent the product from being ready for release.

12. Describe about the various components of Test automation

Components of test automation infrastructure:


 A test automation infrastructure, or framework, consists of test tools, equipment, test scripts,
procedures, and people needed to make test automation efficient and effective.
 The creation and maintenance of test automation framework are key to the success of any test
automation project within an organization.
 The idea behind an automation infrastructure is to ensure the following:

1. Different test tools and equipment are coordinated to work together.


2. The library of the existing test case scripts can be reused for different test projects, thus
minimizing the duplication of development effort.
3. Nobody creates test scripts in their own way.
4. Consistency is maintained across test scripts.
5. The test suite automation process is coordinated such that it is available just in time for regression
testing
6. People understand their responsibilities in automated testing.

 The six components of test automation infrastructure are as follows:

1. System to be tested :

o This is the first component of an automation infrastructure. The subsystem of the system
to be tested must be stable; otherwise, test automation will not be cost-effective.

2. Test Platform :

o The test platform and facilities, that is, the network setup, on which the system will be
tested, must be in place to carry out the test automation project.
o For example, configuration management utilities, servers, clients, routers and switches
and hubs are necessary to set up the automation environment to execute the test scripts.
3. Test Case Library :

o It is useful to compile libraries of reusable test steps and basic utilities to be used as the
building blocks of automated test scripts (a minimal sketch of such a library is given after this list).
o Each utility typically performs a distinct task to assist the automation of test cases.
o Examples of such utilities are ssh (secure shell) from client to server, response capture,
error logging, clean-up and setup.

4. Tools :

o Different types of tools are required for the development of test scripts.
o Examples of such tools are test automation tools, traffic generation tool, traffic
monitoring tool and support tool.
o The support tools include test factory, requirement analysis, defect tracking, and
configuration management tools.
o Integration of test automation and support tools is critical for the automatic reporting of
defects for failed test cases.
o Similarly, the test factory tool can generate automated test execution trends and result
patterns.

5. Automated Testing Practices :

o The procedures describing how to automate test cases using test tools and test case
libraries must be documented.
o A template of an automated test case is useful in order to have consistency across all the
automated test cases developed by different engineers.
o A list of all the utilities and guidelines for using them will enable us to have better
efficiency in test automation.
o In addition, the maintenance procedure for the library must be documented.

6. Administrator :

o The automation framework administrator (i) manages test case libraries, test platforms
and test tools (ii) maintains the inventory of templates, (iii)provides tutorials, (iv) helps
test engineers in writing test scripts using the test case libraries.
o In addition the administrator provides tutorial assistance to the users of test tools and
maintains a liaison with the tool vendors and the users.
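
A minimal sketch of a test case library of reusable building blocks (component 3 above) is given below; the utility names such as setup_environment, capture_response and cleanup are illustrative:

# Sketch of a small test case library of reusable building blocks.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("test-library")


def setup_environment(config):
    # Reusable setup step shared by many automated test scripts.
    log.info("setting up environment: %s", config)
    return {"ready": True, **config}


def capture_response(action, *args):
    # Run an action against the system under test and capture its outcome,
    # logging any error so failed steps are reported consistently.
    try:
        return {"status": "pass", "value": action(*args)}
    except Exception as error:
        log.error("step failed: %s", error)
        return {"status": "fail", "value": None}


def cleanup(environment):
    # Reusable clean-up step so every script leaves the test platform consistent.
    log.info("cleaning up environment")
    environment.clear()


if __name__ == "__main__":
    env = setup_environment({"server": "test-platform-01"})
    result = capture_response(lambda: "login ok")
    cleanup(env)
    print(result)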

13. Write short notes on following.

(a) Classifications of automation testing

Classifications of automation testing:

Functional testing
Functional testing assesses the software against the set functional requirements/specifications. It
focuses on what the application does and mainly involves black box testing.

Black box testing is also known as behavioral testing and involves testing functionality of elements
without delving into its inner workings. This means that the tester is completely unaware of the
structure or design of the item being tested.

Functional testing focuses primarily on testing the main functions of the system, its basic usability,
its accessibility to users, and the like. Unit testing, integration testing, smoke testing, and user
acceptance testing are all examples of functional testing.

Unit testing
Unit testing involves running tests on individual components or functions in isolation to verify that
they are working as required. It is typically done in the development phase of the application and is
therefore often the first type of automated testing done on an application.

Unit testing is usually performed by the developer and always comes before integration testing.

Unit tests are extremely beneficial because they help identify bugs early in the development phase,
keeping the cost of fixing them as low as possible.
Unit-testing techniques can be broken down into three broad categories:
 Black box testing: This involves testing the unit through its inputs and outputs, including the UI.
 White box testing: This tests the internal structure and behavior of the unit.
 Gray box testing: This combines both views and involves executing test cases and test suites and performing risk
analysis.
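
A minimal sketch of a unit test is shown below (assuming pytest with unittest.mock; the pricing function and service are illustrative). The dependency is replaced by a test double so the unit is verified in isolation:

# Unit test sketch: the dependency is replaced with a test double so the unit
# is verified in isolation.
from unittest.mock import Mock


def get_discounted_price(price_service, product_id, discount):
    # Unit under test: combines a dependency's result with local logic.
    price = price_service.get_price(product_id)
    return round(price * (1 - discount), 2)


def test_discount_is_applied_in_isolation():
    fake_service = Mock()                         # stub replaces the real service
    fake_service.get_price.return_value = 200.0

    assert get_discounted_price(fake_service, "sku-1", 0.10) == 180.0
    fake_service.get_price.assert_called_once_with("sku-1")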

Integration testing
Integration testing involves testing all the various units of the application in unity. It focuses on
evaluating whether the system as a whole complies with the functional requirements set for it.

Integration testing works by studying how the different modules interact with each other when
brought together.

Integration testing typically follows unit testing and helps ensure seamless interaction between the
various functions to facilitate a smooth functioning software as a whole.

There are various approaches to integration testing such as the Big Bang Approach, the Top-Down
Approach, the Bottom-Up Approach, and the Sandwich Approach.

Non-functional testing
This testing encompasses testing all the various non-functional elements of an application such as
performance, reliability, usability, etc.

It is different from functional testing in that it focuses on not what the product does but how well it
does it.

Typically, non-functional testing follows functional testing because it is only logical to know that the
product does what it is supposed to before investigating how well it does it.

Some of the most common types of non-functional testing include performance testing, reliability
testing, security testing, load testing, scalability testing, compatibility testing, etc.

(b) Scope of automation testing
As technology advances at a rapid pace, the demand for completing projects faster has
increased like never before. To complete projects quickly, the overall procedures followed during a
product life cycle need to be accelerated as well. In the area of software testing, automation
can be implemented to save cost and time, but only when it is used for time-consuming tasks. For
regression testing and large-scale testing, automated testing is the best approach and tends to be a good choice.

There are various vital benefits of test automation: it increases the product quality, reduces
manual testing tasks and removes redundant testing efforts, makes software tests more systematic and
repeatable, minimizes monotonous work, and produces more reliable testing results with
higher consistency.

The scope of automation means the area of your Application under Test that will be automated.
Make sure you have walked through and know exactly your team's test state, the amount of test
data, and also the environment where tests take place. The following are additional tips to help you
determine the scope:

 Technical feasibility
 The complexity of test cases
 The features or functions that are significant for the business
 The degree to which business components are reused
 The capability to use the same test cases for cross-browser testing

14. Outline project, product and productivity metrics with relevant examples.
PART – C

1. a) Explain the design and architecture for automation

An automation framework is a platform developed by integrating various hardware and
software resources and various tools and services based on a qualified set of
assumptions. It enables efficient design and development of automated test scripts and
reliable analysis of issues for the system under test.
An automation framework is primarily designed to do the following:
 Enhance efficiency during the design and development of automated test scripts by
enabling the reuse of components or code
 Provide a structured development methodology to ensure uniformity of design across
multiple test scripts to reduce dependency on individual test-case developers
 Provide reliable issue detection and efficient root-cause analysis with minimum human
intervention for the system under test
 Reduce dependence on subject matter experts by automatically selecting the test to execute
according to test scenarios and dynamically refining the test scope according to changes in
the test strategy or conditions of the system under test
 Improve the utilization of various resources in each cycle to enable maximum returns on
effort and also ensure an uninterrupted automated testing process with minimum human
intervention

The Automation Framework Design Challenge: Balance Quality, Time, and Resources
The challenge is to build a fit-for-purpose automation framework that is capable of keeping
up with quickly changing automation testing technologies and changes in the system under
test. The challenge is accentuated by the various combinations that are possible using the
wide gamut of available automation tools. Making the right choices in the preliminary design
stage is the most critical step of the process, since this can be the differentiator between a
successful framework and failed investment.
As if this were not tough enough, add to this the even more formidable challenge of
balancing the quality of the framework against the desired utility and the need to develop the
framework within a stipulated timeframe using available resources to ensure the economic
viability of the solution. Therefore, it is very important to benchmark the framework, the
associated development time, and the required resources to ensure the framework's quality
justifies the use of the framework.

B) List and discuss ,how the metrics that can be used for defect prevention.

Defect Prevention is basically defined as a set of measures to ensure that the defects detected so
far do not appear or occur again. A coordinator is mainly responsible for facilitating
communication among the members of the team and for planning and devising defect
prevention guidelines.
The coordinator leads the defect prevention efforts, facilitates meetings, and facilitates
communication between team members and management. The DP board generally
has a quarterly plan in which it sets some goals at the organization level. To achieve these
goals, various methods and activities are carried out.
Methods of Defect Prevention :
For defect prevention, there are different methods that are generally used over a long period
of time. These methods or activities are given below :

1. Software Requirement Analysis :


The main cause of defects in software products is errors in software requirements
and design. Software requirements and design are both important and should be
analyzed in an efficient way and with focus. Software requirements are
considered an integral part of the Software Development Life Cycle (SDLC). These are the
requirements that describe the features and functionalities of the target product and
convey the expectations and requirements of users of the software product.
Therefore, it is very much needed to understand the software requirements
carefully; if the requirements are not understood well by testers and developers, then there
is a chance of issues or defects occurring in the subsequent process. Therefore, it is
essential to analyze and evaluate requirements in an appropriate and proper manner.
2. Review and Inspection :
Review and inspection are both essential and integral parts of software development.
They are considered powerful tools that can be used to identify and remove defects, if
present, before they reach and impact production. Review and inspection are applied at
different levels or stages of defect prevention to meet different needs. They are used
in all software development and maintenance methods. There are two types of review,
i.e. self-review and peer-review.
3. Defect Logging and Documentation :
After successful analysis and review, records should be maintained about defects,
with a complete description of each defect. These records can be further used to gain a better
understanding of defects. Only after gaining knowledge and understanding of a defect can
one take effective and appropriate measures and actions to resolve that particular
defect, so that the defect is not carried forward to the next phase.
4. Root Cause Analysis :
Root cause analysis is basically the analysis of the main cause of a defect. It simply analyses
what triggered the defect to occur. After analyzing the main cause of a defect, one can find
the best way to avoid the occurrence of such types of defects next time.

2. (a) List the requirements for test tool. Explain any five requirements with a suitable
example.

Software testing tools are required for the betterment of the application or software.
That is why there are so many tools available in the market, some open-source and some
paid.
The significant difference between open-source and paid tools is that open-source tools
have limited features, whereas paid or commercial tools have no limitation on features.
The selection of tools depends on the user's requirements, whether paid or free.
The software testing
tools can be categorized depending on the licensing (paid or commercial, open-source),
technology usage, type of testing, and so on.
With the help of testing tools, we can improve software performance, deliver a high-
quality product, and reduce the duration of testing that is spent on manual effort.
The software testing tools can be divided into the following:
o Test management tool
o Bug tracking tool
o Automated testing tool
o Performance testing tool
o Cross-browser testing tool
o Integration testing tool
o Unit testing tool
o Mobile/android testing tool
o GUI testing tool
o Security testing tool
Test management tool
Test management tools are used to keep track of all the testing activity, fast data analysis,
manage manual and automation test cases, various environments, and plan and
maintain manual testing
as well.

Bug tracking tool


The defect tracking tool is used to keep track of bug fixes and ensure the delivery of a
quality product. This tool helps us to find bugs in the testing stage so that we get
defect-free software on the production server. With the help of these tools, end-users can
report bugs and issues directly from their applications.
Automation testing tool
This type of tool is used to enhance the productivity of the product and improve the accuracy.
We can reduce the time and cost of the application by writing some test scripts in any
programming language.

Performance testing tool


Performance or load testing tools are used to check the load, stability, and scalability of the
application. When a large number of users use the application at the same time, the
application may crash because of the immense load; to get through this type of issue, we
need load testing tools.

Cross-browser testing tool
This type of tool is used when we need to compare a web application across various web
browser platforms. It is an important part of developing a web project. With the help of
these tools, we can ensure consistent behavior of the application across multiple devices,
browsers, and platforms.

Integration testing tool
This type of tool is used to test the interfaces between modules and to find critical bugs
that arise from interactions between different modules, ensuring that all the modules work
together as per the client's requirements.
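As a small illustration of this idea (the module and function names below are hypothetical), an integration test exercises two modules together through their interface rather than in isolation:

import unittest

# Hypothetical modules of the system under test, shown inline so the sketch is
# self-contained; in practice they would be imported from the application code.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)
    def consume(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError(f"not enough {item}")
        self.stock[item] -= qty

class OrderProcessor:
    def __init__(self, inventory):
        self.inventory = inventory
    def place_order(self, item, qty):
        self.inventory.consume(item, qty)   # the interface between the two modules
        return f"{qty} x {item} sent to kitchen"

class OrderInventoryIntegrationTest(unittest.TestCase):
    def test_order_reduces_stock(self):
        inv = Inventory({"burger": 10})
        result = OrderProcessor(inv).place_order("burger", 3)
        self.assertEqual(inv.stock["burger"], 7)
        self.assertIn("sent to kitchen", result)

if __name__ == "__main__":
    unittest.main()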
Unit testing tool
This type of tool helps programmers improve their code quality; with the help of these
tools they can reduce coding and debugging time and the overall cost of the software.
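For example, a unit testing framework such as Python's built-in unittest runs small, isolated checks against individual functions; the make_change function here is purely illustrative:

import unittest

def make_change(amount_due, amount_paid):
    """Return the change owed, raising an error if the payment is insufficient."""
    if amount_paid < amount_due:
        raise ValueError("insufficient payment")
    return round(amount_paid - amount_due, 2)

class MakeChangeTests(unittest.TestCase):
    def test_exact_payment(self):
        self.assertEqual(make_change(7.50, 7.50), 0)

    def test_change_returned(self):
        self.assertEqual(make_change(7.50, 10.00), 2.50)

    def test_underpayment_rejected(self):
        with self.assertRaises(ValueError):
            make_change(7.50, 5.00)

if __name__ == "__main__":
    unittest.main()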
Mobile/android testing tool
We can use this type of tool when we are testing any mobile application. Some of the tools
are open-source, and some are licensed. Each tool has its own functionality and features.

GUI testing tool
A GUI testing tool is used to test the user interface of the application, because a proper
GUI (graphical user interface) is always useful to grab the user's attention. These types
of tools help to find loopholes in the application's design and make it better.
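A common way to drive GUI tests for web applications is a browser automation library such as Selenium WebDriver. The sketch below is only a minimal illustration; the page URL and element IDs are assumptions, and a matching browser must be installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical login page; the element IDs are assumptions for illustration.
driver = webdriver.Chrome()            # requires Chrome and its driver to be available
try:
    driver.get("http://localhost:8000/login")
    driver.find_element(By.ID, "username").send_keys("waitperson1")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # A GUI test asserts on what the user actually sees after the action.
    assert "Orders" in driver.title, "login did not lead to the orders screen"
    print("GUI login flow passed")
finally:
    driver.quit()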

Security testing tool
The security testing tool is used to ensure the security of the software and to check for
security leakage. If any security loophole exists, it can be fixed at an early stage of the
product. We need this type of tool to verify that security-sensitive data and functions
cannot be accessed by unauthorized users.
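One small slice of what security testing tools automate is checking that protected resources reject unauthorized access. The sketch below uses only the Python standard library; the endpoint and the expected status codes are assumptions for illustration.

import urllib.request
import urllib.error

PROTECTED_URL = "http://localhost:8000/admin/inventory"   # hypothetical protected endpoint

def rejects_anonymous_access(url):
    """Return True if the endpoint refuses requests that carry no credentials."""
    try:
        with urllib.request.urlopen(url, timeout=5):
            return False                 # 2xx response: resource was served, check fails
    except urllib.error.HTTPError as err:
        return err.code in (401, 403)    # unauthorized / forbidden is what we expect
    except urllib.error.URLError:
        return False                     # server unreachable: inconclusive, treat as failure

if __name__ == "__main__":
    assert rejects_anonymous_access(PROTECTED_URL), "security hole: anonymous access allowed"
    print("unauthorized access is correctly rejected")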

(b) Explain the components of review plans

Components of review plans

Reviews are development and maintenance activities that require time and resources. They
should be planned so that there is a place for them in the project schedule. An organization
should develop a review plan template that can be applied to all software projects. The
template should specify the following items for inclusion in the review plan.

• review goals;

• items being reviewed;

• preconditions for the review;


• roles, team size, participants;

• training requirements;
• review steps;

• checklists and other related documents to be distributed to participants;

• time requirements;

• the nature of the review log and summary report;

• rework and follow-up.

We will now explore each of these items in more detail.


Review Goals

As in the test plan or any other type of plan, the review planner should specify the goals to be
accomplished by the review. Some general review goals have been stated in Section 9.0 and
include (i) identification of problem components or components in the software artifact that
need improvement, (ii) identification of specific errors or defects in the software artifact, (iii)
ensuring that the artifact conforms to organizational standards, and (iv) communication to the
staff about the nature of the product being developed. Additional goals might be to establish
traceability with other project documents, and familiarization with the item being reviewed.
Goals for inspections and walkthroughs are usually different; those of walkthroughs are more
limited in scope and are usually confined to identification of defects.
Preconditions and Items to Be Reviewed

Given the principal goals of a technical review, namely early defect detection,
identification of problem areas, and familiarization with software artifacts, many software
items are candidates for review. In many organizations the items selected for review include:

• requirements documents;

• design documents;

• code;
• test plans (for the multiple levels);

• user manuals;
• training manuals;


• standards documents.

Note that many of these items represent a deliverable of a major life cycle phase. In fact,
many represent project milestones, and the review serves as a marker of project progress.
Before each of these items is reviewed, certain preconditions usually have to be
met. For example, before a code review is held, the code may have to undergo a successful
compile. The preconditions need to be described in the review policy statement and specified
in the review plan for an item. General preconditions for a review are:

(i) the review of an item(s) is a required activity in the project plan.


(Unplanned reviews are also possible at the request of management, SQA or software
engineers. Review policy statements should include the conditions for holding an
unplanned review.)
(ii) a statement of objectives for the review has been developed;

(iii) the individuals responsible for developing the reviewed item indicate
readiness for the review;

(iv) the review leader believes that the item to be reviewed is sufficiently complete for the
review to be useful [8].
The review planner must also keep in mind that a given item to be reviewed may be too large
and complex for a single review meeting. The smart planner partitions the review item into
components that are of a size and complexity that allows them to be reviewed in 1-2 hours.
This is the time range in which most reviewers have maximum effectiveness. For example,
the design document for a procedure-oriented system may be reviewed in parts that
encompass:

(i) the overall architectural design;

(ii) data items and module interface design;

(iii) component design.

If the architectural design is complex and/or the number of components is large, then multiple
design review sessions should be scheduled for each. The project plan should have time
allocated for this.

3. Assume you are working on an on-line fast food restaurant system. The system reads
customer orders, relays orders to the kitchen, calculates the customer's bill and gives
change. It also maintains inventory information. Each wait person has a terminal. Only
authorized wait persons and a system administrator can access the system. Describe the
tests that are suitable to test the application.

4. (a) Explain the five stop test criteria that are based on quantitative approach.

In the test plan the test manager describes the items to be tested, test cases, tools needed,
scheduled activities, and assigned responsibilities. As the testing effort progresses many
factors impact on planned testing schedules and tasks in both positive and negative ways. For
example, although a certain number of test cases were specified, additional tests may be
required. This may be due to changes in requirements, failure to achieve coverage goals, or
unexpectedly high numbers of defects in critical modules. Other unplanned events that impact
on test schedules are, for example, laboratories that were supposed to be available are not
(perhaps because of equipment failures) or testers who were assigned responsibilities are
absent (perhaps because of illness or assignment to other higher-priority projects). Given
these events and uncertainties, test progress often does not follow the plan. Test managers
and staff should do their best to take actions to get the testing effort back on track. In any event,
whether progress is smooth or bumpy, at some point every project and test manager has to
make the decision on when to stop testing. Since it is not possible to determine with certainty
that all defects have been identified, the decision to stop testing always carries risks. If we
stop testing now, we do save resources and are able to deliver the software to our clients.
However, there may be remaining defects that will cause catastrophic failures, so if we stop
now we will not find them. As a consequence, clients may be unhappy with our software and
may not want to do business with us in the future. Even worse there is the risk that they may
take legal action against us for damages. On the other hand, if we continue to test, perhaps
there are no defects that cause failures of a high severity level. Therefore, we are wasting
resources and risking our position in the market place. Part of the task of monitoring and
controlling the testing effort is making this decision about when testing is complete under
conditions of uncertainty and risk. Managers should not have to use guesswork to make this
critical decision. The test plan should have a set of quantifiable stop-test criteria to support
decision making. The weakest stop test decision criterion is to stop testing when the project
runs out of time and resources. TMM level 1 organizations often operate this way and risk
client dissatisfaction for many projects. TMM level 2 organizations plan for testing and
include stop-test criteria in the test plan. They have very basic measurements in place to
support management when they need to make this decision. Shown in Figure 9.6 and
described below are five stop-test criteria that are based on a more quantitative approach. No
one criteria is recommended. In fact, managers should use a combination of criteria and
cross-checking for better results. The stop-test criteria are as follows.

1. All the Planned Tests That Were Developed Have Been Executed and Passed.

This may be the weakest criterion. It does not take into account the actual dynamics of the
testing effort, for example, the types of defects found and their level of severity. Clues from
analysis of the test cases and defects found may indicate that there are more defects in the
code that the planned test cases have not uncovered. These may be ignored by the testers if
this stop-test criterion is used in isolation.

2 . All Specified Coverage Goals Have Been Met.

An organization can stop testing when it meets its coverage goals as specified in the test plan.
For example, using white box coverage goals we can say that we have completed unit test
when we have reached 100% branch coverage for all units. Using another coverage category,
we can say we have completed system testing when all the requirements have been covered
by our tests. The graphs prepared for the weekly status meetings can be applied here to show
progress and to extrapolate to a completion date. The graphs will show the growth in the
degree of coverage over time.
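As a simple illustration of how this criterion could be checked mechanically (the coverage figures and the 100% goal below are invented example values), a script can compare measured coverage against the goals stated in the test plan:

# Hypothetical per-unit branch coverage figures, e.g. exported from a coverage tool.
branch_coverage = {
    "order_entry": 100.0,
    "billing": 96.5,
    "inventory": 100.0,
}

COVERAGE_GOAL = 100.0   # goal taken from the test plan (example value)

def coverage_goal_met(measured, goal):
    """Stop-test check: every unit must meet or exceed the coverage goal."""
    return all(value >= goal for value in measured.values())

if coverage_goal_met(branch_coverage, COVERAGE_GOAL):
    print("Coverage-based stop-test criterion satisfied")
else:
    shortfall = {u: v for u, v in branch_coverage.items() if v < COVERAGE_GOAL}
    print("Keep testing; units below goal:", shortfall)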
3 . The Detection of a Specific Number of Defects Has Been Accomplished.

This approach requires defect data from past releases or similar projects. The defect
distribution and total defect count are known for these projects, and are applied to make estimates of
the number and types of defects for the current project. Using this type of data is very risky,
since it assumes the current software will be built, tested, and behave like the past projects.
This is not always true. Many projects and their development environments are not as similar
as believed, and making this assumption could be disastrous. Therefore, using this stop-
criterion on its own carries high risks.
4 . The Rates of Defect Detection for a Certain Time Period Have Fallen Below a
Specified Level.

The manager can use graphs that plot the number of defects detected per unit time. A graph
such as Figure 9.5, augmented with the severity level of the defects found, is useful. When
the rate of detection of defects of a severity rating under some specified threshold value falls
below that rate threshold, testing can be stopped. For example, a stop-test criterion could be
stated as: "We stop testing when we find 5 defects or less, with impact equal to, or below,
severity level 3, per week." Selecting a defect detection rate threshold can be based on data
from past projects.
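The sketch below shows how such a rate-based rule could be evaluated from a weekly defect log. The data are invented, and the severity numbering convention (a higher number means lower impact) is an assumption; the threshold of 5 defects per week follows the example above.

# Each tuple is (week number, severity level) for a detected defect; data are invented.
# Assumption: higher severity numbers mean lower impact, so "impact at or below
# severity level 3" is taken here as severity level >= 3. Adjust to the local convention.
defect_log = [
    (11, 2), (11, 3), (12, 3), (12, 4), (12, 5), (13, 3), (13, 4), (13, 5),
]

RATE_THRESHOLD = 5   # stop-test rule from the example: 5 or fewer such defects per week

def low_impact_rate(log, week):
    return sum(1 for w, sev in log if w == week and sev >= 3)

latest_week = max(week for week, _ in defect_log)
rate = low_impact_rate(defect_log, latest_week)
print(f"Week {latest_week}: {rate} low-impact defects found")
print("Stop-test criterion met" if rate <= RATE_THRESHOLD else "Continue testing")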

5 . Fault Seeding Ratios Are Favorable.


Fault (defect) seeding is an interesting technique first proposed by Mills [10]. The technique
is based on intentionally inserting a known set of defects into a program. This provides
support for a stop-test decision. It is assumed that the inserted set of defects are typical
defects; that is, they are of the same type, occur at the same frequency, and have the same
impact as the actual defects in the code. One way of selecting such a set of defects is to use
historical defect data from past releases or similar projects.
The technique works as follows. Several members of the test team insert (or seed) the code
under test with a known set of defects. The other members of the team test the code to try to
reveal as many of the defects as possible. The number of undetected seeded defects gives an
indication of the number of total defects remaining in the code (seeded plus actual). A ratio
can be set up as follows:

Detected seeded defects / Total seeded defects = Detected actual defects / Total actual defects

Using this ratio we can say, for example, if the code was seeded with 100 defects and 50 have
been found by the test team, it is likely that 50% of the actual defects still remain and the
testing effort should continue. When all the seeded defects are found the manager has some
confidence that the test efforts have been completed.
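A small sketch of the arithmetic behind this criterion, generalizing the example above (the count of 40 actual defects found so far is an invented example value):

def estimate_total_actual_defects(seeded_total, seeded_found, actual_found):
    """Estimate total actual defects from the fault-seeding ratio.

    Uses: detected seeded / total seeded = detected actual / total actual.
    """
    if seeded_found == 0:
        raise ValueError("no seeded defects found yet; the estimate is undefined")
    return actual_found * seeded_total / seeded_found

# Example: 100 defects seeded, 50 of them found, and 40 actual defects found so far.
total_estimate = estimate_total_actual_defects(100, 50, 40)
remaining = total_estimate - 40
print(f"Estimated actual defects: {total_estimate:.0f}, still remaining: {remaining:.0f}")
# About half of the seeded defects are still hidden, so roughly half of the actual
# defects are likely still in the code and the testing effort should continue.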

b) Narrate about the metrics/parameters to be considered for evaluating the software quality.

In software engineering, software measurement is done based on software metrics, where
these metrics are measures of various characteristics of the software.
Software Quality Assurance (SQA) assures the quality of the software; its set of activities
is applied continuously throughout the software process. Software quality is measured based
on software quality metrics.
A number of metrics are available on which software quality can be measured, but among
them a few are the most useful and essential in software quality measurement. They are –
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of code used for the software
project development. Maintaining the software code quality by writing Bug-free and
semantically correct code is very important for good software project development. For code
quality, both quantitative metrics (such as the number of lines, complexity, number of
functions, and rate of bug generation) and qualitative metrics (such as readability, code
clarity, efficiency, and maintainability) are measured.
2. Reliability – Reliability metrics express the reliability of the software under different
conditions. They check whether the software is able to provide the expected service at the
right time. Reliability can be measured using Mean Time Between Failures (MTBF) and Mean
Time To Repair (MTTR).
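As a tiny, illustrative calculation of these two measures from an operation log (the hours below are invented):

# Hypothetical operation log: alternating periods of uptime and repair, in hours.
uptime_hours = [120.0, 95.5, 210.0, 80.0]     # time between successive failures
repair_hours = [2.0, 1.5, 3.0, 2.5]           # time taken to repair each failure

mtbf = sum(uptime_hours) / len(uptime_hours)  # Mean Time Between Failures
mttr = sum(repair_hours) / len(repair_hours)  # Mean Time To Repair

# Steady-state availability is a common derived figure: MTBF / (MTBF + MTTR).
availability = mtbf / (mtbf + mttr)
print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, availability = {availability:.3%}")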
3. Performance – Performance metrics are used to measure the performance of the
software. Each software has been developed for some specific purposes. Performance
metrics measure the performance of the software by determining whether the software is
fulfilling the user requirements or not, by analyzing how much time and resource it is
utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly or not. Every
piece of software is used by an end-user, so it is important to measure whether the end-user
is satisfied when using the software.
5. Correctness – Correctness is one of the important software quality metrics, as it checks
whether the system or software works correctly, without error, and satisfies the user.
Correctness gives the degree to which each function performs as it was designed to.
6. Maintainability – Each software product requires maintenance and up-gradation.
Maintenance is an expensive and time-consuming process. So if the software product
provides easy maintainability then we can say software quality is up to mark.
Maintainability metrics include the time required to adapt to new features/functionality,
Mean Time To Change (MTTC), performance in changing environments, etc.
7. Integrity – Software integrity concerns how easily the software integrates with other
required software (which increases its functionality) and how well integration with
unauthorized software is controlled, since uncontrolled integration increases the chance
of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of cyber
terrorism, security is an essential part of every piece of software. Security assures that
there are no unauthorized changes and no fear of cyberattacks while the software product
is in use by the end-user.
