ST Answers QB
UNIT – 1
PART – A
FAULT:
A fault (defect) is introduced into the software as the result of an error. It
is an anomaly in the software that may cause it to behave incorrectly, and not
according to its specification.
FAILURE:
A failure is the inability of a software system or component to perform its
required functions within specified performance requirements.
SOURCES OF DEFECTS:
Lack of Education
Poor Communication
Oversight
Transcription
Immature Process
VERIFICATION vs. VALIDATION
Verification is a static practice of verifying documents, design, code and program; validation is a dynamic mechanism of validating and testing the actual product.
Verification does not involve executing the code; validation always involves executing the code.
Verification can catch errors that validation cannot catch and is a low-level exercise; validation can catch errors that verification cannot catch and is a high-level exercise.
Defects are assigned to four major classes reflecting their point of origin in
the software life cycle- the development phases in which they were injected.
These classes are:-
Requirements / Specifications
Design
Code
Testing
TEST ORACLE:
A test oracle is a document, or piece of software that allows testers to
determine whether a test has been passed or failed.
TEST BED:
A test bed is an environment that contains all the hardware and software
needed to test a software component or a software system.
14. Why should test cases be developed for both valid and invalid inputs?
A tester must not assume that the software under test will always be
provided with valid inputs. Inputs may be incorrect for several reasons. For
example, software users may have misunderstandings, or lack information about
the nature of the inputs.
PART – B
Testing principles are important to test specialists because they provide the
foundation for developing testing knowledge and acquiring testing skills.
They also provide guidance for defining testing activities as performed in the
practice of a test specialist. A principle can be defined as:
A general or fundamental law, doctrine, or assumption;
A rule or code of conduct;
The laws or facts of nature underlying the working of an artificial
device.
The principles stated below relate only to execution-based testing.
Principle 1:
Testing is the process of exercising a software component using a
selected set of test cases, with the intent of:
Revealing defects, and
Evaluating quality.
Software engineers have made great progress in developing methods to
prevent and eliminate defects. However, defects do occur, and they have
a negative impact on software quality. This principle supports testing as
an execution-based activity to detect defects.
The term defect, as used in this and in subsequent principles, represents any
deviation in the software that has a negative impact on its functionality,
performance, reliability, security, or any other of its specified quality
attributes.
Principle-2:
When the test objective is to detect defects, then a good test case is
one that has a high probability of revealing a yet-undetected defect.
The goal for the test is to prove/disprove a hypothesis, that is, to
determine whether the specific defect is present or absent.
A tester can justify the expenditure of the resources by careful test design
so that principle two is supported.
Principle-3:
Test result should be inspected meticulously.
Example:
A failure may be overlooked, and the test may be granted a pass status
when in reality the software has failed the test. Testing may then continue based on
erroneous test results. The defect may be revealed at some later stage of testing,
but in that case it may be more costly and difficult to locate and repair.
Principle-4:
A test case must contain the expected output or result.
Example:
A specific variable value must be observed or a certain panel button that
must light up.
Principle-5:
Test cases should be developed for both valid and invalid input
conditions.
The tester must not assume that the software under test will always be
provided with valid inputs.
Inputs may be incorrect for several reasons.
Example:
Software users may have misunderstandings, or lack information about
the nature of the inputs. They often make typographical errors even when
complete/correct information is available. Devices may also provide invalid
inputs due to erroneous conditions and malfunctions.
Principle-6:
The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in
that component.
Example:
If there are two components A and B and testers have found 20 defects in
A and 3 defects in B, then the probability of the existence of additional defects
in A is higher than B.
Principle-7:
Testing should be carried out by a group that is independent of the
development group.
Testers must realize that:
1. Developers have a great deal of pride in their work, and
2. On a practical level it may be difficult for them to conceptualize
where defects could be found.
Principle-8:
Tests must be repeatable and reusable
This principle calls for experiments in the testing domain to require
recording of the exact condition of the test, any special events that occurred,
equipment used, and a careful accounting of the results.
This information is invaluable to the developers when the code is
returned for debugging, so that they can duplicate test conditions.
Principle-9:
Testing should be planned.
A test plan should be developed for each level of testing, and the objectives
for each level should be described in the associated plan.
The objectives should be stated as quantitatively as possible; plans, with
their precisely specified objectives, help ensure that adequate time and
resources are allocated to the testing tasks.
Principle-10:
Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code
has been written.
Test planning activities into the software lifecycle starting as early as
in the requirements analysis phases, and continue on throughout the software
lifecycle in parallel with development activities.
Principle-11:
Testing is a creative and challenging task.
Cost of Defect
Organization incurs extra expenses for
Performing a wrong design based on the wrong requirements;
Transforming the wrong design into wrong code during the coding phase
Testing to make sure the product complies with the (wrong) requirements
Releasing the product with the wrong functionality
The cost of building a product and the number of defects in it increase steeply
with the number of defects allowed to seep into the later phases.
House 1:
House 2:
Findings: no evidence of bugs and no signs of an infestation.
Maybe you find a few dead bugs or old nests, but you see nothing that tells you
that live bugs exist.
Conclusion: In your search you didn't find any live bugs. Unless you completely
dismantled the house down to the foundation, you can't be sure that you didn't
simply miss them.
Software testing works exactly as the exterminator does. It can show that
bugs exist, but it can’t show that bugs don’t exist. You can perform your tests,
find and report bugs, but at no point can you guarantee that there are no longer
any bugs to find.
4. The More Bugs You Find, the More Bugs There Are
Reasons
Programmers have bad days. Like all of us, programmers can have off
days. Code written one day may be perfect; code written another may be sloppy.
Programmers often make the same mistake. Everyone has habits. A programmer
who is prone to a certain error will often repeat it.
Some bugs are really just the tip of the iceberg. Very often the software’s design
or architecture has a fundamental problem. A tester will find several bugs that at
first may seem unrelated but eventually are discovered to have one primary
serious cause.
The term defect and its relationship to the terms error and failure in the
context of the software development domain
1.Education:
The software engineer did not have the proper educational background to
prepare the software artifact. She did not understand how to do something.
For example,
a software engineer who did not understand the precedence order of
operators in a particular programming language could inject a defect in an
equation that uses the operators for a calculation.
2. Communication:
The software engineer was not informed about something by a colleague.
For example,
if engineer 1 and engineer 2 are working on interfacing modules, and
engineer 1 does not inform engineer 2 that no error-checking code will appear
in the interfacing module he is developing, engineer 2 might make an incorrect
assumption relating to the presence/absence of an
error check, and a defect will result.
3. Oversight:
The software engineer omitted to do something. For example, a software
engineer might omit an initialization statement.
4. Transcription:
The software engineer knows what to do, but makes a mistake in doing it.
A simple example is a variable name being misspelled when entering the code.
5. Process:
The process used by the software engineer misdirected her actions.
For example,
A development process that did not allow sufficient time for a detailed
specification to be developed and reviewed could lead to specification defects.
(b) Explain the various origins of defects. Explain the major classes of
defects in the software artifacts
A successful test will reveal the problem and the doctor can begin
treatment. Completing the analogy of doctor and ill patient, one could view
defective software as the ill patient. Testers as doctors need to have knowledge
about possible defects (illnesses) in order to develop defect hypotheses. They
use the hypotheses to:
design test cases;
design test procedures;
assemble test sets;
select the testing levels (unit, integration, etc.) appropriate
for the tests;
evaluate the results of the tests.
6. Short notes on
(a) Precision and accuracy.
VERIFICATION vs. VALIDATION
Verification is a static practice of verifying documents, design, code and program; validation is a dynamic mechanism of validating and testing the actual product.
Verification does not involve executing the code; validation always involves executing the code.
Verification can catch errors that validation cannot catch and is a low-level exercise; validation can catch errors that verification cannot catch and is a high-level exercise.
All the software process improvement models that have had wide
acceptance in industry are high-level models, in the sense that they focus on the
software process as a whole and do not offer adequate support to evaluate and
improve specific software development sub processes such as design and testing.
In spite of its vital role in the production of quality software, existing
process evaluation and improvement models such as the CMM, Bootstrap, and
ISO-9000 have not adequately addressed testing process issues. The Testing
Maturity Model (TMM) has been developed by a research group at the Illinois
Institute of Technology to address deficiencies in these areas.
Human errors can cause a defect or failure at any stage of the software
development lifecycle. The results are classified as trivial or catastrophic,
depending on the consequences of the error.
The requirement of rigorous testing and their associated documentation during
the software development life cycle arises because of the below reasons:
To identify defects
To reduce flaws in the component or system
Increase the overall quality of the system
There can also be a requirement to perform software testing to comply with
legal requirements or industry-specific standards. These standards and rules can
specify what kind of techniques we should use for product development. For
example, the motor, avionics, medical, and pharmaceutical industries, etc., all
have standards covering the testing of the product.
The points below show the significance of testing for a reliable and easy to use
software product:
The testing is important since it discovers defects/bugs before the
delivery to the client, which guarantees the quality of the software.
It makes the software more reliable and easy to use.
Thoroughly tested software ensures reliable and high-performance
software operation.
Testers need to carefully inspect and interpret test results. Several erroneous
and costly scenarios may occur if care is not taken.
Example:
A failure may be overlooked, and the test may be granted a "pass" status
when in reality the software has failed the test. Testing may continue based on
erroneous test results. The defect may be revealed at some later stage of testing,
but in that case it may be more costly and difficult to locate and repair. A failure
may be suspected when in reality none exists. In this case the test may be
granted a "fail" status. Much time and effort may be spent on trying to find the
defect that does not exist. A careful re-examination of the test results could
finally indicate that no failure has occurred.
9. Give an Overview of the Testing Maturity Model (TMM) & the test
related activities that should be done for V-model architecture.
The internal structure of the TMM is rich in testing practices that can be
learned and applied in a systematic way to support a quality testing process that
improves in incremental steps. There are five levels in the TMM that prescribe a
maturity hierarchy and an evolutionary path to test
process improvement.
Each level with the exception of level 1 has a structure that consists of the
following:
A set of maturity goals. The maturity goals identify testing improvement
goals that must be addressed in order to achieve maturity at that level. To
be placed at a level, an organization must satisfy the maturity goals at
that level. The TMM levels and their associated maturity goals are described below.
Supporting maturity sub goals. They define the scope, boundaries and
needed accomplishments for a particular level.
Activities, tasks and responsibilities (ATR). The ATRs address
implementation and organizational adaptation issues at each TMM level.
Supporting activities and tasks are identified, and responsibilities are
assigned to appropriate groups.
Level 1—Initial: (No maturity goals)
At TMM level 1, testing is a chaotic process; it is ill-defined, and not
distinguished from debugging. A documented set of specifications for software
behavior often does not exist. Tests are developed in an ad hoc way after coding
is completed. Testing and debugging are interleaved to get the bugs out of the
software
Level 3—Integration:
Extension of V- model
The various factors, which influence the software, are termed as software
factors. They can be broadly divided into two categories. The first category of
the factors is of those that can be measured directly such as the number of
logical errors, and the second category clubs those factors which can be
measured only indirectly.
For example,
maintainability. But each of the factors is to be measured to check for the
content and the quality control. Several models of software quality factors and
their categorization have been suggested over the years.
Quality relates to the degree to which a system, system component, or process
meets
specified requirements.
customer or user needs, or expectations.
We can measure the degree to which the software possesses a given quality
attribute with quality metrics.
A metric is a quantitative measure of the degree to which a system,
system component, or process possesses a given attribute
A quality metric is a quantitative measurement of the degree to
which an item possesses a given quality attribute
Correctness
These requirements deal with the correctness of the output of the software
system. They include
Output mission
The required accuracy of output that can be negatively affected by
inaccurate data or inaccurate calculations.
The completeness of the output information, which can be affected by
incomplete data.
The up-to-dateness of the information defined as the time between the
event and the response by the software system.
The availability of the information.
The standards for coding and documenting the software system.
Reliability
Reliability requirements deal with service failure. They determine the
maximum allowed failure rate of the software system, and can refer to the entire
system or to one or more of its separate functions.
Integrity
This factor deals with the software system's security, that is, preventing
access by unauthorized persons, and also distinguishing between the groups of
people to be given read as well as write permissions.
Interoperability
Interoperability requirements focus on creating interfaces with other
software systems or with other equipment firmware. For example, the firmware
of the production machinery and testing equipment interfaces with the
production control software.
11(a) Why is it necessary to develop test cases for both valid and invalid
input conditions?
Test cases should be developed for both valid and invalid input conditions.
The tester must not assume that the software under test will always be
provided with valid inputs.
Inputs may be incorrect for several reasons.
Use of test cases that are based on invalid inputs is very useful for
revealing defects since they may exercise the code in unexpected ways
and identify unexpected software behavior.
Invalid inputs also help developers and Software Test Engineers to
evaluate the robustness of the software, that is, its ability to recover when
unexpected events occur (in this case an erroneous input).
Example:
Software users may have misunderstandings, or lack information about
the nature of the inputs. They often make typographical errors even when
complete/correct information is available. Devices may also provide invalid
inputs due to erroneous conditions and malfunctions.
Principle 5 (Test cases should be developed for both valid and invalid
input conditions) supports the need for the independent test group called for in
Principle 7 (Testing should be carried out by a group that is independent of the
development group.) for the following reason. The developer of a software
component may be biased in the selection of test inputs for the component and
specify only valid inputs in the test cases to demonstrate that the software works
correctly. An independent tester is more apt to select invalid inputs as well.
(b) How important to document a product? How will you test requirement
and design document?
12. Compare and contrast terms errors faults and failures using suitable
examples
ERROR:
MAIN DIFFERENCE:
13. Write the major needs of testing and model of testing in details
14. Explain in detail processing and monitoring of the defects with defect
repository?
PART – C
Test execution
Software testers are responsible for test execution based on testing
milestones.
Control, logic, and sequence defects. These include the loop variable
increment step, which is out of the scope of the loop. Note that the incorrect loop
condition (i _ 6) is carried over from design and should be counted as a design
defect.
The poor quality of this small program is due to defects injected during several
of the life cycle phases with probable causes ranging from lack of education, a
poor process, to oversight on the part of the designers and developers. Even
though it implements a simple function the program is unusable because of the
nature of the defects it contains. Such software is not acceptable to users; as
testers we must make use of all our static and dynamic testing tools as described
in subsequent chapters to ensure that such poor-quality software is not delivered
to our user/client group. We must work with analysts, designers and code
developers to ensure that quality issues are addressed early the software life
cycle. We must also catalog defects and try to eliminate them by improving
education, training, communication, and process.
4. Give the internal structure of TMM and explain about its maturity goals
at each level.
Level 3—Integration
3. Create the equivalence classes for testing a program that computes the solution
of a quadratic equation.
4.Write the two basic testing strategies used to design test cases.
(i).Black box testing
(ii).White box testing
PART-B
1.Explain about the following methods of black box testing with example
(a) Equivalence class partitioning(6)
(b) Boundary value analysis(7)
Boundary values are those that contain the upper and lower limit of a
variable. Assume that, age is a variable of any function, and its
minimum value is 18 and the maximum value is 30, both 18 and 30
will be considered as boundary values.
The basic assumption of boundary value analysis is, the test cases
that are created using boundary values are most likely to cause an
error.
Here, 18 and 30 are the boundary values, which is why the tester pays
more attention to these values; but this doesn't mean that the middle
values like 19, 20, 21, 27 and 29 are ignored. Test cases are developed
for each and every value of the range.
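As a minimal sketch of how these boundary value cases could be executed (the isValidAge function and its expected results are assumptions introduced purely for illustration, not part of any specified system), consider the following C++ fragment:

#include <iostream>
#include <vector>

// Hypothetical function under test: accepts ages in the inclusive range 18..30.
bool isValidAge(int age) {
    return age >= 18 && age <= 30;
}

int main() {
    // Boundary value test cases: values on and immediately around the limits,
    // plus one nominal mid-range value.
    struct TestCase { int input; bool expected; };
    std::vector<TestCase> tests = {
        {17, false},  // just below the minimum
        {18, true},   // minimum boundary
        {19, true},   // just above the minimum
        {24, true},   // a nominal value
        {29, true},   // just below the maximum
        {30, true},   // maximum boundary
        {31, false}   // just above the maximum
    };

    for (const auto& t : tests) {
        bool actual = isValidAge(t.input);
        std::cout << "age=" << t.input
                  << (actual == t.expected ? " PASS" : " FAIL") << "\n";
    }
    return 0;
}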
2.Write a note on the following
(a) Positive and Negative Testing (6)
(b) Decision Tables.(7)
(a). Positive testing:
Positive testing tries to prove that a given product does what it is
supposed to do. When a test case verifies the requirements of the
product with a set of expected outputs, it is called a positive test case.
The purpose of positive testing is to prove that the product works as
per specification and expectations. A product delivering an error when
it is expected to give an error is also a part of positive testing.
Positive testing can thus be said to check the product's
behaviour for positive and negative conditions as stated in the
requirements.
(b). Decision Tables:
Example:
Legend:
T – Correct username/password
F – Wrong username/password
E – Error message is displayed
H – Home screen is displayed
Interpretation:
Case 1 – Username and password both were wrong. The user is shown
an error message.
Case 2 – Username was correct, but the password was wrong. The user
is shown an error message.
Case 3 – Username was wrong, but the password was correct. The user
is shown an error message.
Case 4 – Username and password both were correct, and the user
navigated to the homepage.
Enter correct username and correct password and click on login, and
the expected result will be the user should be navigated to homepage
Enter wrong username and wrong password and click on login, and the
expected result will be the user should get an error message
Enter correct username and wrong password and click on login, and the
expected result will be the user should get an error message
Enter wrong username and correct password and click on login, and the
expected result will be the user should get an error message
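A minimal sketch of how these four decision-table rules could be turned into executable checks is given below; the login function, its boolean inputs, and the outcome strings "Home" and "Error" are assumptions made only for illustration:

#include <iostream>
#include <string>

// Hypothetical function under test: returns "Home" when both credentials are
// correct, otherwise "Error" (mirroring effects H and E in the decision table).
std::string login(bool userOk, bool passOk) {
    return (userOk && passOk) ? "Home" : "Error";
}

int main() {
    // One test case per decision-table rule (true = correct, false = wrong).
    struct Rule { bool user; bool pass; std::string expected; };
    Rule rules[] = {
        {false, false, "Error"},  // Case 1: both wrong
        {true,  false, "Error"},  // Case 2: username correct, password wrong
        {false, true,  "Error"},  // Case 3: username wrong, password correct
        {true,  true,  "Home"}    // Case 4: both correct
    };

    for (const auto& r : rules) {
        std::cout << (login(r.user, r.pass) == r.expected ? "PASS" : "FAIL") << "\n";
    }
    return 0;
}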
o Software
o Hardware
o Network
o Mobile
Types of Testing Documents
Test policy: A high-level document which describes the principles, methods and
testing goals of the organization.
Test strategy: A high-level document which identifies the test levels (types) to be
executed for the project.
Test plan: A complete planning document which contains the scope, approach,
resources, schedule, etc. of the testing activities.
Requirements Traceability Matrix: A document which connects the requirements
to the test cases.
Test scenario: An item or event of a software system which could be verified by
one or more test cases.
Test case: A group of input values, execution preconditions, expected execution
postconditions and results. It is developed for a test scenario.
Test data: Data which exists before a test is executed; it is used to execute the
test cases.
Defect report: A documented report of any flaw in a software system which fails
to perform its expected function.
Test summary report: A high-level document which summarizes the testing
activities conducted as well as the test results.
The dynamic test cases are used when code works dynamically based on user
input. For example, while using an email account, on entering a valid email the
system accepts it, but when you enter an invalid email it throws an error message.
In this technique, the input conditions are assigned with causes and the result of
these input conditions with effects.
The main advantage of cause-effect graph testing is, it reduces the time of test
execution and cost.
This technique aims to reduce the number of test cases but still covers all
necessary test cases with maximum coverage to achieve the desired application
quality.
Causes are:
o C1 - Character in column 1 is A
o C2 - Character in column 1 is B
o C3 - Character in column 2 is a digit
Effects:
We all use the ATMs, when we withdraw money from it, it displays account
details at last. Now we again do another transaction, then it again displays
account details, but the details displayed after the second transaction are
different from the first transaction, but both details are displayed by using the
same function of the ATM. So the same function was used here but each time
the output was different, this is called state transition. In the case of testing of a
software application, this method tests whether the function is following state
transition specifications on entering different inputs.
This applies to those types of applications that provide the specific number of
attempts to access the application such as the login function of an application
which gets locked after the specified number of incorrect attempts. Let's see this in
detail: in the login function we use an email and password, and the application
gives a specific number of attempts to access it; after crossing the maximum
number of attempts, it gets locked with an error message.
State   Description
S1      First login attempt
S2      Second login attempt
S3      Third login attempt
S4      Home Page
S5      Error Page
In the above state transition table, we see that state S1 denotes first login
attempt. When the first attempt is invalid, the user will be directed to the
second attempt (state S2). If the second attempt is also invalid, then the user
will be directed to the third attempt (state S3). Now if the third and last attempt
is invalid, then the user will be directed to the error page (state S5).
But if the third attempt is valid, then it will be directed to the homepage (state
S4).
By using the above state transition table we can perform testing of any software
application. We can make a state transition table by determining desired output,
and then exercise the software system to examine whether it is giving desired
output or not.
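The sketch below shows one way the table could be exercised in code; the LoginSession class, its attempt method and the use of the state names S1..S5 are assumptions introduced only to illustrate driving the machine through its transitions:

#include <iostream>
#include <string>

// Hypothetical state machine for the login-attempt example: a valid attempt
// leads to the home page (S4); three invalid attempts lead to the error page (S5).
class LoginSession {
public:
    std::string state = "S1";  // first login attempt
    void attempt(bool valid) {
        if (state == "S4" || state == "S5") return;   // terminal states
        if (valid) { state = "S4"; return; }          // home page
        if (state == "S1") state = "S2";              // second attempt
        else if (state == "S2") state = "S3";         // third attempt
        else state = "S5";                            // error page
    }
};

int main() {
    // Path S1 -> S2 -> S3 -> S5: three invalid attempts end on the error page.
    LoginSession a;
    a.attempt(false);
    a.attempt(false);
    a.attempt(false);
    std::cout << (a.state == "S5" ? "PASS" : "FAIL") << "\n";

    // Path S1 -> S2 -> S4: an invalid attempt followed by a valid one ends on the home page.
    LoginSession b;
    b.attempt(false);
    b.attempt(true);
    std::cout << (b.state == "S4" ? "PASS" : "FAIL") << "\n";
    return 0;
}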
5.What approach would you use for testing strategies? Explain in detail.
Show how black box testing is performed in COTS components?
Testing strategies:
In this method, the tester selects a function and gives an input value to examine its
functionality, and checks whether the function is giving the expected output or not.
If the function produces the correct output, then it passes the test; otherwise it
fails. The test team reports the result to the development team and then tests
the next function. After completing testing of all functions if there are severe
problems, then it is given back to the development team for correction.
Test procedure
The test procedure of black box testing is a kind of process in which the tester
has specific knowledge about the software's working and develops test cases to
check the accuracy of the software's functionality.
It does not require programming knowledge of the software. All test cases are
designed by considering the input and output of a particular function. A tester
knows about the definite output of a particular input, but not about how the
result is arising. There are various techniques used in black box testing for
testing like decision table technique, boundary value analysis technique, state
transition, All-pair testing, cause-effect graph technique, equivalence
partitioning technique, error guessing technique, use case technique and user
story technique. All these techniques have been explained in detail within the
tutorial.
Test cases
Test cases are created considering the specification of the requirements. These
test cases are generally created from working descriptions of the software
including requirements, design parameters, and other specifications. For the
testing, the test designer selects both positive test scenarios, by taking valid input
values, and adverse test scenarios, by taking invalid input values, to determine the
correct output. Test cases are mainly designed for functional testing but can
also be used for non-functional testing. Test cases are designed by the testing
team; there is no involvement of the development team of the software.
Techniques Used in Black Box Testing
Decision Table Technique: A systematic approach where various input combinations and
the resulting system behavior are captured in a tabular form. It is appropriate for
functions that have a logical relationship between two or more inputs.
Boundary Value Technique: Used to test boundary values; boundary values are those that
contain the upper and lower limit of a variable. It tests whether the software gives the
expected output when a boundary value is entered.
State Transition Technique: Used to capture the behavior of the software application when
different input values are given to the same function. This applies to those types of
applications that provide a specific number of attempts to access the application.
All-pair Testing Technique: Used to test all the possible discrete combinations of values.
This combinational method is used for testing applications that use checkbox input,
radio button input, list box input, etc.
Cause-Effect Technique: Underlines the relationship between a given result and all the
factors affecting the result. It is based on a collection of requirements.
Equivalence Partitioning Technique: A technique in which the input data is divided into
partitions of valid and invalid values, and it is mandatory that all partitions exhibit the
same behavior.
Error Guessing Technique: A technique in which there is no specific method for identifying
the error. It is based on the experience of the test analyst, where the tester uses
experience to guess the problematic areas of the software.
Use Case Technique: Used to identify the test cases from the beginning to the end of the
system as per the usage of the system. By using this technique, the test team creates a
test scenario that can exercise the entire software based on the functionality of each
function from start to end.
6.Describe the following (a) State based testing(6) (b) Domain testing(7)
(a).State-based Testing:
• Natural representation with finite state machines – states
correspond to certain values of the attributes; transitions
correspond to methods
• The FSM can be used as a basis for testing – e.g. "drive" the class
through all transitions, and verify the response and the resulting
state
4) Actions that result from a transition (an error message or being given the
cash).
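As a small sketch of driving a class through all of its transitions (the BoundedCounter class and its limit are invented assumptions; any class whose states correspond to attribute values would serve), the idea looks like this:

#include <cassert>
#include <iostream>

// Hypothetical class whose states (Empty, Partially full, Full) correspond to
// values of its attribute and whose transitions are its methods.
class BoundedCounter {
    int value = 0;
    static const int MAX = 2;
public:
    bool increment() { if (value < MAX) { ++value; return true; } return false; }
    bool decrement() { if (value > 0)   { --value; return true; } return false; }
    bool isEmpty() const { return value == 0; }
    bool isFull()  const { return value == MAX; }
};

int main() {
    // "Drive" the class through every transition and verify the resulting state.
    BoundedCounter c;
    assert(c.isEmpty());       // initial state: Empty
    assert(c.increment());     // Empty -> Partially full
    assert(c.increment());     // Partially full -> Full
    assert(c.isFull());
    assert(!c.increment());    // transition out of Full is rejected
    assert(c.decrement());     // Full -> Partially full
    assert(c.decrement());     // Partially full -> Empty
    assert(!c.decrement());    // transition out of Empty is rejected
    std::cout << "All state-based checks passed" << std::endl;
    return 0;
}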
Domain testing is a kind of software testing process during which the software is
tested by giving a minimum number of inputs and evaluating its proper outputs
and it is specific to a particular domain. In domain testing, we test the software
by giving appropriate inputs and checking for the expected outputs from
the domain perspective.
Domain testing differs for every specific domain, so we need to have
domain-specific knowledge in order to test the software.
Example:
Consider a Halloween games activity for kids with 6 competitions laid out,
where tickets are given according to age and gender; these ticketing
modules are to be tested for the entire functionality of the games exhibition.
Based on this scenario, we have six sub-scenarios based on the age and the
competitions.
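To make this concrete, the sketch below assumes age ranges and competition names that are not defined in the scenario above; domain testing would then choose a small set of inputs concentrated around the points where the assignment changes:

#include <iostream>
#include <string>

// Hypothetical ticketing rule for the Halloween example: the competition a
// child is assigned to depends on age. Ranges and names are assumed only
// to illustrate domain testing.
std::string assignCompetition(int age) {
    if (age >= 3 && age <= 5)   return "Pumpkin painting";
    if (age >= 6 && age <= 9)   return "Costume parade";
    if (age >= 10 && age <= 12) return "Scary story contest";
    return "Not eligible";
}

int main() {
    // A few inputs per sub-domain, focused on the values where the output changes.
    int inputs[] = {2, 3, 5, 6, 9, 10, 12, 13};
    for (int age : inputs)
        std::cout << "age " << age << " -> " << assignCompetition(age) << "\n";
    return 0;
}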
7.What inference can you make from random testing, requirement based testing and
domain testing explains? (13)
8.Explain the various white box techniques with suitable test cases. (13)
White box testing contains various tests, which are as follows:
o Path testing
o Loop testing
o Condition testing
o Testing based on the memory perspective
o Test performance of the program
Path testing
In path testing, we write the flow graphs and test all independent paths.
Writing the flow graph means representing the flow of the program and showing
how each part of the program connects with the others.
Loop testing
In loop testing, we test the loops such as while, for, and do-while, etc.,
and also check whether the ending condition works correctly and whether
the size of the conditions is adequate.
Condition testing
In this, we will test all logical conditions for both true and false values; that is,
we will verify for both if and else condition.
For example:

if (condition)   // true branch
{
    ...
}
else             // false branch
{
    ...
}
The above program must be checked for both conditions, which means verifying its
behaviour both when the condition is true (the if branch) and when it is false (the else branch).
o The reuse of code is not there: let us take one example, where we have four
programs of the same application, and the first ten lines of the program are
similar. We can write these ten lines as a discrete function, and it should be
accessible by the above four programs as well. And also, if any bug is there, we
can modify the line of code in the function rather than the entire code.
o The developers use the logic that might be modified. If one programmer
writes code and the file size is up to 250kb, then another programmer could
write a similar code using the different logic, and the file size is up to 100kb.
o The developer declares so many functions and variables that might never be
used in any portion of the code. Therefore, the size of the program will increase.
For example,

int a = 15;
int b = 20;
String S = "Welcome";
...
int p = b;

CreateUser()
{
    ...
    // 200 lines of code
}
In the above code, we can see that the integer a has never been used
anywhere in the program, and also the function CreateUser has never been
called anywhere in the code. Therefore, it leads to unnecessary memory consumption.
11 (a) Outline the steps in constructing a control flow graph and computing Cyclomatic
complexity with an example. (6)
A Control Flow Graph (CFG) is the graphical representation of control flow or computation
during the execution of programs or applications. Control flow graphs are mostly used in static
analysis as well as compiler applications, as they can accurately represent the flow inside of a
program unit.
1. If-else:
2. while:
3. do-while:
4. for:
Cyclomatic complexity of a code section is the quantitative measure of the number of linearly
independent paths in it. It is a software metric used to indicate the complexity of a program. It
is computed using the Control Flow Graph of the program. The nodes in the graph indicate the
smallest group of commands of a program, and a directed edge connects two nodes
if the second command might immediately follow the first command.
For example, if source code contains no control flow statement then its cyclomatic complexity
will be 1 and source code contains a single path in it. Similarly, if the source code contains
one if condition then cyclomatic complexity will be 2 because there will be two paths one for
true and the other for false.
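A small worked sketch may help; the absoluteValue function below is assumed purely for illustration. Its control flow graph has 3 nodes (decision, assignment, return) and 3 edges, so V(G) = E - N + 2 = 3 - 3 + 2 = 2, which matches the one-if-condition case described above:

#include <iostream>

// Hypothetical function with exactly one if condition, giving two
// linearly independent paths: condition true and condition false.
int absoluteValue(int x) {
    if (x < 0) {      // decision node
        x = -x;       // executed only on the true path
    }
    return x;         // both paths join here
}

int main() {
    // One test case per independent path.
    std::cout << absoluteValue(-5) << "\n";  // covers the true branch
    std::cout << absoluteValue(7)  << "\n";  // covers the false branch
    return 0;
}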
State Transition Testing is a type of software testing which is performed to check the change in
the state of the application under varying input. The condition of input passed is changed and
the change in state is observed.
State Transition Testing is basically a black box testing technique that is carried out to observe
the behavior of the system or application for different input conditions passed in a sequence.
In this type of testing, both positive and negative input values are provided and the behavior
of the system is observed.
State Transition Testing is basically used where different system transitions are needed to be
tested.
Objectives of State Transition Testing:
The objective of State Transition testing is:
Transition States:
Change Mode:
When this mode is activated then the display mode moves from TIME to DATE.
Reset:
When the display mode is TIME or DATE, then reset mode sets them to ALTER TIME or ALTER
DATE respectively.
Time Set:
When this mode is activated, display mode changes from ALTER TIME to TIME.
Date Set:
When this mode is activated, display mode changes from ALTER DATE to DATE.
States
Transition
Events
Actions
Code Coverage :
Code coverage is a software testing metric or also termed as a Code Coverage Testing which
helps in determining how much code of the source is tested which helps in accessing quality of
test suite and analyzing how comprehensively a software is verified. Actually in simple code
coverage refers to the degree of which the source code of the software code has been tested.
This Code Coverage is considered as one of the form of white box testing.
As we know at last of the development each client wants a quality software product as well as
the developer team is also responsible for delivering a quality software product to the
customer/client. Where this quality refers to the product’s performance, functionalities,
behavior, correctness, reliability, effectiveness, security, and maintainability. Where Code
Coverage metric helps in determining the performance and quality aspects of any software.
3. Function coverage :
The number of functions that are called and executed at least once in the source code.
Mutation Testing is a type of software testing in which certain statements of the source code
are changed/mutated to check if the test cases are able to find errors in source code. The goal
of Mutation Testing is to ensure the quality of the test cases in terms of robustness, i.e.,
that they should fail on the mutated source code.
The changes made in the mutant program should be kept extremely small so that they do not
affect the overall objective of the program. Mutation Testing is also called a Fault-based testing
strategy as it involves creating a fault in the program and it is a type of White Box
Testing which is mainly used for Unit Testing.
1. Statement Mutation – the developer cuts and pastes a part of the code, the
outcome of which may be the removal of some lines
2. Value Mutation – the values of primary parameters are modified
3. Decision Mutation – control statements are changed
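As a hedged sketch of what a mutant looks like (the maxOfTwo function and the particular operator change are assumptions chosen only for illustration), a decision mutation and a test case that kills it might be:

#include <iostream>

// Original function under test (assumed for illustration).
int maxOfTwo(int a, int b) {
    if (a > b) return a;     // original decision
    return b;
}

// Decision mutation of the same function: the operator > is replaced by <.
// A good test suite should "kill" this mutant, i.e. at least one test case
// should produce a different result for it than for the original.
int maxOfTwoMutant(int a, int b) {
    if (a < b) return a;     // mutated decision
    return b;
}

int main() {
    // The test input (2, 5) kills the mutant: the original returns 5,
    // the mutant returns 2, so the outputs differ and the fault is detected.
    std::cout << "original: " << maxOfTwo(2, 5)
              << ", mutant: " << maxOfTwoMutant(2, 5) << "\n";
    return 0;
}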
Mutation testing is extremely costly and time-consuming since there are many mutant
programs that need to be generated.
Since it is time-consuming, it's fair to say that this testing cannot be done without an
automation tool.
Each mutant will have the same number of test cases as the original
program, so a large number of mutant programs may need to be tested against the
original test suite.
As this method involves source code changes, it is not at all applicable for Black Box
Testing.
13 Explain the significance of Control flow graph & Cyclomatic complexity in white box
testing with a pseudo code for sum of positive numbers. Also mention the independent
paths with test cases.(13)
14 Discuss in detail about static testing and structural testing. Also write the difference
between these testing concepts.(13)
Structural testing is a type of software testing which uses the internal design of the software
for testing or in other words the software testing which is performed by the team which knows
the development phase of the software, is known as structural testing.
Structural testing is basically related to the internal design and implementation of the software
i.e. it involves the development team members in the testing team. It basically tests different
aspects of the software according to its types. Structural testing is just the opposite of
behavioral testing.
Static Testing is a type of a Software Testing method which is performed to check the defects
in software without actually executing the code of the software application, whereas in
dynamic testing the code is executed to detect the defects.
Static testing is performed in the early stages of development to avoid errors, as it is easier to
find the sources of failures and they can be fixed easily. Errors that cannot be found using
dynamic testing can often be found easily by static testing.
Static Testing Techniques:
There are mainly two type techniques used in Static Testing:
1. Review:
In static testing, review is a process or technique that is performed to find the potential defects
in the design of the software. It is a process to detect and remove errors and defects in the
different supporting documents, like software requirements specifications. People examine the
documents and sort out errors, redundancies and ambiguities.
Review is of four types:
Informal:
In an informal review, the creator of the documents puts the contents in front of an
audience and everyone gives their opinion; thus defects are identified at an
early stage.
Walkthrough:
It is basically performed by an experienced person or expert to check for defects so
that there might not be problems later in the development or testing phase.
Peer review:
Peer review means checking documents of one-another to detect and fix the
defects. It is basically done in a team of colleagues.
Inspection:
Inspection is basically the verification of a document by a higher authority, like the
verification of the software requirements specification (SRS).
2. Static Analysis:
Static Analysis includes the evaluation of the code quality that is written by developers.
Different tools are used to do the analysis of the code and comparison of the same with the
standard.
It also helps in the identification of the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
Static Analysis is of three types:
Data Flow:
Data flow analysis is related to how data values flow through the program, i.e.,
how variables are defined and used.
Control Flow:
Control flow is basically how the statements or instructions are executed.
Cyclomatic Complexity:
Cyclomatic complexity is the measurement of the complexity of the program that
is basically related to the number of independent paths in the control flow graph
of the program.
PART-C
1.Demonstrate the various black box test cases using Equivalence class partitioning and
boundary values analysis to test a module for payroll System. (15)
2.Explain how the covering code logic and paths are used in the role of white box design with
suitable example. (15)
White box testing is also known as glass box testing, structural testing, clear box
testing, open box testing and transparent box testing.
It tests the internal coding and infrastructure of the software, focusing on checking
predefined inputs against expected and desired outputs. It is based on the inner workings
of an application and revolves around internal structure testing.
In this type of testing programming skills are required to design test cases. The
primary goal of white box testing is to focus on the flow of inputs and outputs through
the software and strengthening the security of the software.
Code Coverage:
This is an important unit testing metric.
In simple terms, the extent to which the source code of a software program or an
application will get executed during testing is what is termed as Code Coverage.
If the tests execute the entire piece of code including all branches, conditions, or loops,
then we would say that there is complete coverage of all the possible scenarios and
thus the Code Coverage is 100%. To understand this even better, let’s take up an
example.
Given below is a simple code that is used to add two numbers and display the result
depending on the value of the result.
Input a, b
Let c = a + b
If c < 10
    Print c
Else
    Print "Sorry"
The above program takes in two inputs i.e. ‘a’ & ‘b’. The sum of both is stored in
variable c. If the value of c is less than 10, then the value of ‘c’ is printed else ‘Sorry’ is
printed.
Now, if we have some tests to validate the above program with the values of a & b such
that the sum is always less than 10, then the else part of the code never gets executed.
In such a scenario, we would say that the coverage is not complete.
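A C++ rendering of the pseudocode above is sketched below (the function and variable names are assumed); it shows how two test calls are enough to execute both branches and reach complete branch coverage for this fragment:

#include <iostream>

// C++ version of the pseudocode above: prints the sum when it is below 10,
// otherwise prints "Sorry".
void addAndReport(int a, int b) {
    int c = a + b;
    if (c < 10)
        std::cout << c << "\n";
    else
        std::cout << "Sorry" << "\n";
}

int main() {
    addAndReport(2, 3);   // sum < 10: exercises only the if branch
    addAndReport(6, 7);   // sum >= 10: exercises the else branch
    // With only the first call, the else part never runs and branch coverage
    // is incomplete; running both calls brings branch coverage to 100%.
    return 0;
}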
Various reasons make Code Coverage essential and some of those are
listed below:
It helps to ascertain that the software has fewer bugs when compared to software
that does not have good code coverage.
By aiding in improving the code quality, it indirectly helps in delivering a better ‘quality’
software.
It is a measure that can be used to know the test effectiveness (effectiveness of the unit
tests that are written to test the code).
Helps to identify those parts of the source code that would go untested.
It helps to determine if the current testing (unit testing) is sufficient or not and if some
more tests are needed in place as well.
Path testing:
In path testing, we write the flow graphs and test all independent paths.
Testing all the independent paths means that, for example, for a path from main() to
function G, we first set the parameters and test whether the program is correct on that
particular path, and in the same way we test all other paths and fix the bugs.
Here we will take a simple example to get a better idea of what basis path testing includes.
In the example, we can see there are a few conditional statements that are executed
depending on which condition is satisfied. Here there are 3 paths or conditions that need to be
tested to get the output:
Path 1: 1,2,3,5,6, 7
Path 2: 1,2,4,5,6, 7
Path 3: 1, 6, 7
3.Demonstrate the various black box test cases using Equivalence class partitioning and
boundary values analysis to test a module for ATM system. (15)
Boundary Testing:
Boundary testing is the process of testing between extreme ends or boundaries
between partitions of the input values.
So these extreme ends like Start- End, Lower- Upper, Maximum-Minimum, Just
Inside-Just Outside values are called boundary values and the testing is called
“boundary testing”.
The basic idea in normal boundary value testing is to select input variable
values at their:
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
Example:
Imagine, there is a function that accepts a number between 18 to 30, where 18
is the minimum and 30 is the maximum value of valid partition, the other values of this
partition are 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 and 29. The invalid partition consists
of the numbers which are less than 18 such as 12, 14, 15, 16 and 17, and more than 30
such as 31, 32, 34, 36 and 40. Tester develops test cases for both valid and invalid
partitions to capture the behavior of the system on different input conditions.
The software system passes the test if it accepts a valid number and gives
the desired output; if it does not, then it is unsuccessful. In the other scenario, the software
system should not accept invalid numbers, and if the entered number is invalid, then it
should display an error message.
If the software which is under test, follows all the testing guidelines and specifications
then it is sent to the releasing team otherwise to the development team to fix the
defects.
Equivalence Partitioning
It divides the input data of software into different equivalence data classes.
You can apply this technique, where there is a range in the input field.
ATM Simulation System: Specification (simplified)
The customer will be required to enter the account number and a PIN. There is no
need for an ATM card.
The ATM must provide the following transactions to the customer: cash withdrawals,
deposits, transfers and balance inquiries. Only one transaction will be allowed in each
session.
The ATM will communicate the transaction to the bank and obtain verification that it
was allowed by the bank. If the bank determines that the account number or PIN is
invalid, the transaction is canceled.
The ATM will have an operator panel that will allow an operator to start and stop the
servicing of customers. When the machine is shut down, the operator may remove
deposit envelopes and reload the machine with cash. The operator will be required to
enter the total cash on hand before starting the system from this panel.
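As a hedged illustration of equivalence partitioning and boundary value analysis for this module, the sketch below assumes a withdrawal-amount rule (multiples of 100 between 100 and 10000); these limits are not stated in the simplified specification above and are used only to show how valid and invalid classes and their boundaries could be exercised:

#include <iostream>
#include <vector>

// Hypothetical validation rule for an ATM withdrawal amount. The assumed
// valid partition is 100..10000 in multiples of 100; everything else is invalid.
bool isValidWithdrawal(int amount) {
    return amount >= 100 && amount <= 10000 && amount % 100 == 0;
}

int main() {
    struct TestCase { int amount; bool expected; const char* rationale; };
    std::vector<TestCase> tests = {
        {99,    false, "just below the minimum (invalid class)"},
        {100,   true,  "minimum boundary (valid class)"},
        {200,   true,  "nominal valid value"},
        {10000, true,  "maximum boundary (valid class)"},
        {10100, false, "just above the maximum (invalid class)"},
        {150,   false, "not a multiple of 100 (invalid class)"},
        {-100,  false, "negative amount (invalid class)"}
    };
    for (const auto& t : tests) {
        bool actual = isValidWithdrawal(t.amount);
        std::cout << t.amount << ": "
                  << (actual == t.expected ? "PASS" : "FAIL")
                  << " (" << t.rationale << ")\n";
    }
    return 0;
}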
4.Explain the basis path testing. State the principles of control flow graph and cyclomatic
complexity. What are the formulas used in cyclomatic complexity? (15)
The flow graph is similar to the earlier flowchart, with which it is not to be
confused.
Flow Graph Elements: A flow graph contains four different types of elements:
(1) Process Block (2) Decisions (3) Junctions (4) Case Statements
Method 2: The Cyclomatic complexity, V (G) for a flow graph G can be defined as
V (G) = E - N + 2
Where: E is total number of edges in the flow graph. N is the total number of nodes in
the flow graph.
Method 3: The Cyclomatic complexity V (G) for a flow graph G can be defined as
V (G) = P + 1
Where: P is the total number of predicate (decision) nodes in the flow graph.
Example
Consider the code snippet below, for which we will conduct basis path testing:
#include <iostream>
using namespace std;

int main() {
    int num1 = 6;
    int num2 = 9;
    if (num2 == 0) {
        cout << "num1/num2 is undefined" << endl;
    } else {
        if (num1 > num2) {
            cout << "num1 is greater" << endl;
        } else {
            cout << "num2 is greater" << endl;
        }
    }
    return 0;
}
The cyclomatic complexity of the control flow graph above is V(G) = E - N + 2 = 10 - 9 + 2 = 3
(equivalently, the two predicate nodes give P + 1 = 3), and the three independent paths are:
Path 1: 1A-2B-3C-4D-5F-9
Path 2: 1A-2B-3C-4E-6G-7I-9
Path 3: 1A-2B-3C-4E-6H-8J-9
Testing vs. Debugging
Testing is the process to find bugs and errors; debugging is the process to correct the
bugs found during testing.
Testing is the process to identify the failure of implemented code; debugging is the
process to give a resolution to the code failure.
14. Why test cases should be developed for both valid and invalid inputs?
Test cases should be developed for both valid and invalid input conditions. Use of test
cases that are based on invalid inputs is very useful for revealing defects since they may
exercise the code in unexpected ways and identify unexpected software behavior.
Test cases should be developed for both valid and invalid input conditions. Principle 6:
The probability of the existence of additional defects in a software component is
proportional to the number of defects already detected in that component.
5. Differentiate alpha testing from beta testing and discuss in detail the phases in
which alpha and beta testing are done. In what way are they related to milestones and deliverables?
6. Summarize the issues that arise in class testing and explain about compatibility and
documentation testing.
7. Determine and prepare the test cases for acceptance, usability and accessibility testing.
Acceptance Testing is a method of software testing where a system is tested for
acceptability. The major aim of this test is to evaluate the compliance of the system with the
business requirements and assess whether it is acceptable for delivery or not.
Usability Testing is a testing technique used to evaluate how easily the user can use the software.
In simple words, it checks the user-friendliness of the software. It is also called UX testing (user
experience testing) because it observes the experience a user has while interacting with a
software.
Usually, customers perform this usability testing, and the organization which creates the
software collects feedback and metrics from these tests and makes changes in the software
application.
o This testing will provide the free edition of software applications or other content to
multiple locations.
o This testing ensures that the application could be used in various languages without the
need to rewrite the entire software code.
o It will increase the code design and quality of the product.
o It will enhance the customer base around the world.
o This testing will help us to decrease the cost and time for localization testing.
o This testing will provide us more scalability and flexibility.
There are many different testing levels which help to check behavior and performance for
software testing. These testing levels are designed to recognize missing areas and reconciliation
between the development lifecycle states. In SDLC models there are characterized phases such
as requirement gathering, analysis, design, coding or execution, testing, and deployment. All
these phases go through the process of software testing levels.
Levels of Testing
Each of these testing levels has a specific purpose. These testing levels provide value to the
software development lifecycle.
1) Unit testing:
A unit is the smallest testable portion of a system or application which can be compiled, linked,
loaded, and executed. This kind of testing helps to test each module separately.
The aim is to test each part of the software by separating it. It checks whether each component
fulfils its functionality or not. This kind of testing is performed by developers.
2) Integration testing:
Integration means combining. For example, in this testing phase, different software modules are
combined and tested as a group to make sure that the integrated system is ready for system testing.
Integration testing checks the data flow from one module to other modules. This kind of testing
is performed by testers.
3) System testing:
System testing is performed on a complete, integrated system. It allows checking system’s
compliance as per the requirements. It tests the overall interaction of components. It involves
load, performance, reliability and security testing.
System testing is most often the final test to verify that the system meets the specification. It
evaluates both the functional and non-functional needs of the testing.
4) Acceptance testing:
Regression Testing
Buddy Testing
Alpha Testing
Beta Testing
10. How would you classify integration testing and system testing?
System Testing:
While developing a software or application product, it is tested at the final stage as a whole by
combining all the product modules, and this is called System Testing. The primary aim of
conducting this test is to verify that the product fulfills the customer/user requirement specification.
It is also called an end-to-end test, as it is performed at the end of development. This testing does
not depend on system implementation; in simple words, the system tester doesn’t know which
technique among procedural and object-oriented is implemented.
This testing is classified into functional and non-functional requirements of the system. In
functional testing, the testing is similar to black-box testing which is based on specifications
instead of code and syntax of the programming language used. On the other hand, in non-
functional testing, it checks for performance and reliability through generating test cases in the
corresponding programming language.
Integration Testing:
This testing is the collection of the modules of the software, where the relationship and the
interfaces between the different components are also tested. It needs coordination between the
project level activities of integrating the constituent components together at a time.
The integration and integration testing must adhere to a build plan for the defined
integration and for identification of bugs in the early stages. However, an integrator or
integration tester must have programming knowledge, unlike a system tester.
Difference between System Testing and Integration Testing :
Scenario Testing
Scenario testing is a software testing technique that makes the best use of scenarios. Scenarios
help to test a complex system better; the scenarios used should be credible and easy to
evaluate.
System scenarios
Use-case and role-based scenarios
Performance Testing
Load testing - It is the simplest form of testing, conducted to understand the behaviour of
the system under a specific load. Load testing measures important business-critical
transactions, and the load on the database, application server, etc. is also
monitored.
Stress testing - It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the expected
maximum.
Soak testing - Soak Testing also known as endurance testing, is performed to determine
the system parameters under continuous expected load. During soak tests the parameters
such as memory utilization is monitored to detect memory leaks or other performance
issues. The main aim is to discover the system's performance under sustained use.
Spike testing - Spike testing is performed by increasing the number of users suddenly by
a very large amount and measuring the performance of the system. The main aim is to
determine whether the system will be able to sustain the workload.
Performance Testing Process:
Speed
Scalability
Stability
Reliability
12 (a) Why is it so important to design a test harness for reusability and show the approach
you used for running the unit test and recording the results?
A test harness, also known as an automated test framework, is mostly used by developers. A test
harness provides stubs and drivers, which are small programs that interact with the software under
test and are used to replace the missing components.
To execute a set of tests within the framework or using the test harness
To key in inputs to the application under test
Provide a flexibility and support for debugging
To capture outputs generated by the software under test
To record the test results(pass/fail) for each one of the tests
Helps the developers to measure code coverage at code level.
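The sketch below is a minimal, hypothetical illustration of these ideas in Python: PaymentGatewayStub stands in for an unavailable component, and the small driver supplies inputs, calls the unit under test (process_order), and records a pass/fail result. All names are invented for this example and do not refer to any particular framework.

    class PaymentGatewayStub:
        """Stub: stands in for the real, not-yet-available payment gateway."""
        def charge(self, amount):
            self.last_amount = amount   # record the call so the driver can inspect it
            return True                 # always succeeds

    def process_order(items, gateway):
        """Unit under test: totals the items and charges the gateway."""
        total = sum(price for _, price in items)
        return gateway.charge(total)

    # Driver: supplies inputs, invokes the unit under test, and records pass/fail.
    if __name__ == "__main__":
        stub = PaymentGatewayStub()
        ok = process_order([("book", 10.0), ("pen", 2.5)], gateway=stub)
        result = "PASS" if ok and stub.last_amount == 12.5 else "FAIL"
        print("process_order:", result)   # a real harness would log this result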
Test Harness Benefits:
Adding new test data and new test functions is easy, which is difficult when tests are run without a harness.
Reduces defects in newly developed features and reduces bugs when changing the
existing functionality.
Reduces Cost of Testing as defects are captured in very early phase.
Improves design and allows better refactoring of code.
Unit Tests, when integrated with build gives the quality of the build as well.
Black Box Testing - tests the user interface along with the inputs and outputs.
White Box Testing - tests the behaviour of each of the functions.
Gray Box Testing - used to execute test suites and test cases, and to perform risk assessment.
14 (a) Explain about the various types of System Testing and its importance with example.
System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements.
In system testing, the components that have passed integration testing are taken as input. The goal of
integration testing is to detect any irregularity between the units that are integrated together.
System testing detects defects within both the integrated units and the whole system. The result
of system testing is the observed behavior of a component or a system when it is tested.
System Testing is carried out on the whole system in the context of either system requirement
specifications or functional requirement specifications or in the context of both. System testing
tests the design and behavior of the system and also the expectations of the customer. It is
performed to test the system beyond the bounds mentioned in the software requirements
specification (SRS).
System Testing is basically performed by a testing team that is independent of the development
team, which helps to test the quality of the system impartially. It includes both functional and
non-functional testing.
System Testing is a form of black-box testing.
System Testing is performed after the integration testing and before the acceptance testing.
System Testing Process:
System Testing is performed in the following steps:
Test Environment Setup:
Create testing environment for the better quality testing.
Create Test Case:
Generate test case for the testing process.
Create Test Data:
Generate the data that is to be tested.
Execute Test Case:
After the generation of the test case and the test data, test cases are executed.
Defect Reporting:
Defects in the system are detected.
Regression Testing:
It is carried out to check whether the defect fixes or code changes have caused side effects elsewhere in the system.
Log Defects:
Defects found during testing are logged and then fixed.
Retest:
If the test is not successful then again test is performed.
(b) What is regression testing? Outline the issues to be addressed for developing test cases
to perform regression testing.
Regression Testing is defined as a type of software testing to confirm that a recent program or
code change has not adversely affected existing features. Regression Testing is nothing but a full
or partial selection of already executed test cases which are re-executed to ensure existing
functionalities work fine.
This testing is done to make sure that new code changes should not have side effects on the
existing functionalities. It ensures that the old code still works once the latest code changes are
done.
The need for regression testing mainly arises whenever there is a requirement to change the
code and we need to test whether the modified code affects the other parts of the software application
or not. Moreover, regression testing is needed when a new feature is added to the software
application, as well as for defect fixing and performance issue fixing.
In order to do Regression Testing process, we need to first debug the code to identify the bugs.
Once the bugs are identified, required changes are made to fix it, then the regression testing is
done by selecting relevant test cases from the test suite that covers both modified and affected
parts of the code.
Software maintenance is an activity which includes enhancements, error corrections,
optimization and deletion of existing features. These modifications may cause the system to
work incorrectly. Therefore, Regression Testing becomes necessary. Regression Testing can be
carried out using the following techniques:
Retest All
This is one of the methods for Regression Testing in which all the tests in the existing test
bucket or suite should be re-executed. This is very expensive as it requires huge time and
resources.
Regression Test Selection
Regression Test Selection is a technique in which some selected test cases from the test suite are
executed to test whether the modified code affects the software application or not. Test cases are
categorized into two parts: reusable test cases, which can be used in further regression cycles, and
obsolete test cases, which cannot be used in succeeding cycles.
Prioritization of Test Cases
Prioritize the test cases depending on business impact, critical & frequently used
functionalities. Selection of test cases based on priority will greatly reduce the regression
test suite.
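As a rough sketch of prioritization-based selection, the example below ranks hypothetical test cases by business impact and usage frequency and keeps only the top slice for the regression run; the weights, scores, and test-case names are illustrative assumptions, not prescribed values.

    # Hypothetical regression-suite prioritization: highest-impact tests run first.
    test_cases = [
        {"id": "TC01", "business_impact": 5, "usage_frequency": 5},  # checkout flow
        {"id": "TC02", "business_impact": 2, "usage_frequency": 4},  # search filters
        {"id": "TC03", "business_impact": 4, "usage_frequency": 2},  # monthly report
        {"id": "TC04", "business_impact": 1, "usage_frequency": 2},  # help page
    ]

    def priority(tc):
        # Simple weighted score; a real project would calibrate these weights.
        return 0.6 * tc["business_impact"] + 0.4 * tc["usage_frequency"]

    ranked = sorted(test_cases, key=priority, reverse=True)
    regression_suite = [tc["id"] for tc in ranked[:2]]   # keep the top 2 within the time budget
    print(regression_suite)                              # ['TC01', 'TC03']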
Industry data shows that a good number of the defects reported by customers were
due to last-minute bug fixes creating side effects; hence selecting the test cases for regression
testing is an art and not that easy. Effective regression testing depends on carefully choosing
which test cases to re-execute.
Following are the major testing problems for doing regression testing:
With successive regression runs, test suites become fairly large. Due to time and budget
constraints, the entire regression test suite cannot be executed
Minimizing the test suite while achieving maximum Test coverage remains a challenge
Determination of frequency of Regression Tests, i.e., after every modification or every
build update or after a bunch of bug fixes, is a challenge.
PART – C
1. (a) Write the importance of security testing and explain the consequences of security
breaches, also write the various areas which have to be focused on during security testing.
Integration tests:
(ii) to assemble the individual units into working subsystems and finally a
complete system that is ready for system test.
2. Case Study: Several kinds of tests for a web application. Abstract: A UK-based company
entrusted us to test this project. It is a web application for a government body to collect data and
perform calculations on it in order to prioritize all the tasks. Description: The client is from Hertfordshire in the UK, and the
project is an application for the government. In fact it includes two parts: a web site for data
collection and presentation purposes and, in parallel, a Windows application for administration purposes.
Here the task is ensuring the quality of the web application, which includes many aspects, such as
functional correctness, performance acceptance, UI appropriateness and so on. Moreover, to test
the functionality, we had to use the Windows application to edit users' services and other data.
The client only gave us the software requirement specification and the applications to be tested; there
was no test plan, test strategy, test cases, or even a test termination criterion. On the one hand,
we had to spend much time communicating with the client to clarify some important
points; on the other hand, we had to get familiar with the application by operating it and reading the
requirements. Then, how to improve the efficiency of regression testing?
3 (a) What is security testing? Explain its importance.
Security testing is an integral part of software testing. It is used to discover the weaknesses,
risks, or threats in the software application, to help us stop malicious attacks from
outsiders, and to make sure of the security of our software applications.
The primary objective of security testing is to find all the potential ambiguities and
vulnerabilities of the application so that the software does not stop working. Performing
security testing helps us identify all the possible security threats and also helps the
programmers fix those errors.
It is a testing procedure used to ensure that the data stays safe and that the software continues
to work as intended.
Here, we will discuss the following aspects of security testing:
o Availability
o Integrity
o Authorization
o Confidentiality
o Authentication
o Non-repudiation
As per open-source security testing methodologies, we have the following types of security
testing:
o Security Scanning
o Risk Assessment
o Vulnerability Scanning
o Penetration testing
o Security Auditing
o Ethical hacking
o Posture Assessment
Security Scanning
Risk Assessment
To moderate the risk of an application, we go for risk assessment. In this, we explore
the security risks that can be observed in the organization. Risks are classified into
three levels: high, medium, and low. The primary purpose of the risk assessment
process is to assess the vulnerabilities and control the significant threats.
Vulnerability Scanning
Vulnerability scanning uses an application to discover and generate a list of all the systems in a
network, including desktops, servers, laptops, virtual machines, printers, switches, and firewalls.
Vulnerability scanning can be performed with an automated tool, which also
identifies the software and systems that have known security vulnerabilities.
Penetration testing
In penetration testing, a security professional tries to identify and exploit weaknesses in the computer system. The primary
objective of this testing is to simulate real attacks, find the loopholes in the system, and
thereby protect it from intruders who could otherwise take advantage of them.
Security Auditing
Security auditing is a structured method for evaluating the security measures of the organization.
In this, we do an internal review of the application and of the control system.
Ethical hacking
Ethical hacking is used to discover the weaknesses in the system and helps the organization fix those
security loopholes before a malicious hacker exposes them. Ethical hacking helps to improve the
security posture of the organization, because ethical hackers use the same tricks, tools, and
techniques that malicious hackers would use, but with the approval of an authorized person.
The objective of ethical hacking is to enhance security and to protect the systems from malicious
users' attacks.
Posture Assessment
It is a combination of ethical hacking, risk assessment, and security scanning, which together
display the complete security posture of an organization.
At present, web applications are growing day by day, and most web applications are at risk.
Here we discuss some common weaknesses of web applications.
o Client-side attacks
o Authentication
o Authorization
o Command execution
o Logical attacks
o Information disclosure
Client-side attacks
A client-side attack means that some illegitimate external code is executed in the web application,
or that data spoofing takes place, in which the user believes that particular data shown in the web
application is valid when it actually comes from an external source.
Note: Here, Spoofing is a trick to create duplicate websites or emails.
Authentication
Authentication attacks target the web application's methods of authenticating the user's identity,
so that user account identities can be stolen. Incomplete or broken authentication allows an
attacker to access functionality or sensitive data without performing the correct authentication.
For example, in a brute-force attack the primary purpose is to gain access to a web application:
the invader tries a large number of usernames and passwords repeatedly until one gets in. The most
practical way to block brute-force attacks is to lock the account automatically after a defined
number of incorrect password attempts.
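A minimal sketch of such a lockout policy is given below; the threshold of three attempts, the in-memory counters, and the check_password callback are illustrative assumptions and not production-ready security code.

    # Hypothetical lockout policy: lock an account after too many failed logins.
    MAX_FAILED_ATTEMPTS = 3
    failed_attempts = {}          # username -> count of consecutive failures
    locked_accounts = set()

    def login(username, password, check_password):
        """check_password is assumed to verify credentials against a secure store."""
        if username in locked_accounts:
            return "ACCOUNT LOCKED"
        if check_password(username, password):
            failed_attempts[username] = 0
            return "OK"
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        if failed_attempts[username] >= MAX_FAILED_ATTEMPTS:
            locked_accounts.add(username)
            return "ACCOUNT LOCKED"
        return "INVALID CREDENTIALS"

    # Example: a brute-force attempt is locked out on the third failure.
    always_wrong = lambda user, pwd: False
    for attempt in range(4):
        print(login("alice", "guess%d" % attempt, always_wrong))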
Authorization
Authorization attacks come into the picture whenever intruders try to retrieve
sensitive information from the web application illegally.
For example, a typical authorization attack is directory scanning. Directory
scanning is the kind of attack that exploits defects in the web server to gain illegal
access to folders and files that are not meant to be in the public area.
Once the invaders succeed in getting access, they can download sensitive data and install
harmful software on the server.
Command execution
Command execution attacks occur when malicious attackers take control of the web application by executing commands on it.
Logical attacks
Logical attacks, such as denial of service (DoS) attacks, prevent a web application from serving
regular customer activity and restrict the usage of the application.
Information disclosure
Information disclosure reveals sensitive data to the invaders; it covers attacks that are planned
to obtain specific information about the web application. Information leakage happens when a web
application discloses sensitive data, such as error messages or developer comments, that might help
an attacker misuse the system.
For example, when a password is passed to the server, it should be encrypted while being
communicated over the network.
(b) List the tasks that must be performed by the developer or tester during the preparation
for unit testing.
In order to do Unit Testing, developers write a section of code to test a specific function in
software application. Developers can also isolate this function to test more rigorously which
reveals unnecessary dependencies between function being tested and other units so the
dependencies can be eliminated. Developers generally use UnitTest framework to develop
automated test cases for unit testing.
Unit Testing is of two types
Manual
Automated
Unit testing is commonly automated but may still be performed manually. Software Engineering
does not favor one over the other but automation is preferred. A manual approach to unit testing
may employ a step-by-step instructional document.
A developer writes a section of code in the application just to test the function. They
would later comment out and finally remove the test code when the application is
deployed.
A developer could also isolate the function to test it more rigorously. This is a more
thorough unit testing practice that involves copying the code into its own testing
environment rather than testing it in its natural environment. Isolating the code helps in revealing
unnecessary dependencies between the code being tested and other units or data
spaces in the product. These dependencies can then be eliminated.
A coder generally uses a UnitTest Framework to develop automated test cases. Using an
automation framework, the developer codes criteria into the test to verify the correctness
of the code. During execution of the test cases, the framework logs failing test cases.
Many frameworks will also automatically flag and report, in summary, these failed test
cases. Depending on the severity of a failure, the framework may halt subsequent testing.
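As a small illustration of this framework-based approach, the sketch below uses Python's standard unittest module; the add_item function and its expected values are invented purely for this example.

    import unittest

    def add_item(cart, name, price):
        """Unit under test (hypothetical): adds an item and returns the new total."""
        if price < 0:
            raise ValueError("price must be non-negative")
        cart.append((name, price))
        return sum(p for _, p in cart)

    class TestAddItem(unittest.TestCase):
        def test_total_is_updated(self):
            # Criterion coded into the test: the framework compares actual vs expected.
            self.assertEqual(add_item([("pen", 2.0)], "book", 10.0), 12.0)

        def test_negative_price_rejected(self):
            # Failing behaviour is flagged and reported by the framework automatically.
            with self.assertRaises(ValueError):
                add_item([], "book", -1.0)

    if __name__ == "__main__":
        unittest.main()   # the framework logs passing and failing test cases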
The workflow of Unit Testing is 1) Create Test Cases 2) Review/Rework 3) Baseline 4)
Execute Test Cases.
The Unit Testing Techniques are mainly categorized into three parts which are Black box
testing that involves testing of user interface along with input and output, White box testing that
involves testing the functional behaviour of the software application and Gray box testing that is
used to execute test suites, test methods, test cases and performing risk analysis.
Code coverage techniques used in Unit Testing are listed below:
Statement Coverage
Decision Coverage
Branch Coverage
Condition Coverage
Finite State Machine Coverage
4 (a) Describe the top-down and bottom-up approaches in integration testing discuss about
the merits and limitation of these approaches.
o This testing technique deals with how higher-level modules are tested with lower-level
modules until all the modules have been tested successfully.
o In the top-down method, we also make sure that each module we add is a child of the
previously added one; for example, Child C is a child of Child B.
o The purpose of executing top-down integration testing is to detect the significant design
flaws and fix them early because required modules are tested first.
Advantages:
Disadvantages:
o This type of testing method deals with how lower-level modules are tested with higher-
level modules until all the modules have been tested successfully.
o In bottom-up testing, the top-level critical modules are tested last; hence a defect in them
may be detected late.
o In simple words, we add the modules from the bottom to the top and test the data flow
in the same order.
In the bottom-up method, we ensure that each module we add is the parent of the
previously tested one.
Advantages:
Disadvantages:
Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
An early prototype is not possible
(b) Suppose you are developing an online system for a specific vendor of the electronic
equipment with all the necessary features to run the Shop. Write down a detailed test plan
by including the necessary components
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product. Test
Plan helps us determine the effort needed to validate the quality of the application under test. The
test plan serves as a blueprint to conduct software testing activities as a defined process, which is
minutely monitored and controlled by the test manager.
As per ISTQB definition: “Test Plan is A document describing the scope, approach, resources,
and schedule of intended test activities.”
At this stage, you are convinced that a test plan drives a successful testing process. Now, you
must be thinking ‘How to write a good test plan?’ To create and write a good test plan you can
use a test plan software. Also, We can write a good software test plan by following the below
steps:
The first step towards creating a test plan is to analyze the product, its features and
functionalities to gain a deeper understanding. Further, explore the business requirements and
what the client wants to achieve from the end product. Understand the users and use cases to
develop the ability of testing the product from user’s point of view.
Once you have analyzed the product, you are ready to develop the test strategy for different test
levels. Your test strategy can be composed of several testing techniques. Keeping the use cases
and business requirements in mind, you decide which testing techniques will be used.
For example, if you are building a website which has thousands of online users, you will include
‘Load Testing’ in your test plan. Similarly, if you are working on e-commerce website which
includes online monetary transactions, you will emphasize on security and penetration testing.
3. Define Scope
A good test plan clearly defines the testing scope and its boundaries. You can use requirements
specifications document to identify what is included in the scope and what is excluded. Make a
list of ‘Features to be tested’ and ‘Features not to be tested’. This will make your test plan
specific and useful. You might also need to specify the list of deliverables as output of your
testing process.
The term ‘scope’ applies to functionalities as well as on the testing techniques. You might need
to explicitly define if any testing technique, such as security testing, is out of scope for your
product. Similarly, if you are performing load testing on an application, you need to specify the
limit of maximum and minimum load of users to be tested.
4. Develop a Schedule
With the knowledge of testing strategy and scope in hand, you are able to develop schedule for
testing. Divide the work into testing activities and estimate the required effort. You can also
estimate the required resources for each task. Now, you can include test schedule in your testing
plan which helps you to control the progress of testing process.
A good test plan clearly lists down the roles and responsibilities of testing team and team
manager. The section of ‘Roles and Responsibilities’ along with ‘schedule’ tells everyone what
to do and when to do.
6. Anticipate Risks
Your test plan is incomplete without anticipated risks, mitigation techniques and risk responses.
There are several types of risks in software testing such as schedule, budget, expertise,
knowledge. You need to list down the risks for your product along with the risk responses and
mitigation techniques to lessen their intensity.
Different people may come up with different sections to be included in testing plan. But who will
decide what is the right format? How about using IEEE Standard test plan template to assure that
your test plan meets all the necessary requirements?
Usage of standardized templates will bring more confidence and professionalism to your team.
Let’s have a look at the details to know how you can write a test plan according to IEEE 829
standard. Before that, we need to understand what is IEEE 829 standard?
IEEE has specified eight stages in the documentation process, producing a separate
document for each stage.
According to IEEE 829 test plan standard, following sections goes into creating a testing plan:
1. Test Plan Identifier
As the name suggests, ‘Test Plan Identifier’ uniquely identifies the test plan. It identifies the
project and may include version information. In some cases, companies might follow a
convention for a test plan identifier. Test plan identifier also contains information of the test plan
type. There can be the following types of test plans:
Master Test Plan: A single high level plan for a project or product that combines all
other test plans.
Testing Level Specific Test Plans: A test plan can be created for each level of testing i.e.
unit level, integration level, system level and acceptance level.
Testing Type Specific Test Plans: Plans for major types of testing like Performance
Testing Plan and Security Testing Plan.
Example Test Plan Identifier: ‘Master Test plan for Workshop Module TP_1.0’
2. Introduction
Introduction contains the summary of the testing plan. It sets the objective, scope, goals and
objectives of the test plan. It also contains resource and budget constraints. It will also specify
any constraints and limitations of the test plan.
3. Test items
Test items list the artifacts that will be tested. It can be one or more module of the
project/product along with their version.
4. Features to be tested
In this section, all the features and functionalities to be tested are listed in detail. It shall also
contain references to the requirements specifications documents that contain details of features to
be tested.
5. Features not to be tested
This section specifies the features and functionalities that are out of the scope for testing. It shall
contain reasons of why these features will not be tested.
6. Approach
In this section, approach for testing will be defined. It contains details of how testing will be
performed. It contains information of the sources of test data, inputs and outputs, testing
techniques and priorities. The approach will define the guidelines for requirements analysis,
develop scenarios, derive acceptance criteria, construct and execute test cases.
7. Item Pass/Fail Criteria
This section describes the success criteria for evaluating the test results. It describes the success
criteria in detail for each functionality to be tested.
8. Suspension Criteria and Resumption Requirements
It will describe any criteria that may result in suspending the testing activities and subsequently
the requirements to resume the testing process.
9. Test deliverables
Test deliverables are the documents that will be delivered by the testing team at the end of
testing process. This may include test cases, sample data, test report, issue log.
10. Testing Tasks
In this section, testing tasks are defined. It will also describe the dependencies between any tasks,
resources required and estimated completion time for tasks. Testing tasks may include creating
test scenarios, creating test cases, creating test scripts, executing test cases, reporting bugs,
creating issue log.
12. Responsibilities
In this section of the test plan, roles and responsibilities are assigned to the testing team.
13. Staffing and Training Needs
This section describes the training needs of the staff for carrying out the planned testing activities
successfully.
14. Schedule
The schedule is created by assigning dates to testing activities. This schedule shall be in
agreement with the development schedule to make a realistic test plan.
15. Risks and Contingencies
It is very important to identify the risks, likelihood and impact of risks. Test plan shall also
contain mitigation techniques for the identified risks. Contingencies shall also be included in the
test plan.
16. Approvals
Reducing risks, because bug-free components do not always perform well together as a system.
Preventing as many defects and critical bugs as possible by careful examination.
Verifying the conformance of design, features, and performance with the
specifications stated in the product requirements.
7. Outline the need for test metrics & give any two metrics
Software Testing Metrics are the quantitative measures used to estimate the progress,
quality, productivity and health of the software testing process. The goal of software
testing metrics is to improve the efficiency and effectiveness of the software testing
process and to help make better decisions for the further testing process by providing
reliable data about the testing process. Two example metrics are defect density and the
defect slippage ratio (both discussed later in this unit).
Test automation is the practice of running tests automatically, managing test data,
and utilizing results to improve software quality. It is primarily a quality assurance
measure, but its activities involve the commitment of the entire software production
team. From business analysts to developers and DevOps engineers, getting the most out
of test automation takes the whole team.
The difference between a milestone and a deliverable is that a milestone signifies project progress towards
obtaining its end objectives, a stepping stone that must be reached in order to continue, whereas a deliverable is a
tangible output, such as a document, a build, or a report, that is handed over on completion of a task or phase.
11.What is walkthrough?
Walkthrough in software testing is used to review documents with peers, managers, and fellow team
members who are guided by the author of the document to gather feedback and reach a consensus. A
walkthrough can be pre-planned or organised based on the needs.
12. Summarize the reasons for selecting the test tool for automation
These skills include scripting, collaboration, source-code management, Kubernetes, security, testing,
observability, monitoring, and network awareness (among others).
14. Can you make the comparison between metrics and measurement?
Metrics and measurements are similar enough that the two terms are commonly used interchangeably.
The key difference is that a metric is based on standardized procedures, calculation methods and
systems for generating a number. A measurement could be taken with a different technique each
time.
17. Give the formula for defects per 100 hours of testing.
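A commonly used form of this metric (stated here as the generally accepted formula rather than a quotation from a particular standard) is:
Defects per 100 hours of testing = (Total number of defects found / Total hours of testing) x 100
For example, 40 defects found over 200 hours of testing gives (40 / 200) x 100 = 20 defects per 100 hours of testing.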
PART –B
1.Describe briefly about the various types of test automation and
scope of automation?
TEST AUTOMATION:
Automation testing, or more accurately test automation, refers to the automation of execution of
test cases and comparing their results with the expected results. That’s a standard definition that
you might find everywhere on the internet. So, let's make it more clear with an example. As you
know, manual testing is performed by humans, who write each test case separately and then
execute them carefully, whereas automation testing is performed with the help of an automation tool to
run the test cases.
It is widely used to automate repetitive tasks and other testing tasks that are difficult to execute
manually. Also, it supports both functional and non-functional testing.
But why should you use automation testing rather than manual testing? Well, there are multiple
reasons for that, such as:
Manual testing for all the workflows and fields is very time-consuming and costly.
Testing various sites manually is very difficult and complex.
Manual testing requires repeated human intervention whereas automation doesn’t.
With automation, the speed of test execution as well as test coverage increases.
These points are enough to give you an idea about why you need automated testing rather than
manual testing. However, that doesn’t mean you have to or should automate every test case; there
is a specific criterion for automating test cases.
1. Unit Testing:
In unit testing, the individual components/units of a web application are tested. In general, unit tests are written by
developers, but automation testers can also write them. Unit testing of a web app is performed during the
development phase. It is also considered as the first level of web app testing.
2. Smoke Testing:
Smoke testing is performed to examine whether the deployed build is stable or not. In short,
verifying the working process of essential features so that testers can proceed with further
testing.
3. Functional Testing:
Functional testing is performed to analyze whether all the functions of your web app works
as expected or not. The sections covered in functional testing involves user interface, APIs,
database, security, client/server applications, and overall functionality of your website.
4. Integration Testing:
In integration testing, the application modules are integrated logically and then tested as a
group. It focuses on verifying the data communication between different modules of your
web app.
5. Regression Testing:
Regression testing is performed to verify that a recent change in code doesn’t affect the
existing features of your web app. In simple terms, it verifies that the old code works in the
same way as they were before making new changes.
Apart from the above testing types, there are some other automated tests as well that need
to be executed, such as data-driven testing, black box testing, keyword testing, etc.
Ever since technology began progressing at a speedy pace, the demand for getting projects
done quicker has increased more than ever. To get projects done fast, the complete set of
procedures followed during a software life cycle needs to be accelerated as well. In
the area of software testing, automation can be implemented to save cost and time, but only
for tasks where it pays off; for regression testing and large-scale testing, automation testing
is the way to go and can be a good choice.
Test automation brings a number of important advantages: it increases software quality,
lessens manual software testing operations, eradicates redundant testing effort, creates more
systematic and repeatable software tests, minimises repetitive work, and generates more
consistent testing outcomes with higher consistency.
Open-Source Tools:
Open source tools are the program wherein the source code is openly published for
use and/or modification from its original design, free of charge.
Open-source tools are available for almost any phase of the testing process, from Test
Case management to Defect tracking. Compared to commercial tools Open source
tools may have fewer features.
Commercial Tools:
Commercial tools are the software which are produced for sale or to serve commercial
purposes.
Commercial tools have more support and more features from a vendor than open-
source tools.
Custom Tools:
In some testing projects, the testing environment and the testing process have special
characteristics. No open-source or commercial tool can meet the requirement.
Therefore, the Test Manager has to consider the development of the custom tool.
Example: You want to find a Testing tool for the project Guru99 Bank. You want this tool
to meet some specific requirement of the project
Step 1) Identify the requirement for tools:
You need to precisely identify your test tool requirements. All the requirements must
be documented and reviewed by project teams and the management board.
Step 2) Evaluate the Tools and Vendors:
After baselining the requirement of the tool, the Test Manager should
Analyze the commercial and open source tools that are available in the market,
based on the project requirement.
Create a tool shortlist which best meets your criteria
One factor you should consider is vendors. You should consider the vendor’s
reputation, after sale support, tool update frequency, etc. while taking your
decision.
Evaluate the quality of the tool by taking the trial usage & launching a pilot.
Many vendors often make trial versions of their software available for download
Example: After spending considerable time to investigate testing tools, the project team
found the perfect testing tool for the project Guru99 Bank website. The evaluation
results concluded that this tool could
However, after discussing with the software vendor, you found that the cost of this tool
is too high compared to the value and benefit that it can bring to the team.
In such a case, the balance between cost & benefit of the tool may affect the final
decision.
Have a strong awareness of the tool. It means you must understand the strong
points and the weak points of the tool.
Even with hours spent reading software manual and vendor information, you may still
need to try the tool in your actual working environment before buying the license.
You should have the meeting with the project team, consultants to get the deeper
knowledge of the tool.
Your decision may adversely impact the project, the testing process, and the business
goals; you should spend a good time to think hard about it.
4. (a) List the generic requirements for test tool. Explain with
suitable examples?
Testing Tools:
Tools from a software testing context can be defined as a product that supports one or
more test activities right from planning, requirements, creating a build, test execution,
defect logging and test analysis.
Classification of Tools
Tools can be classified based on several parameters. They include:
The purpose of the tool
The Activities that are supported within the tool
The Type/level of testing it supports
The Kind of licensing (open source, freeware, commercial)
The technology used
Types of Tools:
Testing Metrics:
Testing Metrics are the quantitative measures used to estimate the progress, quality,
productivity and health of the software testing process. The goal of software testing
metrics is to improve the efficiency and effectiveness in the software testing process
and to help make better decisions for further testing process by providing reliable data
about the testing process.
A Metric defines in quantitative terms the degree to which a system, system component,
or process possesses a given attribute. The ideal example to understand metrics would
be a weekly mileage of a car compared to its ideal mileage recommended by the
manufacturer.
Productivity Metrics:
Test case execution productivity metrics
Test case preparation productivity metrics
Defect metrics
Defects by priority
Defects by severity
Defect slippage ratio
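As a rough illustration of two of the metrics listed above (exact definitions vary between organizations, so the formulas below are common forms stated as assumptions), consider the following sketch with invented figures:

    # Hypothetical figures, for illustration only.
    test_cases_executed = 120
    execution_effort_hours = 60
    defects_slipped_to_production = 4
    defects_found_in_testing = 96

    # Test case execution productivity: test cases executed per unit of effort.
    execution_productivity = test_cases_executed / execution_effort_hours        # 2.0 per hour

    # Defect slippage ratio: share of defects that escaped the testing phase.
    slippage_ratio = defects_slipped_to_production / (
        defects_found_in_testing + defects_slipped_to_production)                # 0.04
    print(execution_productivity, slippage_ratio)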
Software Testing Metrics are useful for evaluating the health, quality, and progress of a software
testing effort. Without metrics, it would be almost impossible to quantify, explain, or demonstrate
software quality. Metrics also provide a quick insight into the status of software testing efforts,
hence resulting in better control through smart decision making. Traditional software testing metrics
were based on defects and were used to measure the team's effectiveness. They usually
revolved around the number of defects that leaked to production (termed Defect Leakage), or
the defects that were missed during a release, which reflect the team's ability and product knowledge.
Another team metric was the percentage of valid and invalid defects. These metrics
can also be captured at an individual level, but are generally measured at a team level.
Software testing metrics have always been an integral part of software testing projects, but the
nature and type of metrics collected and shared have changed over time. Tracking them
consistently brings better visibility into, and control over, the testing effort.
The general software testing metrics are divided into the following three
categories:
Coverage: It refers to the meaningful parameters for measuring test scope and test success
Progress: Deals with the parameters that help identify test progress to be matched against
success criteria. This metrics is collected iteratively over time and measures metrics like Time
to fix defects, Time to test, etc.
Quality: is used to obtain meaningful measures of excellence, worth, value, etc. of the
testing product and it is difficult to measure it directly
6 (b) What are the steps involved in a metrics program. Briefly explain
each step?
Decision criteria for control type metrics usually take the form of thresholds,
variances or control limits.
Evaluate type metrics (i.e., "is it good enough?") may be: "no more than x% failures, with
2/3 minor and 1/3 major".
For predict and evaluate metrics, it is the "level of confidence in a given result"
part of the standard that applies.
Step 9 – Define report mechanism:
This includes defining the report format (table, charts, etc.), data extraction and
reporting cycle (how often data are extracted and the report generated), reporting
mechanisms (the way the report is delivered (hard copy, email, published, etc.),
distribution (who receives the report), and availability (restrictions on metrics access).
Software is tested based on its quality, scalability, features, security, and performance,
including other essential elements. It's common to detect defects and errors in a
software testing process. However, developers must ensure they are taken care of
before launching it to the end-users. This is because fixing an error at an early stage will
cost significantly less than rectifying it at a later stage.
The process of defect detection ensures developers that the end product comprises all
the standards and demands of the client. To ensure the perfection of software, software
engineers follow the defect density formula to determine the quality of the software.
The role of defect density is extremely important in the Software Development Life Cycle
(SDLC). First, it is used to identify the number of defects in the software relative to its size.
Second, it helps the testing team decide whether an additional inspection team is needed for
re-engineering and replacements.
Defect density also makes it easier for developers to identify components prone to
defects in the future. As a result, it allows testers to focus on the right areas and give
the best investment return at limited resources.
Module 1 = 5 bugs
Module 2= 10 bugs
Module 3= 20 bugs
Module 4= 15 bugs
Module 5= 5 bugs
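To complete the example, a hypothetical size figure is assumed below (the line counts are not part of the original data); defect density is conventionally computed as the total number of defects divided by the size in KLOC:

    # Hypothetical sizes; defect density = total defects / size in KLOC.
    bugs_per_module = {"Module 1": 5, "Module 2": 10, "Module 3": 20,
                       "Module 4": 15, "Module 5": 5}
    total_bugs = sum(bugs_per_module.values())   # 55
    total_kloc = 11                              # assumed: 11,000 lines of code in total
    defect_density = total_bugs / total_kloc     # 5.0 defects per KLOC
    print(defect_density)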
The use of defect density is considerable in many ways. Once developers establish a set of
common defects, they can use this model to predict the remaining defects. Using
this method, developers can build a database of common defect densities to
determine the productivity and quality of the product.
8.Explain the different types of Test defect metrics under Progress metrics
based on what they measure and what area they focus on.
The test progress metrics discussed in the previous section capture the progress of defects found with
time. The next set of metrics help us understand how the defects that are found can be used to
improve testing and product quality. Not all defects are equal in impact or importance. Some
organizations classify defects by assigning a defect priority (for example, P1, P2, P3, and so on). The
priority of a defect provides a management perspective for the order of defect fixes. For example, a
defect with priority P1 indicates that it should be fixed before another defect with priority P2. Some
organizations use defect severity levels (for example, S1, S2, S3, and so on). The severity of defects
provides the test team a perspective of the impact of that defect in product functionality. For example,
a defect with severity level S1 means that either the major functionality is not working or the
software is crashing. S2 may mean a failure or functionality not working. A sample of what different
priorities and severities mean is given in Table 17.3. From the above example it is clear that priority
is a management perspective and priority levels are relative. This means that the priority of a defect
can change dynamically once assigned. Severity is absolute and does not change often as they reflect
the state and quality of the product. Some organizations use a combination of priority and severity to
classify the defects.
Critical: Basic functionality of the product not working. Needs to be fixed before the next test cycle starts.
Important: Extended functionality of the product not working. Does not affect the progress of testing; fix it before the release.
Minor: Product behaves differently. No impact on the test team or customers; fix it when time permits.
Cosmetic: Minor irritant. Need not be fixed for this release.
9. Explain the various generations of automation and the required skills for
each.
There are different "generations of automation." The skills required for automation depend on which
generation of automation the company is in, or desires to be in, in the near future.
The automation of testing is broadly classified into three generations.
First generation—Record and Playback Record and playback avoids the repetitive nature of
executing tests. Almost all the test tools available in the market have the record and playback feature.
A test engineer records the sequence of actions by keyboard characters or mouse clicks and those
recorded scripts are played back later, in the same order as they were recorded. Since a recorded
script can be played back multiple times, it reduces the tedium of the testing function. Besides
avoiding repetitive work, it is also simple to record and save the script. But this generation of tools has
several disadvantages. The scripts may contain hard-coded values, thereby making it difficult to
perform general types of tests. For example, when a report has to use the current date and time, it
becomes difficult to use a recorded script. Handling of error conditions is left to the testers and thus
the played-back scripts may require a lot of manual intervention to detect and correct error conditions.
When the application changes, all the scripts have to be rerecorded, thereby increasing the test
maintenance costs. Thus, when there is frequent change or when there is not much of opportunity to
reuse or re-run the tests, the record and playback generation of test automation tools may not be very
effective.
Second generation—Data-driven This method helps in developing test scripts that generate the set
of input conditions and corresponding expected output. This enables the tests to be repeated for
different input and output conditions. The approach takes almost as much time and effort as developing
the product itself. However, changes to the application do not require the automated test cases to be
changed as long as the input conditions and expected output are still valid. This generation of
automation focuses on input and output conditions using the black box testing approach.
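A minimal sketch of the data-driven idea is given below, using Python's unittest with an invented apply_discount function driven by a small table of input and expected-output values; all values are illustrative assumptions.

    import unittest

    def apply_discount(price, percent):
        """Unit under test (hypothetical): applies a percentage discount."""
        return round(price * (100 - percent) / 100.0, 2)

    # The data table drives the test: new rows add coverage without new code.
    TEST_DATA = [
        # (price, percent, expected)
        (100.0, 10, 90.0),
        (59.99, 0, 59.99),
        (20.0, 50, 10.0),
    ]

    class TestApplyDiscountDataDriven(unittest.TestCase):
        def test_discount_table(self):
            for price, percent, expected in TEST_DATA:
                with self.subTest(price=price, percent=percent):
                    self.assertEqual(apply_discount(price, percent), expected)

    if __name__ == "__main__":
        unittest.main()

New rows in the data table add coverage without changing the test code, which is the main appeal of this generation.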
Automation bridges the gap in skills requirement between testing and development; at times it
demands more skills for test teams.
Third generation—Action-driven This technique enables a layman to create automated tests. There
are no input and expected output conditions required for running the tests. All actions that appear on
the application are automatically tested, based on a generic set of controls defined for automation.
The set of actions are represented as objects and those objects are reused. The user needs to specify
only the operations (such as
log in, download, and so on) and everything else that is needed for those actions are automatically
generated. The input and output conditions are automatically generated and used. The scenarios for
test execution can be dynamically changed using the test framework that is available in this approach
of automation. Hence, automation in the third generation involves two major aspects—"test case
automation” and “framework design.” We will see the details of framework design in the next
section.
From the above approaches/generations of automation, it is clear that different levels of skills are
needed based on the generation of automation selected. The skills needed for automation are
classified into four levels across the three generations, since the third generation of automation
introduces two levels of skills: development of test cases and framework design.
10. What are metrics and measurements? Illustrate the types of product
metrics
1. Direct Measurement:
In direct measurement the product, process or thing is measured directly using standard scale.
2. Indirect Measurement:
In indirect measurement the quantity or quality to be measured is measured using related
parameter i.e. by use of reference.
Metrics:
A metric is a measurement of the degree to which an attribute belongs to a system, product or process. There are
4 functions related to software metrics:
1. Planning
2. Organizing
3. Controlling
4. Improving
1. Quantitative:
Metrics must possess a quantitative nature. It means metrics can be expressed in values.
2. Understandable:
Metric computation should be easily understood; the method of computing the metric should be
clearly defined.
3. Applicability:
Metrics should be applicable in the initial phases of development of the software.
4. Repeatable:
The metric values should be same when measured repeatedly and consistent in nature.
5. Economical:
Computation of metrics should be economical.
6. Language Independent:
Metrics should not depend on any programming language.
1. Product Metrics:
Product metrics are used to evaluate the state of the product, tracing risks and uncovering
prospective problem areas. The ability of the team to control quality is evaluated.
2. Process Metrics:
Process metrics pay particular attention on enhancing the long term process of the team or
organization.
3. Project Metrics:
Project metrics describe the project characteristics and the execution process.
Number of software developer
Staffing pattern over the life cycle of software
Cost and schedule
Defects get detected by the testing team and get fixed by the development team. In line with this thought,
defect metrics are further classified in to test defect metrics (which help the testing team in analysis of
product quality and testing) and development defect metrics (which help the development team in analysis
of development activities).
How many defects have already been found and how many more defects may get unearthed are two
parameters that determine product quality and its assessment. For this assessment, the progress of testing
has to be understood. If only 50% of testing is complete and if 100 defects are found, then, assuming that
the defects are uniformly distributed over the product (and keeping all other parameters same), another
80–100 defects can be estimated as residual defects. Figure 17.6 shows testing progress by plotting the
test execution status and the outcome.
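A back-of-the-envelope version of this estimate, under the strong uniform-distribution assumption stated above (the figures are the ones used in the text), is sketched below.

    # Assumes defects are uniformly distributed across the product.
    testing_complete = 0.5        # 50% of planned testing executed
    defects_found = 100
    estimated_total = defects_found / testing_complete   # 200 defects overall
    residual_defects = estimated_total - defects_found   # about 100 still to be found
    print(residual_defects)

This is consistent with the 80 to 100 residual defects quoted above, the difference reflecting how conservatively the uniformity assumption is applied.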
The progress chart gives the pass rate and fail rate of executed test cases, pending test cases, and test
cases that are waiting for defects to be fixed. Representing testing progress in this manner makes it
easy to understand the status and to carry out further analysis. In Figure 17.6 (the coloured figure is
available in the Illustrations), the “not run” cases reduce in number as the weeks progress, meaning that more tests are
being run. Another perspective from the chart is that the pass percentage increases and fail percentage
decreases, showing the positive progress of testing and product quality. The defects that are blocking the
execution of certain test cases also get reduced in number as weeks progress in the above chart. Hence, a
scenario represented by such a progress chart shows that not only is testing progressing well, but also that
the product quality is improving (which in turn means that the testing is effective). If, on the other hand,
the chart had shown a trend that as the weeks progress, the “not run” cases are not reducing in number, or
“blocked” cases are increasing in number, or “pass” cases are not increasing, then it would clearly point
to quality problems in the product that prevent the product from being ready for release.
1. System to be tested :
o This is the first component of an automation infrastructure. The subsystem of the system
to be tested must be stable; otherwise, test automation will not be cost-effective.
2. Test Platform :
o The test platform and facilities, that is, the network setup, on which the system will be
tested, must be in place to carry out the test automation project.
o For example, configuration management utilities, servers, clients, routers and switches
and hubs are necessary to set up the automation environment to execute the test scripts.
3. Test Case Library :
o It is useful to compile libraries of reusable test steps of basic utilities to be used as the
building blocks of automated test scripts.
o Each utility typically performs a distinct task to assist the automation of test cases.
o Examples of such utilities are ssh (secure shell) from client to server, response capture,
error logging, clean up and setup.
4.Tools :
o Different types of tools are required for the development of test scripts.
o Examples of such tools are test automation tools, traffic generation tool, traffic
monitoring tool and support tool.
o The support tools include test factory, requirement analysis, defect tracking, and
configuration management tools.
o Integration of test automation and support tools is critical for the automatic reporting of
defects for failed test cases.
o Similarly, the test factory tool can generate automated test execution trends and result
patterns.
5. Automated Testing Practices :
o The procedures describing how to automate test cases using test tools and test case
libraries must be documented.
o A template of an automated test case is useful in order to have consistency across all the
automated test cases developed by different engineers.
o A list of all the utilities and guidelines for using them will enable us to have better
efficiency in test automation.
o In addition, the maintenance procedure for the library must be documented.
6. Administrator :
o The automation framework administrator (i) manages test case libraries, test platforms,
and test tools, (ii) maintains the inventory of templates, (iii) provides tutorials, and (iv) helps
test engineers in writing test scripts using the test case libraries.
o In addition the administrator provides tutorial assistance to the users of test tools and
maintains a liaison with the tool vendors and the users.
Functional testing
Functional testing assesses the software against the set functional requirements/specifications. It
focuses on what the application does and mainly involves black box testing.
Black box testing is also known as behavioral testing and involves testing functionality of elements
without delving into its inner workings. This means that the tester is completely unaware of the
structure or design of the item being tested.
Functional testing focuses primarily on testing the main functions of the system, its basic usability,
its accessibility to users, and the like. Unit testing, integration testing, smoke testing, and user
acceptance testing are all examples of functional testing.
Unit testing
Unit testing involves running tests on individual components or functions in isolation to verify that
they are working as required. It is typically done in the development phase of the application and is
therefore often the first type of automated testing done on an application.
Unit testing is usually performed by the developer and always comes before integration testing.
Unit tests are extremely beneficial because they help identify bugs early in the development phase,
keeping the cost of fixing them as low as possible.
Unit-testing techniques can be broken down into three broad categories:
Black box testing: This involves UI testing along with input and output.
White box testing: This tests the functional behavior of the application
Gray box testing: This testing involves executing test cases, test suites, and performing risk
analysis.
Integration testing
Integration testing involves testing all the various units of the application in unity. It focuses on
evaluating whether the system as a whole complies with the functional requirements set for it.
Integration testing works by studying how the different modules interact with each other when
brought together.
Integration testing typically follows unit testing and helps ensure seamless interaction between the
various functions to facilitate a smooth functioning software as a whole.
There are various approaches to integration testing such as the Big Bang Approach, the Top-Down
Approach, the Bottom-Up Approach, and the Sandwich Approach.
Non-functional testing
This testing encompasses testing all the various non-functional elements of an application such as
performance, reliability, usability, etc.
It is different from functional testing in that it focuses on not what the product does but how well it
does it.
Typically, non-functional testing follows functional testing because it is only logical to know that the
product does what it is supposed to before investigating how well it does it.
Some of the most common types of non-functional testing include performance testing, reliability
testing, security testing, load testing, scalability testing, compatibility testing, etc.
(ii)
Since technology is advancing at a rapid pace, the demand for completing projects faster
has increased like never before. To complete projects quickly, the entire set of procedures followed
during a product life cycle needs to be accelerated as well. In the area of software testing, automation
can be implemented to save cost and time, but only when it is used for time-consuming tasks. For
regression testing and large-scale testing, automated testing is the best approach and tends to be a
good choice.
There are several vital advantages of test automation: it increases product quality, reduces
manual testing operations, removes redundant testing effort, creates more systematic and
repeatable software tests, minimises monotonous work, and produces more consistent testing results
with higher reliability.
The scope of automation means the area of your Application under Test that will be automated.
Make sure you have walked through, and know exactly, your team's test state, the amount of test
data, and also the environment where the tests take place. The following are additional tips to help
you decide the scope:
Technical feasibility
The complexity of test cases
The features or functions that are significant for the business
The degree to which business components are reused
The ability to run the same test cases for cross-browser testing
14. Outline project, product and productivity metrics with relevant examples.
PART – C
The Automation Framework Design Challenge: Balance Quality, Time, and Resources
The challenge is to build a fit-for-purpose automation framework that is capable of keeping
up with quickly changing automation testing technologies and changes in the system under
test. The challenge is accentuated by the various combinations that are possible using the
wide gamut of available automation tools. Making the right choices in the preliminary design
stage is the most critical step of the process, since this can be the differentiator between a
successful framework and failed investment.
As if this were not tough enough, add to this the even more formidable challenge of
balancing the quality of the framework against the desired utility and the need to develop the
framework within a stipulated timeframe using available resources to ensure the economic
viability of the solution. Therefore, it is very important to benchmark the framework, the
associated development time, and the required resources to ensure the framework's quality
justifies the use of the framework.
(b) List and discuss how metrics can be used for defect prevention.
Defect Prevention is basically defined as a measure to ensure that defects detected so
far do not appear or occur again. The coordinator is mainly responsible for facilitating
communication among team members and for planning and devising defect prevention
guidelines.
The coordinator is also responsible for leading defect prevention efforts, facilitating meetings,
and facilitating communication between team members and management. The DP (defect
prevention) board generally has a quarterly plan in which it sets some goals at the organization
level. To achieve these goals, various methods or activities are carried out.
Methods of Defect Prevention:
For defect prevention, there are different methods that are generally used over a long period of time. These methods or activities include the following:
Software requirement analysis
Reviews and inspections
Defect logging and documentation
Root cause analysis
2. (a) List the requirements for test tool. Explain any five requirements with a suitable
example.
Software testing tools are used to improve the quality of an application or software product. That is why so many tools are available in the market, some open-source and some paid.
The significant difference between open-source and paid tools is that open-source tools often have limited features, whereas paid or commercial tools generally do not have such limitations. The selection of a tool, whether paid or free, depends on the user's requirements.
Software testing tools can be categorized by licensing (paid or commercial, open-source), technology usage, type of testing, and so on.
With the help of testing tools, we can improve software performance, deliver a high-quality product, and reduce the duration of testing that is otherwise spent on manual effort.
The software testing tools can be divided into the following:
o Test management tool
o Bug tracking tool
o Automated testing tool
o Performance testing tool
o Cross-browser testing tool
o Integration testing tool
o Unit testing tool
o Mobile/android testing tool
o GUI testing tool
o Security testing tool
Test management tool
Test management tools are used to keep track of all testing activity, support fast data analysis, manage manual and automated test cases across the various test environments, and plan and maintain manual testing as well.
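As an illustration only, the sketch below shows the kind of structured record a test management tool keeps for each test case. The field names (case_id, environment, status, and so on) are assumptions made for this example and do not correspond to the schema of any particular tool.

from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    # Illustrative fields that a test management tool might track per test case.
    case_id: str
    title: str
    test_type: str            # e.g. "manual" or "automated"
    environment: str          # e.g. "Windows 11 / Chrome"
    status: str = "not run"   # "passed", "failed", or "not run"
    defects: list = field(default_factory=list)

# Plan a case, then record its outcome after execution.
tc = TestCaseRecord("TC-101", "Login with valid credentials", "manual", "Windows 11 / Chrome")
tc.status = "failed"
tc.defects.append("DEF-42")
print(tc)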
Reviews are development and maintenance activities that require time and resources. They
should be planned so that there is a place for them in the project schedule. An organization
should develop a review plan template that can be applied to all software projects. The template should specify the following items for inclusion in the review plan (a minimal illustrative sketch of such a template follows the list below).
• review goals;
• training requirements;
• review steps;
• time requirements;
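As a minimal sketch only, the template items listed above could be captured in a simple structure such as the following; every value shown is an invented placeholder rather than an organizational standard.

# Minimal, illustrative review plan skeleton; field names follow the template
# items listed above, and all values are invented placeholders.
review_plan = {
    "item_under_review": "design document for component X (hypothetical)",
    "review_goals": [
        "identify defects in the component design",
        "check conformance to organizational design standards",
    ],
    "training_requirements": ["inspection-process refresher for new reviewers"],
    "review_steps": ["preparation", "review meeting", "rework", "follow-up"],
    "time_requirements": {"preparation_hours": 2, "meeting_hours": 1.5},
}

for item, value in review_plan.items():
    print(item, ":", value)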
As in the test plan or any other type of plan, the review planner should specify the goals to be
accomplished by the review. Some general review goals have been stated in Section 9.0 and
include (i) identification of problem components or components in the software artifact that
need improvement, (ii) identification of specific errors or defects in the software artifact, (iii)
ensuring that the artifact conforms to organizational standards, and (iv) communication to the
staff about the nature of the product being developed. Additional goals might be to establish
traceability with other project documents, and familiarization with the item being reviewed.
Goals for inspections and walkthroughs are usually different; those of walkthroughs are more
limited in scope and are usually confined to identification of defects.
Preconditions and Items to Be Reviewed
• requirements documents;
• design documents;
• code;
• test plans (for the multiple levels);
Note that many of these items represent a deliverable of a major life cycle phase. In fact,
many represent project milestones and the review serves as a progress marker for project
progress. Before each of these items is reviewed, certain preconditions usually have to be
met. For example, before a code review is held, the code may have to undergo a successful
compile. The preconditions need to be described in the review policy statement and specified
in the review plan for an item. General preconditions for a review are:
(iii) the individuals responsible for developing the reviewed item indicate
readiness for the review;
(iv) the review leader believes that the item to be reviewed is sufficiently complete for the
review to be useful [8].
The review planner must also keep in mind that a given item to be reviewed may be too large
and complex for a single review meeting. The smart planner partitions the review item into
components that are of a size and complexity that allows them to be reviewed in 1-2 hours.
This is the time range in which most reviewers have maximum effectiveness. For example,
the design document for a procedure-oriented system may be reviewed in parts that
encompass:
(iii) component design.
If the architectural design is complex and/or the number of components is large, then multiple
design review sessions should be scheduled for each. The project plan should have time
allocated for this.
3. Assume you are working on an on-line fast food restaurant system. The system reads customer orders, relays orders to the kitchen, calculates the customer's bill, and gives change. It also maintains inventory information. Each wait person has a terminal. Only authorized wait persons and a system administrator can access the system. Describe the tests that are suitable to test the application.
4. (a) Explain the five stop test criteria that are based on quantitative approach.
In the test plan the test manager describes the items to be tested, test cases, tools needed,
scheduled activities, and assigned responsibilities. As the testing effort progresses many
factors impact on planned testing schedules and tasks in both positive and negative ways. For
example, although a certain number of test cases were specified, additional tests may be
required. This may be due to changes in requirements, failure to achieve coverage goals, and
unexpectedly high numbers of defects in critical modules. Other unplanned events that impact
on test schedules are, for example, laboratories that were supposed to be available are not
(perhaps because of equipment failures) or testers who were assigned responsibilities are
absent (perhaps because of illness or assignments to other higher-priority projects). Given these events and uncertainties, test progress often does not follow the plan. Test managers and
staff should do their best to take actions to get the testing effort on track. In any event,
whether progress is smooth or bumpy, at some point every project and test manager has to
make the decision on when to stop testing. Since it is not possible to determine with certainty
that all defects have been identified, the decision to stop testing always carries risks. If we
stop testing now, we do save resources and are able to deliver the software to our clients.
However, there may be remaining defects that will cause catastrophic failures, so if we stop
now we will not find them. As a consequence, clients may be unhappy with our software and
may not want to do business with us in the future. Even worse there is the risk that they may
take legal action against us for damages. On the other hand, if we continue to test, perhaps
there are no defects that cause failures of a high severity level. Therefore, we are wasting
resources and risking our position in the marketplace. Part of the task of monitoring and controlling the testing effort is making this decision about when testing is complete under conditions of uncertainty and risk. Managers should not have to use guesswork to make this
critical decision. The test plan should have a set of quantifiable stop-test criteria to support
decision making. The weakest stop test decision criterion is to stop testing when the project
runs out of time and resources. TMM level 1 organizations often operate this way and risk
client dissatisfaction for many projects. TMM level 2 organizations plan for testing and
include stop-test criteria in the test plan. They have very basic measurements in place to
support management when they need to make this decision. Shown in Figure 9.6 and
described below are five stop-test criteria that are based on a more quantitative approach. No one criterion is recommended on its own; in fact, managers should use a combination of criteria and cross-checking for better results. The stop-test criteria are as follows.
1. All the Planned Tests That Were Developed Have Been Executed and Passed.
This may be the weakest criterion. It does not take into account the actual dynamics of the
testing effort, for example, the types of defects found and their level of severity. Clues from
analysis of the test cases and defects found may indicate that there are more defects in the
code that the planned test cases have not uncovered. These may be ignored by the testers if
this stop-test criterion is used in isolation.
2. The Coverage Goals Specified in the Test Plan Have Been Met.
An organization can stop testing when it meets its coverage goals as specified in the test plan.
For example, using white box coverage goals we can say that we have completed unit test
when we have reached 100% branch coverage for all units. Using another coverage category,
we can say we have completed system testing when all the requirements have been covered
by our tests. The graphs prepared for the weekly status meetings can be applied here to show
progress and to extrapolate to a completion date. The graphs will show the growth of degree
of coverage over time.
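As a hedged sketch of applying this criterion, the fragment below compares per-unit branch-coverage figures (invented for the example, as if reported by a coverage tool) against a 100% goal before declaring unit testing complete.

# Hypothetical branch-coverage results per unit, as a coverage tool might report them.
branch_coverage = {
    "billing": 100.0,
    "inventory": 92.5,
    "orders": 100.0,
}

GOAL = 100.0  # stop-test criterion for unit test: 100% branch coverage for all units

below_goal = {unit: cov for unit, cov in branch_coverage.items() if cov < GOAL}

if below_goal:
    print("Continue unit testing; units below the coverage goal:", below_goal)
else:
    print("Coverage-based stop-test criterion met; unit testing can stop.")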
3. The Detection of a Specific Number of Defects Has Been Accomplished.
This approach requires defect data from past releases or similar projects. The defect
distribution and total defects are known for these projects, and are applied to make estimates of
the number and types of defects for the current project. Using this type of data is very risky,
since it assumes the current software will be built, tested, and behave like the past projects.
This is not always true. Many projects and their development environments are not as similar
as believed, and making this assumption could be disastrous. Therefore, using this stop-test criterion on its own carries high risks.
4. The Rates of Defect Detection for a Certain Time Period Have Fallen Below a
Specified Level.
The manager can use graphs that plot the number of defects detected per unit time. A graph
such as Figure 9.5, augmented with the severity level of the defects found, is useful. When
the rate of detection of defects of a severity rating under some specified threshold value falls
below that rate threshold, testing can be stopped. For example, a stop-test criterion could be
stated as: "We stop testing when we find 5 or fewer defects, with impact equal to or below severity level 3, per week." Selecting a defect detection rate threshold can be based on data
from past projects.
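The check itself is simple enough to sketch; the weekly counts below are invented purely to illustrate how the threshold from the example above would be applied.

# Invented data: defects of severity level 3 or lower found in each week of testing,
# with the most recent week last.
defects_per_week = [14, 9, 7, 4]

RATE_THRESHOLD = 5  # stop-test rule: 5 or fewer such defects found in a week

if defects_per_week[-1] <= RATE_THRESHOLD:
    print("Defect detection rate has fallen below the threshold; testing may stop.")
else:
    print("Defect detection rate is still above the threshold; continue testing.")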
5. All of the Seeded Defects Have Been Detected.
Detected seeded defects / Total seeded defects = Detected actual defects / Total actual defects
Using this ratio we can say, for example, if the code was seeded with 100 defects and 50 have
been found by the test team, it is likely that 50% of the actual defects still remain and the
testing effort should continue. When all the seeded defects are found the manager has some
confidence that the test efforts have been completed.
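A short worked sketch of the seeding ratio above (all figures invented): if 100 defects were seeded and only 50 of them have been found, the same 50% detection rate is assumed to hold for the actual defects, so the total number of actual defects can be projected from those found so far.

# Invented figures illustrating the defect-seeding estimate.
total_seeded = 100
seeded_found = 50
actual_found = 130   # actual (non-seeded) defects detected so far

# Assumption behind the criterion:
#   seeded_found / total_seeded == actual_found / total_actual
detection_ratio = seeded_found / total_seeded            # 0.5
estimated_total_actual = actual_found / detection_ratio  # 260
estimated_remaining = estimated_total_actual - actual_found

print(f"Estimated total actual defects: {estimated_total_actual:.0f}")
print(f"Estimated actual defects still remaining: {estimated_remaining:.0f}")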
b) Narrate about the metrics/parameters to be considered for evaluating the software quality.