Testing Questions
Interoperability testing can validate that two products work together even though neither complies with the specification. This often happens when products are tested in pairs but the pairs are never tested together as one set. The situation arises for various reasons, foremost being the lack of universal test cases and unclear testing procedures. The principal risk in this sort of testing is that pair-wise subsets of the products may be interoperable, yet the overall set of products fails to communicate properly.
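The pair-wise risk described above can be sketched in a few lines. The products and protocol revisions below are purely illustrative, not from any real specification: every pair of products shares a common revision, yet the full set of three does not.

```python
from itertools import combinations

# Hypothetical products, each tagged with the protocol revisions it supports.
products = {
    "A": {"v1", "v2"},
    "B": {"v2", "v3"},
    "C": {"v1", "v3"},
}

def interoperate(names):
    """A group interoperates if some protocol revision is common to all members."""
    return bool(set.intersection(*(products[n] for n in names)))

# Every pair shares a revision, so pair-wise testing passes...
pairwise_ok = all(interoperate(pair) for pair in combinations(products, 2))

# ...yet the full set of three has no common revision and fails together.
full_set_ok = interoperate(tuple(products))

print(pairwise_ok, full_set_ok)  # True False
```

This is exactly the gap universal test cases are meant to close: testing sets, not only pairs.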
Outline the main phases found in the waterfall model and provide evidence that supports
this statement in the context of the waterfall when used in practice.
The waterfall model has steps in sequence (below), in which the customer
requirements are progressively refined to the point that coding can take place.
This type of model is also called a linear or sequential model: each
work product or activity is completed before moving on to the next, sequentially.
Testing in the waterfall model is carried out only once the code
has been fully developed. When testing is done, a decision can be made on whether
the product can be released. In effect, it is more of a final quality check.
STEPS
Requirements Spec
Functional Spec
Technical Spec
Program Spec
Coding
Testing
Test methodology comprises the methods to be followed for testing software, that is,
the planning and approach for testing a particular piece of software. Testing strategies
include a) black-box testing and b) white-box testing.
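The two strategies can be contrasted with a small sketch. The function under test below is hypothetical; the point is where each kind of test case comes from, not the function itself.

```python
# A hypothetical function under test: classifies a water temperature in Celsius.
def classify(temp_c):
    if temp_c >= 100:
        return "boiling"
    if temp_c <= 0:
        return "freezing"
    return "liquid"

# Black-box test: derived from the specification alone, with no knowledge
# of how the function is implemented.
def test_black_box():
    assert classify(150) == "boiling"
    assert classify(-5) == "freezing"
    assert classify(20) == "liquid"

# White-box test: written from the code's structure, aiming to exercise
# every branch, including the boundary values where branches switch.
def test_white_box_branches():
    assert classify(100) == "boiling"   # first branch boundary
    assert classify(0) == "freezing"    # second branch boundary
    assert classify(1) == "liquid"      # fall-through branch

test_black_box()
test_white_box_branches()
```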
5. Test Passes
A 'test pass' is another name for a 'test run', 'test cycle', or 'test iteration'. In the software
industry it refers to a build of a software application that has been delivered to the test team for
testing purposes, and denotes the span of time and activities from the time the test
team receives that build until all test execution is finished and the results are
delivered to the stakeholders who make business decisions based on them.
6. How would you build a test with WinRunner? Rational Visual Test?
If a tester has tested only 50% of a program but is able to identify the major bugs and contribute to
improving the quality of the program, then we consider him a good tester.
(For this, the tester should have good functional knowledge and a basic understanding of the program.)
Conversely, if a tester has tested 100% of a program but is still not able to identify the major bugs
or contribute to improving the quality of the program, then that tester is not a good tester.
First of all, you will need to collect the requirements for the mug: the rules for how the
mug should respond to different inputs, like hot water, boiling water, or cold water, and its
expected effect on its environment, which would be its output value. Then basically test
the mug in the different environments while it contains the different types of liquid, to see
whether it meets its requirements and sticks to its environmental/business rules.
If your organization does not have the resources to deliver requirements or specifications
and expects you to perform exploratory or ad-hoc testing, then it is
your responsibility to verify that the mug performs in alignment with company
expectations, whatever they may be.
1) Ask appropriate stakeholders relevant questions that will assist your test effort
a) Business-related: "What is the primary purpose of the mug (consumer use, decoration,
advertisement)?"
b) Technical: "What temperature range is this mug expected to hold?" or "How will the
enamel respond to prolonged exposure to citrus (more acidic) solutions?"
2) Brainstorm possible scenarios (and edge cases) where the mug may be used.
a) Is the mug designed to take up minimum room on a shelf?
b) When the mug is transported can it withstand the pressure of a reasonable load if other
boxes of product are placed on top of it?
3) Consider aesthetics as they are compared to competitive products.
4) Document what you tested and what you did not test.
5) Document your findings so stakeholders can make informed business decisions based
on them.
To conduct my test, I would follow these steps (which vary between industries, companies,
and projects):
1) Understand its business requirements
2) Understand its technical specifications
3) Perhaps call it out in a test plan to determine how cross-functional experts feel about it
4) Ensure the test environment and test tools are set up properly to test it
5) Create test case(s) for it
6) Map its test case(s) to requirements
7) Execute its tests and report defects
8) If defects are found, re-test them when they are fixed
9) Close the issue when test passes
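Steps 7 through 9 above trace a defect through its lifecycle. A minimal sketch of that lifecycle as a state machine follows; the states and transitions are illustrative, since real defect trackers define their own workflows.

```python
# Illustrative defect-lifecycle states: a defect is opened, fixed, re-tested,
# and either closed (re-test passes) or reopened (re-test fails).
TRANSITIONS = {
    "open": {"fixed"},
    "fixed": {"retest"},
    "retest": {"closed", "open"},
    "closed": set(),
}

def advance(state, new_state):
    """Move a defect to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Happy path: the fix works and the defect is closed.
state = "open"
for step in ("fixed", "retest", "closed"):
    state = advance(state, step)
print(state)  # closed
```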
A test plan consists of test cases. You develop test cases according to the requirements and
design documents of the unit, system, etc. You may be asked what you would do if you
are not provided with requirements documents. In that case, you start creating your test cases
based on the functionality of the system. You should mention that this is bad practice, since
there is no way to confirm that what you got is what you planned to create.
11) How do you see a QA role in the product development life cycle?
12) What types of documents would you need for QA, QC, and Testing?
For any type of testing we need the customer requirements document and the functional
specifications document.
The goal of QA is prevention, while the goal of testing is detection (detecting the bugs).
Verification takes place before validation, and not vice versa. Verification evaluates
documents, plans, code, requirements, and specifications. Validation evaluates the product
itself. The inputs to verification are checklists, issue lists, walkthroughs, inspection
meetings, and reviews. The input to validation is the actual testing of the actual
product. The output of verification is a nearly perfect set of documents, plans,
specifications, and requirements.
14) What is the exact difference between Integration & System testing? Give
examples from your project.
Integration testing:
This test begins after two or more programs or application components have been
successfully unit tested. It is conducted by the development team to validate the technical
quality or design of the application. It is the first level of testing that formally integrates a set
of programs that communicate among themselves via messages or files (a client and its
server(s), a string of batch programs, or a set of on-line modules within a dialog or
conversation).
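As a minimal sketch of the idea: two components that pass messages to each other, each assumed to be already unit tested in isolation, with an integration test exercising the interface they must agree on. The components and message format are purely illustrative.

```python
# Producer component: encodes an order as a pipe-delimited message.
def serialize_order(order_id, qty):
    return f"ORDER|{order_id}|{qty}"

# Consumer component: parses a message back into its fields.
def process_message(message):
    kind, order_id, qty = message.split("|")
    assert kind == "ORDER"
    return int(order_id), int(qty)

# Integration test: the unit tests of each component can pass while the
# two still disagree on the message format; this test checks the seam.
def test_producer_consumer_integration():
    msg = serialize_order(42, 3)
    assert process_message(msg) == (42, 3)

test_producer_consumer_integration()
```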
System testing:
During this event the entire system is tested to verify that all functional, informational,
structural, and quality requirements have been met. A predetermined combination of tests
is designed that, when executed successfully, satisfies management that the system meets
specifications. System testing verifies the functional quality of the system in addition to
all external interfaces, manual procedures, restart and recovery, and human-computer
interfaces. It also verifies that interfaces between the application and the open
environment work correctly, that JCL functions correctly, and that the application
functions appropriately with the database management system, the operations environment,
and any communications system.
test cases
Log the defect in the defect management tool and mention that the defect was found during ad-hoc
testing.
Essentially, a test case is a document that carries a test case ID number, a title, the type of test
being conducted, the input, the action or event to be performed, the expected output, and whether
the test case has achieved the desired output (yes/no). Test cases are based on
the test plan, which covers each module and what is to be tested in each module. Further,
each action in a module is divided into testable components, from which the test
cases are derived. Since a test case normally handles a single event at a time, as long as
it reflects its relation to the test plan, it can be called a good test case. It does not
matter whether the event passes or fails; as long as the component to be tested is
addressed and can be related to the test plan, the test case can be called a good test
case.
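The fields listed above can be captured in a simple record. This is only a sketch; the field names are illustrative, since each organization defines its own test-case template.

```python
from dataclasses import dataclass

# Minimal test-case record mirroring the fields described above.
@dataclass
class TestCase:
    case_id: str
    title: str
    test_type: str
    input_data: str
    action: str
    expected_output: str
    achieved: bool = False  # filled in after execution (yes/no)

tc = TestCase(
    case_id="TC-001",
    title="Valid login",
    test_type="functional",
    input_data="user=alice, pwd=secret",
    action="Submit the login form",
    expected_output="User is redirected to the dashboard",
)
tc.achieved = True  # recorded once the expected output was observed
print(tc.case_id, tc.achieved)  # TC-001 True
```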
3) How will you check that your test cases cover all the requirements?
By using a traceability matrix.
A traceability matrix is a matrix showing the relationship between the requirements and the
test cases.
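At its simplest, the matrix is a mapping from requirement IDs to the test cases that cover them; a requirement with no test cases is a coverage gap. The IDs below are illustrative.

```python
# A minimal traceability matrix: requirement -> covering test cases.
matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": ["TC-03"],
    "REQ-03": [],  # no test case yet -> a coverage gap
}

# Checking coverage is then just looking for empty rows.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)  # ['REQ-03']
```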
Test cases are prepared using the FRS (functional requirements specification). Each company
follows its own format. There are actually three types of test cases.
TC 1: successful card insertion.
TC 2: unsuccessful operation due to the card being inserted at the wrong angle.
TC 3: unsuccessful operation due to an invalid account card.
TC 4: successful entry of the PIN number.
TC 5: unsuccessful operation due to the wrong PIN number being entered 3 times.
TC 6: successful selection of language.
TC 7: successful selection of account type.
TC 8: unsuccessful operation due to the wrong account type being selected for the inserted card.
TC 9: successful selection of the withdrawal option.
TC 10: successful selection of amount.
TC 11: unsuccessful operation due to wrong denominations.
TC 12: successful withdrawal operation.
TC 13: unsuccessful withdrawal operation due to the amount being greater than the available balance.
TC 14: unsuccessful due to lack of cash in the ATM.
TC 15: unsuccessful due to the amount being greater than the daily limit.
TC 16: unsuccessful due to the server being down.
TC 17: unsuccessful due to clicking cancel after inserting the card.
TC 18: unsuccessful due to clicking cancel after inserting the card and entering the PIN number.
TC 19: unsuccessful due to clicking cancel after language selection, account type selection, withdrawal selection, and entering the amount.
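The rule behind TC 5 (card blocked after three wrong PIN entries) can be sketched as below. The PIN value, attempt limit, and return values are all illustrative assumptions, not taken from any real ATM specification.

```python
# Sketch of the behaviour TC 5 exercises: three wrong PINs block the card.
class AtmSession:
    MAX_ATTEMPTS = 3  # assumed limit; real ATMs may differ

    def __init__(self, correct_pin):
        self.correct_pin = correct_pin
        self.failures = 0
        self.blocked = False

    def enter_pin(self, pin):
        if self.blocked:
            return "blocked"
        if pin == self.correct_pin:
            self.failures = 0
            return "accepted"
        self.failures += 1
        if self.failures >= self.MAX_ATTEMPTS:
            self.blocked = True
            return "blocked"
        return "retry"

session = AtmSession(correct_pin="4321")
results = [session.enter_pin(p) for p in ("0000", "1111", "2222", "4321")]
print(results)  # ['retry', 'retry', 'blocked', 'blocked']
```

Note the last attempt uses the correct PIN but is still rejected, which is exactly the negative behaviour the test case must assert.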
8) Check that all the numbers/characters on the phone work fine by clicking on them.
9) Remove the user from the phone book and check that the name and phone
number are removed properly.
Thanks
G.Srinivasulu
QA Engineer
6) How do you write test cases for a login window where the user name is editable only up to
8 alpha characters?
3. Check that the first name and last name fields cannot accept anything other than alphabetic characters - negative.
4. Check that the login name field accepts only alphabetic characters, numerics, and special
characters except the dot.
5. Check that the desired login name is created only when it is displayed that the specific
name is available.
6. Check that the desired login name cannot be created with an existing login name -
negative.
16. Check that every time you refresh, or select "I accept", a new captcha is displayed.
20. Check that when any required field is not entered, or is entered with wrong values,
a message is displayed in a red-coloured font.
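The length rule in the question above (a user name of at most 8 alphabetic characters) can be checked with a small validator and a handful of positive and negative cases. The validator is a hypothetical sketch of that one rule, not a full login implementation.

```python
import re

# Hypothetical validator for the rule in the question: 1 to 8 alphabetic
# characters only.
def is_valid_login(name):
    return re.fullmatch(r"[A-Za-z]{1,8}", name) is not None

# Positive cases
assert is_valid_login("alice")
assert is_valid_login("ABCDEFGH")       # exactly 8 characters: boundary
# Negative cases
assert not is_valid_login("ABCDEFGHI")  # 9 characters: one past the limit
assert not is_valid_login("alice1")     # digit not allowed
assert not is_valid_login("")           # empty name
```

Note how the test cases sit on both sides of the 8-character boundary; boundary values are where this kind of rule most often breaks.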
1) Compatibility Testing
Using compatibility testing we can find out whether our application runs correctly across all
compatibility dimensions: browser, computer, OS, etc.
For example: in IE6 the page looks good, but if I go and check the same page in IE7, it does not
look good and shows errors such as "semicolon is missing", "comma is missing", or a JS error.
2) New Requirement
The first thing is to update the requirement traceability matrix and find out the affected areas,
then write test cases.
If it affects other modules, please pass the information to the tester who is working on
that particular module. Also update the integration test cases.
3) Defect Report
The defect report format is
Sr no.
Test case id
Title
Input data
Test case description
Expected result
Actual result
Status
A defect report is written by a test engineer after finding a bug while testing an
Application Under Test (AUT). The defect report will contain the application name,
module name, defect ID, test case ID, author, severity, priority, risk, a brief
description, and the steps to recreate the specified defect (bug) in the Application Under
Test (AUT).
4)
Impact Analysis
Impact analysis is done to find the impacted areas of the application when a new
module is added. Generally the team lead will take the initiative for this.
The team lead will send a mail to the client asking for the impacted areas (if the developer is new
to the domain), and also send mails to the development team and the testing team asking for the
impacted areas. After getting all three responses, the team lead will prepare a consolidated report
of all the mails. This consolidated mail will be given to the test engineer, saying: this is the
impact analysis report, and these are the impacted areas.
Re-testing is the process of testing the fix of a bug in the same version, i.e. checking whether
the bug is fixed in that version.
Regression testing is the process of checking whether the fix of an earlier bug breaks or affects
some other area of the application.
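A small sketch of the distinction: after a hypothetical bug fix, the re-test repeats the exact failing scenario, while the regression tests re-run the existing cases around it. The function and the bug ("discount applied twice") are illustrative.

```python
# Hypothetical function where a bug ("discount applied twice") was just fixed.
def apply_discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# Re-test: repeat the exact scenario that originally failed, on the fixed build.
def test_retest_original_bug():
    assert apply_discount(100.0, 10) == 90.0  # the buggy build returned 81.0

# Regression tests: re-run existing cases to confirm the fix broke nothing else.
def test_regression_no_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_regression_full_discount():
    assert apply_discount(80.0, 100) == 0.0

for t in (test_retest_original_bug,
          test_regression_no_discount,
          test_regression_full_discount):
    t()
```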
1 - The exit criteria mentioned in the test plan document have been achieved.
4 - When we don't have enough time to perform more tests and have achieved a specified
level of quality.
5 - When the cost of fixing a bug is more than the impact of the bug on the system.