FAQ'S of Software Testing: Q1. What Is Verification?
Generally speaking, your resume should tell your story. If you're a college
graduate looking for your first job, a one-page resume is just fine. If you
have a longer story, the resume needs to be longer. Please put your
experience on the resume so resume readers can tell when and for whom
you did what. Short resumes -- for people long on experience -- are not
appropriate. The real audience for these short resumes is people with short
attention spans and low IQ. I assure you that when your resume gets into
the right hands, it will be read thoroughly.
Q17. What makes a good QA/Test Manager?
A: Good QA/Test Managers are familiar with the software development process;
able to maintain the enthusiasm of their team and promote a positive
atmosphere; able to promote teamwork to increase productivity; able to
promote cooperation between Software and Test/QA Engineers; have the
people skills needed to promote improvements in QA processes; have the
ability to withstand pressure and say *no* to other managers when quality
is insufficient or QA processes are not being adhered to; are able to
communicate with technical and non-technical people; and are able to run
meetings and keep them focused.
Q18. What is the role of documentation in QA? A: Documentation plays
a critical role in QA. QA practices should be documented, so that they are
repeatable. Specifications, designs, business rules, inspection reports,
configurations, code changes, test plans, test cases, bug reports, user
manuals should all be documented. Ideally, there should be a system for
easily finding and obtaining documents, and for determining which document
holds a particular piece of information. Use documentation change
management, if possible.
Q19. What about requirements?
A: Requirement specifications are important; indeed, one of the most reliable
ways of ensuring problems in a complex software project is to have poorly
documented requirement specifications. Requirements are the details
describing an application's externally perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive,
attainable and testable. A non-testable requirement would be, for example,
"user-friendly", which is too subjective. A testable requirement would be
something such as, "the product shall allow the user to enter their
previously-assigned password to access the application". Care should be
taken to involve all of a project's significant customers in the requirements
process. Customers could be in-house or external and could include end-
users, customer acceptance test engineers, testers, customer contract
officers, customer management, future software maintenance engineers,
salespeople and anyone who could later derail the project if his or her
expectations aren't met; all of these should be included as customers, if possible.
In some organizations, requirements may end up in high-level project plans,
functional specification documents, design documents, or other documents at
various levels of detail. No matter what they are called, some type of
documentation with detailed requirements will be needed by test engineers in
order to properly plan and execute tests. Without such documentation there
will be no clear-cut way to determine if a software application is performing
correctly.
Q20. What is a test plan?
A: A software project test plan is a document that describes the objectives,
scope, approach and focus of a software testing effort. The process of
preparing a test plan is a useful way to think through the efforts needed to
validate the acceptability of a software product. The completed document will
help people outside the test group understand the why and how of product
validation. It should be thorough enough to be useful, but not so thorough
that no one outside the test group will be able to read it.
Q21. What is a test case?
A: A test case is a document that describes an input, action, or event and its
expected result, in order to determine if a feature of an application is working
correctly. A test case should contain particulars such as a...
1. Test case identifier;
2. Test case name;
3. Objective;
4. Test conditions/setup;
5. Input data requirements/steps, and
6. Expected results.
Please note, the process of developing test cases can help find problems in
the requirements or design of an application, since it requires you to
completely think through the operation of the application. For this reason, it
is useful to prepare test cases early in the development cycle, if possible.
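The particulars listed above can be sketched as a simple record. This is only an illustrative structure under assumed field names, not a mandated format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case, holding the six particulars listed above."""
    identifier: str          # 1. test case identifier
    name: str                # 2. test case name
    objective: str           # 3. objective
    setup: str               # 4. test conditions/setup
    steps: list[str] = field(default_factory=list)  # 5. input data requirements/steps
    expected_result: str = ""                        # 6. expected results

# Hypothetical example based on the testable requirement from Q19.
login_case = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify a user with a valid password can access the application",
    setup="User account exists with a previously-assigned password",
    steps=["Open the login page", "Enter the password", "Click Login"],
    expected_result="The application's main screen is displayed",
)
```

Writing the objective and expected result down in this form forces the author to think through the operation of the application, which is exactly how test case development exposes requirements problems.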
Q22. What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to
developers that can fix it. After the problem is resolved, fixes should be re-
tested. Additionally, determinations should be made regarding requirements,
software, hardware, safety impact, etc., for regression testing to check that
the fixes didn't create other problems elsewhere. If a problem-tracking system is
in place, it should encapsulate these determinations. A variety of commercial,
problem-tracking/management software tools are available. These tools, with
the detailed input of software test engineers, will give the team complete
information so developers can understand the bug, get an idea of its severity,
reproduce it and fix it.
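A minimal sketch of the kind of record a problem-tracking system might hold is shown below. The field names and the status workflow are assumptions for illustration, not any specific tool's schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

@dataclass
class BugReport:
    """Enough detail for a developer to understand, reproduce and fix the bug."""
    bug_id: str
    summary: str
    severity: Severity
    steps_to_reproduce: list[str] = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    status: str = "open"  # open -> assigned -> fixed -> retested -> closed

    def mark_fixed(self) -> None:
        # After the fix, the bug must still be re-tested before closing.
        self.status = "fixed"

bug = BugReport(
    bug_id="BUG-042",
    summary="Login fails with a valid password",
    severity=Severity.CRITICAL,
    steps_to_reproduce=["Enter valid credentials", "Click Login"],
    expected="Main screen is displayed",
    actual="Error page is returned",
)
bug.mark_fixed()
```

The point of the structure is the one made above: with detailed input from test engineers, developers can understand the bug, judge its severity, reproduce it and fix it.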
Q23. What is configuration management?
A: Configuration management (CM) covers the tools and processes used to
control, coordinate and track code, requirements, documentation, problems,
change requests, designs, tools, compilers, libraries, patches, changes made
to them and who makes the changes.
Q24. What if the software is so buggy it can't be tested at all?
A: In this situation the best bet is to have test engineers go through the
process of reporting whatever bugs or problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect
schedules and indicates deeper problems in the software development
process, such as insufficient unit testing, insufficient integration testing, poor
design, improper build or release procedures, managers should be notified
and provided with some documentation as evidence of the problem.
Q25. How do you know when to stop testing?
A: This can be difficult to determine. Many modern software applications are
so complex and run in such an interdependent environment, that complete
testing can never be done. Common factors in deciding when to stop are...
1. Deadlines, e.g. release deadlines, testing deadlines;
2. Test cases completed with certain percentage passed;
3. Test budget has been depleted;
4. Coverage of code, functionality, or requirements reaches a specified point;
5. Bug rate falls below a certain level; or
6. Beta or alpha testing period ends.
Q26. What if there isn't enough time for thorough testing?
A: Since it's rarely possible to test every possible aspect of an application,
every possible combination of events, every dependency, or everything that
could go wrong, risk analysis is appropriate to most software development
projects. Use risk analysis to determine where testing should be focused.
This requires judgment skills, common sense and experience. The checklist
should include answers to the following questions:
1. Which functionality is most important to the project's intended purpose?
2. Which functionality is most visible to the user?
3. Which functionality has the largest safety impact?
4. Which functionality has the largest financial impact on users?
5. Which aspects of the application are most important to the customer?
6. Which aspects of the application can be tested early in the development
cycle?
7. Which parts of the code are most complex and thus most subject to
errors?
8. Which parts of the application were developed in rush or panic mode?
9. Which aspects of similar/related previous projects caused problems?
10. Which aspects of similar/related previous projects had large maintenance
expenses?
11. Which parts of the requirements and design are unclear or poorly thought
out?
12. What do the developers think are the highest-risk aspects of the
application?
13. What kinds of problems would cause the worst publicity?
14. What kinds of problems would cause the most customer service
complaints?
15. What kinds of tests could easily cover multiple functionalities?
16. Which tests will have the best high-risk-coverage to time-required ratio?
Q27. What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However,
if extensive testing is still not justified, risk analysis is again needed and the
considerations listed under "What if there isn't enough time for thorough
testing?" do apply. The test engineer then should do "ad hoc" testing, or
write up a limited test plan based on the risk analysis.
Q28. What can be done if requirements are changing continuously?
A: Work with management early on to understand how requirements might
change, so that alternate test plans and strategies can be worked out in
advance. It is helpful if the application's initial design allows for some
adaptability, so that later changes do not require redoing the application from
scratch. Additionally, try to...
1. Ensure the code is well commented and well documented; this makes
changes easier for the developers.
2. Use rapid prototyping whenever possible; this will help customers feel sure
of their requirements and minimize changes.
3. In the project's initial schedule, allow for some extra time commensurate
with probable changes.
4. Move new requirements to a 'Phase 2' version of an application and use
the original requirements for the 'Phase 1' version.
5. Negotiate to allow only easily implemented new requirements into the
project; move more difficult, new requirements into future versions of the
application.
6. Ensure customers and management understand scheduling impacts,
inherent risks and costs of significant requirements changes. Then let
management or the customers decide if the changes are warranted; after all,
that's their job.
7. Balance the effort put into setting up automated testing with the expected
effort required to redo the automated tests to deal with changes.
8. Design some flexibility into automated test scripts.
9. Focus initial automated testing on application aspects that are most likely
to remain unchanged.
10. Devote appropriate effort to risk analysis of changes, in order to
minimize regression-testing needs.
11. Design some flexibility into test cases; this is not easily done; the best
bet is to minimize the detail in the test cases, or set up only higher-level
generic-type test plans.
12. Focus less on detailed test plans and test cases and more on ad-hoc
testing, with an understanding of the added risk this entails.
Q29. What if the application has functionality that wasn't in the
requirements?
A: It may take serious effort to determine if an application has significant
unexpected or hidden functionality, and its presence would indicate deeper problems
in the software development process. If the functionality isn't necessary to
the purpose of the application, it should be removed, as it may have
unknown impacts or dependencies that were not taken into account by the
designer or the customer.
If not removed, design information will be needed to determine added testing
needs or regression testing needs. Management should be made aware of
any significant added risks as a result of the unexpected functionality. If the
functionality only affects minor areas, such as small improvements in the user
interface, it may not be a significant risk.
Q30. How can software QA processes be implemented without
stifling productivity?
A: Implement QA processes slowly over time. Use consensus to reach
agreement on processes and adjust and experiment as an organization grows
and matures. Productivity will be improved instead of stifled. Problem
prevention will lessen the need for problem detection. Panics and burnout will
decrease and there will be improved focus and less wasted effort. At the
same time, attempts should be made to keep processes simple and efficient,
minimize paperwork, promote computer-based processes and automated
tracking and reporting, minimize time required in meetings and promote
training as part of the QA process. However, no one, especially talented
technical types, likes bureaucracy, and in the short run things may slow down
a bit. A typical scenario would be that more days of planning and
development will be needed, but less time will be required for late-night bug
fixing and calming of irate customers.
Q31. What if the organization is growing so fast that fixed QA processes
are impossible?
A: This is a common problem in the software industry, especially in new
technology areas. There is no easy solution in this situation, other than...
1. Hire good people (i.e. hire Rob Davis)
2. Ruthlessly prioritize quality issues and maintain focus on the customer;
Everyone in the organization should be clear on what quality means to the
customer.
Q32. How is testing affected by object-oriented designs?
A: A well-engineered object-oriented design can make it easier to trace from
code to internal design to functional design to requirements. While there will
be little effect on black box testing (where an understanding of the internal
design of the application is unnecessary), white-box testing can be oriented
to the application's objects. If the application was well designed this can
simplify test design.
Q33. Why do you recommend that we test during the design
phase?
A: Because testing during the design phase can prevent defects later on. We
recommend verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all
relationships between modules, how to pass data, what happens in
exceptional circumstances, starting state of each module and how to
guarantee the state of each module).
3. Verify the design provides for enough memory and I/O devices, and a fast
enough runtime, for the final product.
Q34. What is software quality assurance?
A: Software Quality Assurance, when Rob Davis does it, is oriented to
*prevention*. It involves the entire software development process.
Prevention is monitoring and improving the process, making sure any
agreed-upon standards and procedures are followed and ensuring problems
are found and dealt with. Software Testing, when performed by Rob Davis, is
oriented to *detection*. Testing involves the operation of a system or
application under controlled conditions and evaluating the results.
Organizations vary considerably in how they assign responsibility for QA and
testing. Sometimes they're the combined responsibility of one group or
individual. Also common are project teams, which include a mix of test
engineers, testers and developers who work closely together, with overall QA
processes monitored by project managers. It depends on what best fits your
organization's size and business structure. Rob Davis can provide testing
and/or software QA. This document details some aspects of how he can provide
software testing/QA service. For more information, e-mail
[email protected]
11. A pretest meeting is held to assess the readiness of the application and
the environment and data to be tested. A test readiness document is created
to indicate the status of the entrance criteria of the release.
Inputs for this process:
12. Approved Test Strategy Document.
14. Test tools, or automated test tools, if applicable.
15. Previously developed scripts, if applicable.
16. Test documentation problems uncovered as a result of testing.
17. A good understanding of software complexity and module path coverage,
derived from general and detailed design documents, e.g. software design
document, source code and software complexity data.
Outputs for this process:
18. Approved documents of test scenarios, test cases, test conditions and
test data.
19. Reports of software design issues, given to software developers for
correction.
Q74. How do you execute tests?
A: Execution of tests is completed by following the test documents in a
methodical manner. As each test procedure is performed, an entry is
recorded in a test execution log to note the execution of the procedure and
whether or not the test procedure uncovered any defects. Checkpoint
meetings are held throughout the execution phase. Checkpoint meetings are
held daily, if required, to address and discuss testing issues, status and
activities.
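The execution log described above can be sketched as a simple append-only record. This is an illustrative structure, not any particular tool's format; field names are assumptions:

```python
import datetime

# One entry per executed test procedure.
test_execution_log: list[dict] = []

def log_execution(procedure_id: str, passed: bool, defects: list[str]) -> None:
    """Record that a test procedure was executed, and any defects it uncovered."""
    test_execution_log.append({
        "procedure": procedure_id,
        "executed_at": datetime.datetime.now().isoformat(timespec="seconds"),
        "result": "pass" if passed else "fail",
        "defects": defects,
    })

# Hypothetical procedure IDs and defect IDs for the example.
log_execution("TP-101", passed=True, defects=[])
log_execution("TP-102", passed=False, defects=["BUG-042"])

# Discrepancies/anomalies to raise at the next checkpoint meeting.
failures = [entry for entry in test_execution_log if entry["result"] == "fail"]
```

A log in this shape gives the checkpoint meeting exactly what it needs: which procedures ran, when, and which ones uncovered defects.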
1. The output from the execution of test procedures is known as test results.
Test results are evaluated by test engineers to determine whether the
expected results have been obtained. All discrepancies/anomalies are logged
and discussed with the software team lead, hardware test lead,
programmers, software engineers and documented for further investigation
and resolution. Every company has a different process for logging and
reporting bugs/defects uncovered during testing.
2. Pass/fail criteria are used to determine the severity of a problem, and
results are recorded in a test summary report. The severity of a problem
found during system testing is defined in accordance with the customer's risk
assessment and recorded in their selected tracking tool.
3. Proposed fixes are delivered to the testing environment, based on the
severity of the problem. Fixes are regression tested and flawless fixes are
migrated to a new baseline. Following completion of the test, members of the
test team prepare a summary report. The summary report is reviewed by the
Project Manager, Software QA Manager and/or Test Team Lead.
4. After a particular level of testing has been certified, it is the responsibility
of the Configuration Manager to coordinate the migration of the release
software components to the next test level, as documented in the
Configuration Management Plan. The software is only migrated to the
production environment after the Project Manager's formal acceptance.
5. The test team reviews test document problems identified during testing,
and updates documents where appropriate.
Inputs for this process:
6. Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
7. Test tools, including automated test tools, if applicable.
8. Developed scripts.
9. Changes to the design, i.e. Change Request Documents.
10. Test data.
11. Availability of the test team and project team.
12. General and Detailed Design Documents, i.e. Requirements Document,
Software Design Document.
13. Software that has been migrated to the test environment, i.e. unit-tested
code, via the Configuration/Build Manager.
14. Test Readiness Document.
15. Document Updates.
Outputs for this process:
16. Log and summary of the test results. Usually this is part of the Test
Report. This needs to be approved and signed-off with revised testing
deliverables.
17. Changes to the code, also known as test fixes. Test document problems
uncovered as a result of testing. Examples are Requirements document and
Design Document problems.
18. Reports on software design issues, given to software developers for
correction. Examples are bug reports on code issues.
19. Formal record of test incidents, usually part of problem tracking.
20. Base-lined package, also known as tested source and object code, ready for
migration to the next level.
Q75. What testing approaches can you tell me about?
A: Each of the following represents a different testing approach:
1. Black box testing,
2. White box testing,
3. Unit testing,
4. Incremental testing,
5. Integration testing,
6. Functional testing,
7. System testing,
8. End-to-end testing,
9. Sanity testing,
10. Regression testing,
11. Acceptance testing,
12. Load testing,
13. Performance testing,
14. Usability testing,
15. Install/uninstall testing,
16. Recovery testing,
17. Security testing,
18. Compatibility testing,
19. Exploratory testing, ad-hoc testing,
20. User acceptance testing,
21. Comparison testing,
22. Alpha testing,
23. Beta testing, and
24. Mutation testing.
Q76. What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and
hardware) under extraordinary operating conditions. For example, when a
web server is stress tested, testing aims to find out how many users can be
on-line, at the same time, without crashing the server. Stress testing tests
the stability of a given system or entity. It tests something beyond its normal
operational capacity, in order to observe any negative results. For example, a
web server may be stress tested using scripts, bots, and various
denial-of-service tools.
Q77. What is load testing?
A: Load testing simulates the expected usage of a software program, by
simulating multiple users that access the program's services concurrently.
Load testing is most useful and most relevant for multi-user systems,
client/server models, including web servers. For example, the load placed on
the system is increased above normal usage patterns, in order to test the
system's response at peak loads.
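The idea above can be sketched with nothing but the standard library: many concurrent simulated users each exercise one operation while the test measures response times. The service function and user count here are assumptions for the example, standing in for real requests to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def service_request(user_id: int) -> float:
    """Stand-in for one simulated user's request; returns observed latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the system under test doing work
    return time.perf_counter() - start

def load_test(concurrent_users: int) -> dict:
    """Drive the service with many concurrent simulated users at once."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(service_request, range(concurrent_users)))
    return {
        "users": concurrent_users,
        "max_latency": max(latencies),
        "avg_latency": sum(latencies) / len(latencies),
    }

result = load_test(concurrent_users=50)
```

Raising `concurrent_users` above the expected usage pattern turns this into the peak-load probe described above; raising it until errors appear crosses into stress testing.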
Q79. What is the difference between performance testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across
the professional software testing community. The term, load testing, is often
used synonymously with stress testing, performance testing, reliability
testing, and volume testing. Load testing generally stops short of stress
testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and
load testing. Performance testing, by contrast, focuses on measuring response
times and throughput against stated requirements, rather than on the load itself.
Q80. What is the difference between reliability testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across
the professional software testing community. The term, load testing, is often
used synonymously with stress testing, performance testing, reliability
testing, and volume testing. Load testing generally stops short of stress
testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and
load testing. Reliability testing, by contrast, checks whether the system
operates without failure over an extended period of time.
Q81. What is the difference between volume testing and load
testing?
A: Load testing is a blanket term that is used in many different ways across
the professional software testing community. The term, load testing, is often
used synonymously with stress testing, performance testing, reliability
testing, and volume testing. Load testing generally stops short of stress
testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and
load testing. Volume testing, by contrast, subjects the system to large
volumes of data, rather than large numbers of concurrent users.
Q82. What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of
incremental testing is to provide an early feedback to software developers.
Q83. What is software testing?
A: Software testing is a process that identifies the correctness, completeness,
and quality of software. Actually, testing cannot establish the correctness of
software. It can find defects, but cannot prove there are no defects.
Q84. What is automated testing?
A: Automated testing is a formally specified and controlled testing approach
in which tests are executed by software tools rather than manually.
Q85. What is alpha testing?
A: Alpha testing is final testing before the software is released to the general
public. First, (and this is called the first phase of alpha testing), the software
is tested by in-house developers. They use either debugger software, or
hardware-assisted debuggers. The goal is to catch bugs quickly. Then, (and
this is called second stage of alpha testing), the software is handed over to
us, the software QA staff, for additional testing in an environment that is
similar to the intended use.
Q86. What is beta testing?
A: Following alpha testing, "beta versions" of the software are released to a
group of people, and limited public tests are performed, so that further
testing can ensure the product has few bugs. Other times, beta versions are
made available to the general public, in order to receive as much feedback as
possible. The goal is to benefit the maximum number of future users.
Q87. What is the difference between alpha and beta testing?
A: Alpha testing is performed by in-house developers and software QA
personnel. Beta testing is performed by a few select prospective customers or
by the general public.
Q88. What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach
that examines the application's program structure, and derives test cases
from the application's program logic.
Q89. What is boundary value analysis?
A: Boundary value analysis is a technique for test data selection. A test
engineer chooses values that lie along data extremes. Boundary values
include maximum, minimum, just inside boundaries, just outside boundaries,
typical values, and error values. The expectation is that, if a system works
correctly for these extreme or special values, then it will work correctly for all
values in between. An effective way to test code is to exercise it at its
natural boundaries.
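The selection rule above can be written down directly. A small helper, under the assumption of a simple integer range, produces the boundary candidates named in the answer:

```python
def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Candidate test inputs for a valid range [minimum, maximum]:
    values just outside, on, and just inside each boundary, plus a
    typical value from the middle of the range."""
    typical = (minimum + maximum) // 2
    return [
        minimum - 1,  # just outside the lower boundary (error value)
        minimum,      # on the lower boundary
        minimum + 1,  # just inside the lower boundary
        typical,      # typical value
        maximum - 1,  # just inside the upper boundary
        maximum,      # on the upper boundary
        maximum + 1,  # just outside the upper boundary (error value)
    ]

# e.g. a hypothetical input field that accepts ages 18..65
cases = boundary_values(18, 65)
```

Running the system against `cases` exercises it exactly at its natural boundaries, which is where off-by-one and range-check defects cluster.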
Q90. What is ad hoc testing?
A: Ad hoc testing is a testing approach; it is the least formal testing
approach.
Q91. What is gamma testing?
A: Gamma testing is testing of software that has all the required features,
but has not gone through all the in-house quality checks. Cynics tend to refer
to such software releases as "gamma testing".
Q92. What is glass box testing?
A: Glass box testing is the same as white box testing. It is a testing approach
that examines the application's program structure, and derives test cases
from the application's program logic.
Q93. What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach
that examines the application's program structure, and derives test cases
from the application's program logic.
Q94. What is black box testing?
A: Black box testing is a type of testing that considers only externally visible
behavior. Black box testing considers neither the code itself, nor the "inner
workings" of the software.
Q95. What is functional testing?
A: Functional testing is the same as black box testing: a type of testing that
considers only externally visible behavior, considering neither the code itself
nor the "inner workings" of the software.
Q96. What is closed box testing?
A: Closed box testing is the same as black box testing: a type of testing that
considers only externally visible behavior, considering neither the code itself
nor the "inner workings" of the software.
Q97. What is bottom-up testing?
A: Bottom-up testing is a technique for integration testing. With bottom-up
testing, low-level components are tested first; a test engineer creates and
uses test drivers to stand in for the higher-level components that have not
yet been developed.
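The driver idea can be sketched as follows: a low-level component is exercised first by a throwaway driver standing in for the not-yet-written higher layer. The component and its cases here are hypothetical, chosen only to show the shape of a driver:

```python
# Low-level component, developed and tested first.
def parse_amount(text: str) -> int:
    """Parse a currency amount like '12.50' into cents."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# Test driver: a stand-in for the higher-level billing module that
# has not yet been developed, calling the low-level component directly.
def driver_for_parse_amount() -> list[tuple[str, int, int]]:
    cases = [("12.50", 1250), ("0.99", 99), ("7", 700)]
    results = []
    for text, expected in cases:
        results.append((text, expected, parse_amount(text)))
    return results

results = driver_for_parse_amount()
failures = [(t, e, a) for t, e, a in results if e != a]
```

Once the real higher-level module exists, the driver is discarded and the same low-level component is exercised through genuine integration tests.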
Q98. What is software quality?
A: The quality of software varies widely from system to system.
Some common quality attributes are stability, usability, reliability, portability,
and maintainability. See quality standard ISO 9126 for more information on
this subject.
Q99. What do test case templates look like?
A: Software test cases are in a document that describes inputs, actions, or
events, and their expected results, in order to determine if all features of an
application are working correctly. Test case templates contain all particulars
of every test case. Often these templates are in the form of a table. One
example of this table is a 6-column table, where column 1 is the "Test Case
ID Number", column 2 is the "Test Case Name", column 3 is the "Test
Objective", column 4 is the "Test Conditions/Setup", column 5 is the "Input
Data Requirements/Steps", and column 6 is the "Expected Results". All
documents should be written to a certain standard and template. Standards
and templates maintain document uniformity. They also help in learning
where information is located, making it easier for users to find what they
want. Lastly, with standards and templates, information will not be
accidentally omitted from a document.
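The 6-column table described above can be generated as a test case sheet with the standard library; the row contents are a hypothetical example, not a prescribed standard:

```python
import csv
import io

# The six columns of the template described above.
COLUMNS = [
    "Test Case ID Number",
    "Test Case Name",
    "Test Objective",
    "Test Conditions/Setup",
    "Input Data Requirements/Steps",
    "Expected Results",
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(COLUMNS)
# One example row; a real sheet holds one row per test case.
writer.writerow([
    "TC-001",
    "Valid login",
    "Verify login with a valid password",
    "User account exists",
    "1. Open login page 2. Enter password 3. Click Login",
    "Main screen displayed",
])
sheet = buffer.getvalue()
```

Keeping every test case in one uniform sheet like this is what makes it hard for information to be accidentally omitted and easy for readers to find what they want.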
Q100. What is a software fault?
A: Software faults are hidden programming errors. Software faults are errors
in the correctness of the semantics of computer programs.
Q101. What is software failure?
A: Software failure occurs when the software does not do what the user
expects to see.
Q102. What is the difference between a software fault and a
software failure?
A: Software failure occurs when the software does not do what the user
expects to see. A software fault, on the other hand, is a hidden programming
error. A software fault becomes a software failure only when the exact
computation conditions are met, and the faulty portion of the code is
executed on the CPU. This can occur during normal usage, when the software is
ported to a different hardware platform, when the software is compiled with a
different compiler, or when the software gets extended.
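The distinction can be made concrete with a small illustrative example: the fault below is always present in the code, but it only becomes a visible failure when the exact triggering condition is met and the faulty portion executes:

```python
def average(values: list[float]) -> float:
    # FAULT: a hidden programming error -- no guard for an empty list.
    return sum(values) / len(values)

# Normal usage: the faulty condition is never met, so no failure occurs.
ok = average([2.0, 4.0])

# The exact condition is met: the fault now surfaces as a failure.
try:
    average([])
    failed = False
except ZeroDivisionError:
    failed = True
```

The program carried the same fault in both calls; only the second call produced a failure, which is why faults can lie dormant through years of normal usage.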
Q103. What is a test engineer?
A: Test engineers are engineers who specialize in testing. We, test engineers,
create test cases, procedures and scripts, and generate test data. We execute
test procedures and scripts, analyze standards of measurement, and evaluate
the results of system, integration and regression testing.
Q104. What is the role of test engineers?
A: Test engineers speed up the work of the development staff, and reduce
the risk of your company's legal liability. We, test engineers, also give the
company the evidence that the software is correct and operates properly. We
also improve problem tracking and reporting, maximize the value of the
software, and the value of the devices that use it. We also assure the
successful launch of the product by discovering bugs and design flaws,
before...
users get discouraged, before shareholders lose their cool and before
employees get bogged down. We, test engineers, help the work of the software
development staff, so the development team can devote its time to building
the product. We, test engineers, also promote continual improvement. We
provide documentation required by the FDA, FAA, other regulatory agencies, and
your customers. We, test engineers, save your company money by
discovering defects EARLY in the design process, before failures occur in
production, or in the field. We save the reputation of your company by
discovering bugs and design flaws before bugs and design flaws damage the
reputation of your company.
Q105. What is a QA engineer?
A: QA engineers are test engineers, but QA engineers do more than just
testing. Good QA engineers understand the entire software development
process and how it fits into the business approach and the goals of the
organization. Communication skills and the ability to understand various
sides of issues are important. We, QA engineers, are successful if people
listen to us, if people use our tests, if people think that we're useful, and if
we're happy doing our work. I would love to see QA departments staffed with
experienced software developers who coach development teams to write
better code. But I've never seen it. Instead of coaching, we, QA engineers,
tend to be process people.