SOFTWARE TESTING - QA Life Cycle
QA Life Cycle
It is an integrated system of methodologies and activities, including planning, implementation, assessment,
reporting, and quality improvement, that ensures the process is of the type and quality needed and
expected by the client/customer. The life cycle consists of:
1. Test requirements,
2. Test planning,
3. Test design,
4. Test execution and defect logging,
5. Test reports and acceptance,
6. Sign off.
Test Requirements
1. Requirement Specification documents
2. Functional Specification documents
3. Design Specification documents (use cases, etc)
4. Use case Documents
5. Test Traceability Matrix for identifying Test Coverage
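The traceability matrix above can be kept as simple structured data. The following is a hypothetical sketch (the requirement and test case IDs are invented) showing how such a matrix can be queried to find requirements with no test coverage:

```python
# Hypothetical requirements-to-test-case traceability matrix, used to
# spot requirements that have no linked test cases (coverage gaps).

traceability = {
    "REQ-001": ["TC-101", "TC-102"],  # covered by two test cases
    "REQ-002": ["TC-103"],
    "REQ-003": [],                    # no test cases yet: a coverage gap
}

def coverage_gaps(matrix):
    """Return the requirement IDs that have no linked test cases."""
    return [req for req, cases in matrix.items() if not cases]

print(coverage_gaps(traceability))  # ['REQ-003']
```

In practice the matrix is usually maintained in a test management tool or spreadsheet, but the underlying check is the same: every requirement should map to at least one test case.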
Test Planning
1. Test Scope, Test Environment
2. Different Test phase and Test Methodologies
3. Manual and Automation Testing
4. Defect Management, Configuration Management, Risk Management, etc.
5. Evaluation and identification of test and defect tracking tools
Test Design
1. Test Traceability Matrix and Test coverage
2. Test Scenarios Identification & Test Case preparation
3. Test data and Test scripts preparation
4. Test case reviews and Approval
5. Baselining under Configuration Management
Test Execution and Defect Tracking
1. Executing Test cases
2. Executing Test Scripts
3. Capture, review and analyze Test Results
4. Raising defects and tracking them to closure
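The execute-and-log cycle above can be sketched in a few lines. This is a minimal illustration, not a real tool: the test cases, expected results, and the Defect record are all hypothetical.

```python
# Minimal sketch of test execution with defect logging: each failing
# case raises a Defect record that is tracked until closure.

from dataclasses import dataclass

@dataclass
class Defect:
    test_id: str
    summary: str
    status: str = "Open"   # Open -> Fixed -> Closed after re-test

def run_case(actual, expected):
    """Compare the captured result against the expected result."""
    return actual == expected

defect_log = []
test_cases = [
    ("TC-101", 2 + 2, 4),         # passes
    ("TC-102", "admn", "admin"),  # fails: a defect is raised
]

for test_id, actual, expected in test_cases:
    if not run_case(actual, expected):
        defect_log.append(
            Defect(test_id, f"{test_id}: got {actual!r}, expected {expected!r}")
        )

print([d.test_id for d in defect_log])  # ['TC-102']
```

A real defect tracker adds severity, assignment, and history, but the core loop (execute, compare, raise, track) is the same.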
Signoff
The signoff template provides a checklist format that the customer can use to review a new
system's functionality and other attributes before closing a purchase order or accepting a delivery. It
includes checklist areas for functional tests, documentation reviews, issue recording, and
enhancement requests.
This form can be used standalone for this purpose, or as a short-form checklist and
signoff form accompanying a written User Acceptance Test Plan or Beta Test Plan (see our templates
for those two documents).
NOTE: This form can also be adapted for review and acceptance of any deliverable between a
customer and a provider. Whether the deliverable is a recommendations report from consultants, a
user manual from a technical publications firm, a physical hardware system, a software application, a
plan for a marketing campaign, etc., this form can be used to list what's expected by the customer,
record results of the acceptance review or tests, record open issues to be corrected, and ultimately
document acceptance by the customer.
How to use?
When preparing to accept a deliverable—system, product, report, etc.—from a supplier, fill out your
version of this form to include the items you want to test and/or review. If possible, consider ahead of
time whether any discrepancies will be acceptable for each item. Schedule the review/tests with the
supplier and discuss expectations. When you perform the reviews or tests, mark the performance of
each item and indicate whether each result is acceptable—will the deliverable be accepted with this
issue? Finally, review overall results with supplier, timeline for issue resolution, and whether re-test
will be required.
Regression testing is a type of software testing which verifies that software which was
previously developed and tested still performs the same way after it was changed or interfaced
with other software. Changes may include software
enhancements, patches, configuration changes, etc. During regression testing, new software
bugs or regressions may be uncovered. Sometimes a software change impact analysis is
performed to determine what areas could be affected by the proposed changes. These areas
may include functional and non-functional areas of the system.
The purpose of regression testing is to ensure that changes such as those mentioned above
have not introduced new faults.[1] One of the main reasons for regression testing is to determine
whether a change in one part of the software affects other parts of the software.[2]
Common methods of regression testing include re-running previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have re-
emerged. Regression testing can be performed to test a system efficiently by systematically
selecting the appropriate minimum set of tests needed to adequately cover a particular change.
In contrast, non-regression testing aims to verify whether, after introducing or updating a given
software application, the change has had the intended effect.
Background
As software is updated or changed, emergence of new faults and/or re-emergence of old faults is
quite common. Sometimes re-emergence occurs because a fix gets lost through poor revision
control practices (or simple human error in revision control). Often, a fix for a problem will be
"fragile" in that it fixes the problem in the narrow case where it was first observed but not in more
general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in
one area inadvertently causes a software bug in another area. Finally, it may happen that, when
some feature is redesigned, some of the same mistakes that were made in the original
implementation of the feature are made in the redesign.
Therefore, in most software development situations, it is considered good coding practice, when
a bug is located and fixed, to record a test that exposes the bug and re-run that test regularly
after subsequent changes to the program.[3] Although this may be done through manual
testing procedures using programming techniques, it is often done using automated testing tools.[4]
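The practice described above — recording a test that exposes a bug and re-running it after every change — can be sketched as follows. The function, the bug, and the fix are all invented for illustration:

```python
# Hypothetical example: suppose a bug report showed that parse_price("")
# crashed with ValueError. The fix handles the case, and the recorded
# regression test below keeps the fix from silently regressing.

def parse_price(text):
    """Parse a price string like '19.99' into cents; fixed to treat an
    empty string as zero instead of raising ValueError."""
    if not text.strip():
        return 0          # the fix for the reported bug
    return round(float(text) * 100)

def test_parse_price_empty_string_regression():
    # Reproduces the original bug report; must keep passing forever.
    assert parse_price("") == 0

def test_parse_price_normal():
    assert parse_price("19.99") == 1999

test_parse_price_empty_string_regression()
test_parse_price_normal()
```

In a real project these tests would live in the project's test suite and be collected by a runner such as pytest or unittest rather than called directly.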
Such a test suite contains software tools that allow the testing environment to execute all the
regression test cases automatically; some projects even set up automated systems to re-run all
regression tests at specified intervals and report any failures (which could imply a regression or
an out-of-date test).[5] Common strategies are to run such a system after every successful
compile (for small projects), every night, or once a week. Those strategies can be automated by
an external tool.
Regression testing is an integral part of the extreme programming software development
method. In this method, design documents are replaced by extensive, repeatable, and
automated testing of the entire software package throughout each stage of the software
development process. Regression testing is done after functional testing has concluded, to verify
that the other functionalities are working.
In the corporate world, regression testing has traditionally been performed by a software quality
assurance team after the development team has completed work. However, defects found at this
stage are the most costly to fix. This problem is being addressed by the rise of unit testing.
Although developers have always written test cases as part of the development cycle, these test
cases have generally been either functional tests or unit tests that verify only intended outcomes.
Developer testing compels a developer to focus on unit testing and to include both positive and
negative test cases.[6]
Techniques
The various regression testing techniques are:
Retest all
This technique re-runs all the test cases on the current program to check its integrity. Although it is
expensive, since every case must be re-run, it ensures that no errors remain because of the
modified code.[7]
Regression test selection
Unlike retest all, this technique runs only a part of the test suite, provided the cost of selecting
that part is less than the cost of the retest-all technique.[7]
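One common selection criterion is change impact: run only the tests that exercise the modules a change touched. The sketch below assumes a hypothetical map from test names to the modules they cover:

```python
# Hypothetical sketch of regression test selection: given the modules a
# change touched, select only the tests known to exercise those modules.

tests_to_modules = {
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile":  {"auth", "profile"},
}

def select_tests(changed_modules, mapping):
    """Return the tests whose covered modules overlap the change."""
    return sorted(t for t, mods in mapping.items() if mods & changed_modules)

print(select_tests({"auth"}, tests_to_modules))  # ['test_login', 'test_profile']
```

Real selection tools derive the test-to-module map from coverage data or static analysis rather than maintaining it by hand.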
Test case prioritization
Prioritize the test cases so as to increase a test suite's rate of fault detection. Test case
prioritization techniques schedule test cases so that the test cases that are higher in priority are
executed before the test cases that have a lower priority.[7]
Types of test case prioritization
General prioritization - Prioritize test cases that will be beneficial on subsequent versions.
Version-specific prioritization - Prioritize test cases with respect to a particular version of
the software.
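A simple prioritization heuristic is to order tests by how many faults each has found historically, so likely fault-finders run first. This is a sketch with invented test names and counts:

```python
# Hypothetical sketch of test case prioritization: order tests from most
# to fewest historical fault detections to raise the suite's early
# rate of fault detection.

fault_history = {
    "test_report": 0,
    "test_import": 5,
    "test_export": 2,
}

def prioritize(history):
    """Return test names ordered from most to fewest faults found."""
    return sorted(history, key=history.get, reverse=True)

print(prioritize(fault_history))  # ['test_import', 'test_export', 'test_report']
```

Published techniques use richer signals than a single count (coverage, recency, change proximity), but the scheduling idea is the same.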
Hybrid
This technique is a hybrid of Regression Test Selection and Test Case Prioritization.[7]
Uses
Regression testing can be used not only for testing the correctness of a program, but often also
for tracking the quality of its output.[10] For instance, in the design of a compiler, regression testing
could track the code size, and the time it takes to compile and execute the test suite cases.
Also as a consequence of the introduction of new bugs, program maintenance requires far more
system testing per statement written than any other programming. Theoretically, after each fix
one must run the entire batch of test cases previously run against the system, to ensure that it
has not been damaged in an obscure way. In practice, such regression testing must indeed
approximate this theoretical idea, and it is very costly.
Regression tests can be broadly categorized as functional tests or unit tests. Functional tests
exercise the complete program with various inputs. Unit tests exercise individual
functions, subroutines, or object methods. Both functional testing tools and unit testing tools tend
to be automated, third-party products that are not part of the compiler suite. A functional test
may be a scripted series of program inputs, possibly even involving an automated mechanism for
controlling mouse movements and clicks. A unit test may be a set of separate functions within
the code itself, or a driver layer that links to the code without altering the code being tested.
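The unit-test side of this distinction can be illustrated with a small driver that links to the code under test without altering it. The function under test (slugify) is made up for the example:

```python
# Minimal sketch of a unit-test driver layer: the test class exercises a
# single function (here, a hypothetical slugify helper) in isolation.

import unittest

def slugify(title):
    """Code under test: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyUnitTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Regression Testing"), "regression-testing")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B  "), "a-b")

unittest.main(argv=["ignored"], exit=False, verbosity=0)
```

A functional test of the same application would instead drive the whole program through its inputs and observe its outputs, without reaching into individual functions.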
See also
Characterization test
Quality control
Smoke testing
Test-driven development
References
1. Myers, Glenford (2004). The Art of Software Testing. Wiley. ISBN 978-0-471-46912-4.
2. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 386. ISBN 978-0-615-23372-7.
3. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 73. ISBN 0-470-04212-5.
4. Dustin, Elfriede. "Automate Regression Tests When Feasible". Automated Testing: Selected Best Practices. Safari Books Online.
5. daVeiga, Nada (2008-02-06). "Change Code Without Fear: Utilize a Regression Safety Net". Dr. Dobb's Journal.
6. Dudney, Bill (2004-12-08). "Developer Testing Is 'In': An interview with Alberto Savoia and Kent Beck". Retrieved 2007-11-29.
7. "Understanding Regression Testing Techniques". CiteSeerX. 2008.
8. Yoo, S.; Harman, M. (2007). "Regression Testing Minimisation, Selection and Prioritisation: A Survey". Wiley InterScience.
9. "Efficient Regression Tests for Database Applications". Springer. 2006.
10. Kolawa, Adam. "Regression Testing, Programmer to Programmer". Wrox.
External links
Microsoft regression testing recommendations
Gauger performance regression visualization tool
What is Regression Testing by Scott Barber and Tom Huston