SE Unit-V
Definition of testing
Software testing is the process of checking the quality, functionality, and performance of a software
product before launching. To do software testing, testers either interact with the software manually or
execute test scripts to find bugs and errors, ensuring that the software works as expected.
Testing is intended to show that a program does what it is intended to do and to discover
program defects before it is put into use. When you test software, you execute a program using
artificial data. You check the results of the test run for errors, anomalies or information about the
program's non-functional attributes. Testing can reveal the presence of errors, but NOT their
absence. Testing is part of a more general verification and validation process, which also includes static
validation techniques.
Goals of software testing:
To demonstrate to the developer and the customer that the software meets its
requirements.
o Leads to validation testing: you expect the system to perform correctly using a
given set of test cases that reflect the system's expected use.
o A successful test shows that the system operates as intended.
To discover situations in which the behavior of the software is incorrect, undesirable or
does not conform to its specification.
o Leads to defect testing: the test cases are designed to expose defects; the test
cases can be deliberately obscure and need not reflect how the system is normally
used.
o A successful test is a test that makes the system perform incorrectly and so exposes
a defect in the system.
Testing is part of a broader process of software verification and validation (V & V).
The goal of V & V is to establish confidence that the system is good enough for its intended use,
which depends on:
Software purpose: the level of confidence depends on how critical the software is to an
organization.
User expectations: users may have low expectations of certain kinds of software.
Marketing environment: getting a product to market early may be more important than
finding defects in the program.
Software inspections involve people examining the source representation with the aim of discovering
anomalies and defects. Inspections do not require execution of a system, so they may be used before
implementation. They may be applied to any representation of the system (requirements,
design, configuration data, test data, etc.). They have been shown to be an effective technique for
discovering program errors.
Advantages of inspections include:
During testing, errors can mask (hide) other errors. Because inspection is a static process,
you don't have to be concerned with interactions between errors.
Incomplete versions of a system can be inspected without additional costs. If a program
is incomplete, then you need to develop specialized test harnesses to test the parts that are
available.
As well as searching for program defects, an inspection can also consider broader quality
attributes of a program, such as compliance with standards, portability and maintainability.
Inspections and testing are complementary and not opposing verification techniques. Both should
be used during the V & V process. Inspections can check conformance with a specification but not
conformance with the customer's real requirements. Inspections cannot check non-functional
characteristics such as performance, usability, etc.
Typically, a commercial software system has to go through three stages of testing:
Development testing: the system is tested during development to discover bugs and
defects.
Release testing: a separate testing team tests a complete version of the system before it is
released to users.
User testing: users or potential users of a system test the system in their own environment.
Development testing
Development testing includes all testing activities that are carried out by the team developing the system:
Unit testing: individual program units or object classes are tested; should focus on testing
the functionality of objects or methods.
Component testing: several individual units are integrated to create composite
components; should focus on testing component interfaces.
System testing: some or all of the components in a system are integrated and the system
is tested as a whole; should focus on testing component interactions.
Unit testing
Unit testing is the process of testing individual components in isolation. It is a defect testing
process. Units may be:
Individual functions or methods within an object.
Object classes with several attributes and methods.
Composite components with defined interfaces that are used to access their functionality.
When testing object classes, tests should be designed to provide coverage of all of the features of the
object:
Testing all operations associated with the object.
Setting and checking the value of all attributes associated with the object.
Exercising the object in all possible states, simulating all events that cause a state change.
Whenever possible, unit testing should be automated so that tests are run and checked without
manual intervention. In automated unit testing, you make use of a test automation framework (such as
JUnit) to write and run your program tests. Unit testing frameworks provide generic test classes that you
extend to create specific test cases. They can then run all of the tests that you have implemented and
report, often through some GUI, on the success or otherwise of the tests. An automated test has three
parts:
A setup part, where you initialize the system with the test case, namely the inputs and
expected outputs.
A call part, where you call the object or method to be tested.
An assertion part, where you compare the result of the call with the expected result. If the
assertion evaluates to true, the test has been successful; if false, then it has failed (see the sketch below).
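As an illustration, here is a minimal sketch of this three-part structure using JUnit 4. The Account class, its constructor and its methods are hypothetical, invented only for this example.

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, kept deliberately small.
class Account {
    private double balance;
    Account(double openingBalance) { balance = openingBalance; }
    void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balance += amount;
    }
    double getBalance() { return balance; }
}

public class AccountTest {
    private Account account;

    @Before
    public void setUp() {
        // Setup part: initialize the object under test with the test case inputs
        account = new Account(100.0);
    }

    @Test
    public void depositIncreasesBalance() {
        // Call part: call the method to be tested
        account.deposit(50.0);
        // Assertion part: compare the result of the call with the expected result
        assertEquals(150.0, account.getBalance(), 0.001);
    }
}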
The test cases should show that, when used as expected, the component that you are testing does what
it is supposed to do. If there are defects in the component, these should be revealed by test cases. This
leads to two types of unit test cases:
The first of these should reflect normal operation of a program and should show that the
component works as expected.
The other kind of test case should be based on testing experience of where common
problems arise. It should use abnormal inputs to check that these are properly processed
and do not crash the component.
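For instance, assuming the hypothetical Account class sketched above, the two kinds of unit test case might look like this in JUnit 4:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AccountDefectTest {

    @Test
    public void normalDepositUpdatesBalance() {
        // Normal operation: the component does what it is supposed to do
        Account account = new Account(100.0);
        account.deposit(25.0);
        assertEquals(125.0, account.getBalance(), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void negativeDepositIsRejected() {
        // Abnormal input: should be reported cleanly rather than crashing the component
        new Account(100.0).deposit(-25.0);
    }
}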
Component testing
Software components are often composite components that are made up of several interacting
objects. You access the functionality of these objects through the defined component interface.
Testing composite components should therefore focus on showing that the component interface behaves
according to its specification. Objectives are to detect faults due to interface errors or invalid assumptions
about interfaces. Interface types include parameter interfaces, shared-memory interfaces, procedural
interfaces and message-passing interfaces.
Interface errors:
Interface misuse: a calling component calls another component and makes an error in its
use of its interface, e.g. passing parameters in the wrong order.
Interface misunderstanding: a calling component embeds assumptions about the
behavior of the called component which are incorrect.
Timing errors: the called and the calling component operate at different speeds and out-of-
date information is accessed.
General guidelines for interface testing (see the sketch after this list):
Design tests so that parameters to a called procedure are at the extreme ends of their
ranges.
Always test pointer parameters with null pointers.
Design tests which cause the component to fail.
Use stress testing in message passing systems.
In shared memory systems, vary the order in which components are activated.
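A possible sketch of how some of these guidelines translate into tests, assuming a hypothetical PriceCalculator component whose interface expects a quantity in the range 1 to 1000 (JUnit 4):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical composite component with a simple parameter interface.
class PriceCalculator {
    double total(double unitPrice, Integer quantity) {
        if (quantity == null) {
            throw new IllegalArgumentException("quantity must not be null");
        }
        if (quantity < 1 || quantity > 1000) {
            throw new IllegalArgumentException("quantity out of range");
        }
        return unitPrice * quantity;
    }
}

public class PriceCalculatorInterfaceTest {
    private final PriceCalculator calc = new PriceCalculator();

    @Test
    public void acceptsParametersAtTheExtremeEndsOfTheirRange() {
        // Guideline: test parameters at the extreme ends of their ranges
        assertEquals(2.5, calc.total(2.5, 1), 0.001);
        assertEquals(2500.0, calc.total(2.5, 1000), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullReferenceParameters() {
        // Guideline: always test pointer (reference) parameters with null
        calc.total(2.5, null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void failsCleanlyWhenTheRangeIsExceeded() {
        // Guideline: design tests that cause the component to fail
        calc.total(2.5, 1001);
    }
}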
System testing
System testing during development involves integrating components to create a version of the system
and then testing the integrated system. The focus in system testing is testing the interactions
between components. System testing checks that components are compatible, interact correctly and
transfer the right data at the right time across their interfaces. System testing tests the emergent
behavior of a system.
During system testing, reusable components that have been separately developed and off-the-shelf
systems may be integrated with newly developed components. The complete system is then tested.
Components developed by different team members or sub-teams may be integrated at this stage. System
testing is a collective rather than an individual process.
The use cases developed to identify system interactions can be used as a basis for system testing.
Each use case usually involves several system components so testing the use case forces these
interactions to occur. The sequence diagrams associated with the use case document the
components and their interactions that are being tested.
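As a rough sketch, a use-case-driven system test might integrate two hypothetical components, an OrderService and an InventoryStore, and check the interaction between them; exercising the use case 'place an order' forces data to cross the interface between the components (JUnit 4):

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;

// Hypothetical components, kept minimal for the example.
class InventoryStore {
    private final Map<String, Integer> stock = new HashMap<>();
    void addStock(String item, int quantity) { stock.merge(item, quantity, Integer::sum); }
    int available(String item) { return stock.getOrDefault(item, 0); }
    void remove(String item, int quantity) {
        if (available(item) < quantity) {
            throw new IllegalStateException("insufficient stock");
        }
        stock.put(item, available(item) - quantity);
    }
}

class OrderService {
    private final InventoryStore inventory;
    OrderService(InventoryStore inventory) { this.inventory = inventory; }
    void placeOrder(String item, int quantity) { inventory.remove(item, quantity); }
}

public class PlaceOrderUseCaseTest {
    @Test
    public void placingAnOrderReducesTheAvailableStock() {
        // Integrate the components, then drive the use case through them
        InventoryStore inventory = new InventoryStore();
        inventory.addStock("SKU-42", 10);
        OrderService orders = new OrderService(inventory);

        orders.placeOrder("SKU-42", 3);

        // Check the emergent behavior: the right data crossed the component interface
        assertEquals(7, inventory.available("SKU-42"));
    }
}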
Test-driven development
Test-driven development (TDD) is an approach to program development in which testing and code
development are interleaved: tests are written before the code, and 'passing' the tests is the critical
driver of development. The process is as follows (a small example follows the steps):
1. Start by identifying the increment of functionality that is required. This should normally
be small and implementable in a few lines of code.
2. Write a test for this functionality and implement this as an automated test.
3. Run the test, along with all other tests that have been implemented. Initially, you have not
implemented the functionality so the new test will fail.
4. Implement the functionality and re-run the test.
5. Once all tests run successfully, you move on to implementing the next chunk of functionality.
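A small sketch of one such increment, assuming a hypothetical isPalindrome feature: the tests are written and run first and fail (steps 2 and 3), and just enough functionality is then implemented to make them pass (step 4), using JUnit 4.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Step 2: the automated tests for the new increment, written before the code.
public class PalindromeTest {
    @Test
    public void recognisesASimplePalindrome() {
        assertTrue(StringUtils.isPalindrome("level"));
    }

    @Test
    public void rejectsANonPalindrome() {
        assertFalse(StringUtils.isPalindrome("levels"));
    }
}

// Step 4: just enough implementation to make the tests pass; all tests are then re-run.
class StringUtils {
    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }
}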
Benefits of test-driven development include:
Code coverage: every code segment that you write has at least one associated test, so all of the
code in the system is executed at least once during testing.
Regression testing: a regression test suite is developed incrementally as a program is
developed.
Simplified debugging: when a test fails, it should be obvious where the problem lies; the
newly written code needs to be checked and modified.
System documentation: the tests themselves are a form of documentation that describe
what the code should be doing.
Regression testing is testing the system to check that changes have not 'broken' previously
working code. In a manual testing process, regression testing is expensive but, with automated testing,
it is simple and straightforward. All tests are rerun every time a change is made to the program. Tests
must run 'successfully' before the change is committed.
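With a framework such as JUnit 4, a regression suite can simply aggregate the existing test classes. A sketch, reusing the hypothetical test classes from the earlier examples:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Rerunning this suite after every change checks that previously working code
// has not been broken. The listed classes are the hypothetical tests sketched earlier.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    AccountTest.class,
    AccountDefectTest.class,
    PriceCalculatorInterfaceTest.class,
    PlaceOrderUseCaseTest.class
})
public class RegressionSuite {
    // Intentionally empty: the annotations tell JUnit which tests to run.
}

In practice, build tools such as Maven or Gradle rerun every test on each build, so an explicit suite class is often unnecessary.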
Release testing
Release testing is the process of testing a particular release of a system that is intended for use
outside of the development team. The primary goal of the release testing process is to convince
the customer of the system that it is good enough for use. Release testing, therefore, has to show
that the system delivers its specified functionality, performance and dependability, and that it does not
fail during normal use. Release testing is usually a black-box testing process where tests are only
derived from the system specification.
Release testing is a form of system testing. There are two important differences:
A separate team that has not been involved in the system development, should be
responsible for release testing.
System testing by the development team should focus on discovering bugs in the system
(defect testing). The objective of release testing is to check that the system meets its
requirements and is good enough for external use (validation testing).
Requirements-based testing involves examining each requirement and developing a test or tests for
it. It is validation rather than defect testing: you are trying to demonstrate that the system has properly
implemented its requirements.
Scenario testing is an approach to release testing where you devise typical scenarios of use and use
these to develop test cases for the system. Scenarios should be realistic and real system users should be
able to relate to them. If you have used scenarios as part of the requirements engineering process, then
you may be able to reuse these as testing scenarios.
Part of release testing may involve testing the emergent properties of a system, such as performance
and reliability. Tests should reflect the profile of use of the system. Performance tests usually involve
planning a series of tests where the load is steadily increased until the system performance becomes
unacceptable. Stress testing is a form of performance testing where the system is deliberately overloaded
to test its failure behavior.
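A rough sketch of such a stepped-load performance test in plain Java; the handleRequest operation and the acceptability threshold are assumptions made only for this example:

public class SteppedLoadTest {

    // Hypothetical operation under test; stands in for a call into the real system.
    static void handleRequest() {
        Math.sqrt(System.nanoTime());
    }

    public static void main(String[] args) {
        final double maxAcceptableMillis = 200.0;   // assumed acceptability threshold
        // Increase the load step by step until performance becomes unacceptable.
        for (int load = 1_000; load <= 10_000_000; load *= 10) {
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                handleRequest();
            }
            double elapsedMillis = (System.nanoTime() - start) / 1_000_000.0;
            System.out.printf("load=%d requests, elapsed=%.2f ms%n", load, elapsedMillis);
            if (elapsedMillis > maxAcceptableMillis) {
                System.out.println("Performance became unacceptable at this load level.");
                break;
            }
        }
    }
}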
User testing
User or customer testing is a stage in the testing process in which users or customers provide input
and advice on system testing. User testing is essential, even when comprehensive system and release
testing have been carried out. Types of user testing include:
Alpha testing: users of the software work with the development team to test the software
at the developer's site.
Beta testing: a release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system developers.
Acceptance testing: customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer environment.
In agile methods, the user/customer is part of the development team and is responsible for making
decisions on the acceptability of the system. Tests are defined by the user/customer and are integrated
with other tests in that they are run automatically when changes are made. The main problem here is
whether or not the embedded user is 'typical' and can represent the interests of all system stakeholders.
Use of Acceptance Testing
1. To find the defects missed during the functional testing phase.
2. To assess how well the product has been developed.
3. To verify that the product is what the customers actually need.
4. To gather feedback that helps improve product performance and user experience.
5. To minimize or eliminate issues arising in production.
Advantages of Acceptance Testing
1. This testing helps the project team learn the users' further requirements directly, as it
involves the users in testing.
2. Test execution can be automated.
3. It brings confidence and satisfaction to the clients, as they are directly involved in the
testing process.
4. It is easier for users to describe their requirements.
5. It uses only the black-box testing process, so the entire functionality of the product is tested.
Alpha testing
Alpha testing is conducted within the organization, at the developer's site, by a representative group of
end-users and sometimes by an independent team of testers.
Alpha testing is simulated or real operational testing at an in-house site. It comes after unit testing,
integration testing, etc., and is carried out once all of that testing has been executed.
It can be white-box or black-box testing, depending on the requirements; a particular lab environment
that simulates the actual operating environment is required for this testing.
What is the alpha testing process?
1. Requirement Review: review the design specification and functional requirements.
2. Test Development: test development is based on the outcome of the requirement review;
develop the test cases and the test plan.
3. Test Case Design and Execution: design and execute the test cases according to the test plan.
4. Logging Defects: log the bugs identified and detected in the application.
5. Bug Fixing: once all the bugs have been identified and logged, they need to be fixed.
6. Retesting: once all the issues have been fixed, retesting is done.
To ensure that the software performs flawlessly and does not damage the organization's reputation, the
company implements this final round of testing as alpha testing. Alpha testing is executed in two phases.
First Phase: the first phase of testing is done by in-house developers or software engineers. In this
phase, the testers use debuggers or hardware-assisted debugging tools to catch bugs quickly. During
alpha testing, a tester typically finds many bugs, crashes, missing features, and documentation gaps.
Second Phase: in the second phase, the quality assurance staff perform the alpha testing using both
black-box and white-box techniques.
o One of the benefits of alpha testing is that it reduces the delivery time of the project.
Beta Testing
Beta testing is a type of user acceptance testing and is among the most crucial kinds of testing,
performed before the release of the software. Beta testing is a type of field test carried out at the end of
the software testing life cycle and can be considered external user acceptance testing. Real users perform
this testing, and it is executed after alpha testing. In beta testing, a new version of the software (the beta
version) is released to a limited audience to check its accessibility, usability, functionality, and more.
o Beta testing is the last phase of the testing, which is carried out at the client's or customer's site.
The beta version of the software is delivered to a restricted number of users to collect their feedback and
suggestions on quality improvement. There are two types of beta version:
1) Closed beta version: a closed beta, also known as a private beta, is released to a group of selected
and invited people, who test the software and evaluate its features and specifications. This beta version
represents software that is capable of delivering value but is not yet ready to be used by everyone,
because it may have issues such as a lack of documentation or missing vital features.
2) Open beta version: an open beta, also known as a public beta, is opened to the public. Any user can
act as a tester, assess the beta version, and provide relevant feedback and reviews. The open beta version
improves the quality of the final release and helps to find various undetected errors and issues. The beta
testing process is carried out on whichever beta version is released.
Beta testing is performed at the end of the software testing life cycle and offers numerous benefits to
testers, software developers, and users, since it enables developers and testers to try the product before
its release to the market.
4. It helps to detect defects and issues in the system that were overlooked or undetected by the
team of software testers.
5. Beta testing helps users to install and test the developed software and to send feedback about it.
Disadvantages of beta testing:
1. In this type of testing, the software engineers have no control over the testing process, as the
users perform it in the real-world environment.
2. This testing can be a time-consuming process and can delay the final release of the product.
3. Beta testing does not test the functionality of the software in depth, as the software is still in development.
4. Time and money can be wasted working on feedback from users who do not use the software properly.