Notes ITS670
§ Testing Activities
1. Identify test conditions: determine what can be tested
2. Design test cases: determine how to test the conditions
3. Build test cases: prepare the test cases in documentation
4. Execute test cases
5. Compare the test cases: compare the actual outcome against the expected outcome (to identify problems)
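A minimal Python sketch of these five activities; the add() function, the test conditions and the expected values are illustrative assumptions, not taken from the notes:

    # Item under test (hypothetical)
    def add(a, b):
        return a + b

    # Activities 1-3: test conditions turned into documented test cases (inputs + expected outcome)
    test_cases = [
        {"name": "two positives", "inputs": (2, 3), "expected": 5},
        {"name": "one negative", "inputs": (2, -3), "expected": -1},
    ]

    # Activities 4-5: execute each case, then compare actual vs expected to identify problems
    for case in test_cases:
        actual = add(*case["inputs"])
        status = "PASS" if actual == case["expected"] else "FAIL"
        print(f"{case['name']}: expected={case['expected']}, actual={actual} -> {status}")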
§ Testing Practices
o 4 common levels of testing:
- Component testing: individual programs
- Integration testing: interface (modules are communicating)
- System testing: groups of programs
- Acceptance testing: verify readiness
o Test from small units, gradually moving to bigger ones, until the whole system has
been tested completely and the customer has accepted it
§ Component Testing
o Objective:
- To confirm the system is coded correctly
- To test internal component aspects
o Who tests it? The programmer
o What is tested?
- Function (black box)
- Codes (white box)
- Extremes and boundaries are explored
o Purpose of testing individual components:
- to discover discrepancies between the module’s interface
specification and its actual behavior
o Example: creating an "add customer" function
(the programmer feels okay if the system can save the customer's details); a minimal test sketch follows below
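A minimal component-test sketch for this "add customer" example, assuming a hypothetical add_customer function and an in-memory store (all names are illustrative):

    # Hypothetical component under test: saves a customer's details into a store
    def add_customer(store, name, email):
        if not name or not email:
            raise ValueError("name and email are required")
        store[email] = {"name": name, "email": email}
        return store[email]

    # Black-box check: the details are saved (the "programmer feels okay" case)
    store = {}
    saved = add_customer(store, "Aina", "aina@example.com")
    assert saved == store["aina@example.com"]

    # Boundary/extreme check: missing details are rejected
    try:
        add_customer(store, "", "")
        raise AssertionError("empty details should be rejected")
    except ValueError:
        pass
    print("component tests passed")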
o Objectives of component testing
to verify:
- The object (CSU, procedure) satisfies the allocated
requirements
- The algorithms are correct
- The structure of the object is in accordance with its
documented design (interface, processed data)
o Component Testing Activities:
- Define objectives, resources, responsibilities and schedule
- Design the test sets (inputs, expected results and evaluation criteria)
- Write the test sets (procedures, stubs, drivers)
- Execute the tests (produce the test results)
- Evaluate the test results (actual vs expected outcome)
o Basic Test Types:
- Functional tests (black box testing)
• Do not look at the internal code
• Just test the functions
• Example: delete customer function runs smoothly
• Objectives:
◦ To detect incorrect or missing functions
◦ Interface errors (example: click the add button and check whether it adds or not)
◦ Errors in data structures or external database access
◦ Performance errors
◦ Initialization and termination errors
• Methods of testing:
◦ Equivalence partitioning (see the sketch after this list)
▪ to reduce the number of input data tested
▪ example: a system has a function to calculate all even numbers
◦ Boundary value analysis
◦ Cause-effect graphing
◦ Sampling
▪ similar to the partitioning method but more complex
◦ Robustness
▪ like exception handling
▪ example: if the input is wrong, a pop-up feature appears
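A minimal sketch of equivalence partitioning, boundary value analysis and robustness, assuming a hypothetical validator that accepts marks from 0 to 100 (the function and partitions are illustrative, not from the notes):

    # Hypothetical function under test: valid marks are whole numbers from 0 to 100
    def is_valid_mark(mark):
        return isinstance(mark, int) and 0 <= mark <= 100

    # Equivalence partitioning: test one representative value per partition
    partitions = {"below range": -5, "in range": 50, "above range": 150}
    for label, value in partitions.items():
        print(label, "->", is_valid_mark(value))

    # Boundary value analysis: values at and just around the boundaries 0 and 100
    for value in (-1, 0, 1, 99, 100, 101):
        print("boundary", value, "->", is_valid_mark(value))

    # Robustness: a wrong input type is handled gracefully instead of crashing
    print("wrong type handled:", is_valid_mark("abc") is False)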
• Test Environment
▪ Testing is based on execution of the subprogram
▪ Driver: procedures for calling the module to be tested
▪ Stub: replaces the modules called by the tested CSU
• Driver
▪ main program that accepts the test case and passes data to the module
▪ checks the values obtained
▪ evaluates the results
▪ supplies any control information needed
• Stub
▪ serves to replace modules that are subordinate to the module under test
▪ simulates their behavior in a simple manner
▪ replaces a particular feature
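A minimal driver/stub sketch, assuming a hypothetical compute_invoice_total module under test that normally calls a subordinate get_tax_rate module (all names are illustrative):

    # Stub: replaces the subordinate module called by the tested CSU,
    # returning simple canned data instead of the real implementation
    def get_tax_rate_stub(country):
        return 0.10  # fixed, simplified behavior

    # Module to be tested: normally calls a real get_tax_rate module
    def compute_invoice_total(amount, get_tax_rate):
        return amount + amount * get_tax_rate("MY")

    # Driver: main program that accepts the test case, passes data to the module,
    # checks the value obtained and evaluates the result
    def driver():
        expected = 110.0
        actual = compute_invoice_total(100.0, get_tax_rate_stub)
        print("PASS" if abs(actual - expected) < 1e-9 else "FAIL", actual)

    driver()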
§ Performance Testing
o Definition: the process of determining the speed, responsiveness and stability of a
device, network or program under a workload
o Performance testing process:
1. Identify test environment
2. Determine performance criteria
3. Plan & design performance test
4. Configure test environment
5. Implement test design
6. Run test
7. Analyze, tune & retest
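A minimal sketch of steps 5-7 as a small load test, assuming a hypothetical do_request operation and an arbitrary number of virtual users (names and thresholds are illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical operation under test (stands in for one request to the system)
    def do_request(_):
        start = time.perf_counter()
        sum(range(100_000))                  # simulated work
        return time.perf_counter() - start

    # Run the test: a number of virtual users issuing requests concurrently
    def run_load_test(users=20, requests_per_user=10):
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(do_request, range(users * requests_per_user)))
        # Analyze & tune: compare average/worst response time against the criteria
        avg, worst = sum(timings) / len(timings), max(timings)
        print(f"avg={avg * 1000:.2f} ms, worst={worst * 1000:.2f} ms")

    run_load_test()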
o Types of performance testing:
1. Load testing: helps to understand the behavior of the system under expected load
2. Stress testing: sees how the system works above its limits
3. Soak testing: simulates a steady increase of end users / tests long-term
sustainability
o Stakeholders and roles:
- Performance tester/engineer:
o Understand non-functional requirements
o Analyse artificial business scenario
- Lead performance tester:
o Conduct meeting to identify scope
§ Acceptance Testing
o Definition: the testing carried out once the testing process by the testing team is completed and signed off
o The entire product/application is handed over to the customer
o To test its acceptability
o Process of acceptance testing
1. Analyze requirements
2. Create acceptance testing phase
3. Identify test scenarios
4. Create test cases
5. Prepare test data
6. Run & record test results
7. Confirm business objectives
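A minimal sketch of steps 3-7, recording hypothetical UAT scenarios and results (scenario names and outcomes are placeholders, not real results):

    # Steps 3-4: test scenarios from the requirements, each turned into a test case
    uat_cases = [
        {"scenario": "Register customer", "case": "valid details are accepted",
         "expected": "customer saved"},
        {"scenario": "Register customer", "case": "duplicate email is rejected",
         "expected": "error shown"},
    ]

    # Step 6: run & record results; in real UAT the 'actual' value comes from the
    # user/client exercising the live application, here it is a placeholder
    results = []
    for case in uat_cases:
        actual = "customer saved" if "valid" in case["case"] else "error shown"
        results.append({**case, "actual": actual, "passed": actual == case["expected"]})

    # Step 7: confirm business objectives / sign off only if every case passed
    print("ready for sign-off:", all(r["passed"] for r in results))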
o Types of acceptance testing:
1. User acceptance testing (UAT)
• Beta/end-user testing
• Testing by user/client
2. Business acceptance testing (BAT)
• Focus on business advantages
3. Contract acceptance testing (CAT)
• The contract specifies that once the product goes live, the
acceptance test must be performed (within a predetermined
period)
o Stakeholders:
- Real end users
- Developer
- Project manager
- Team members
§ Installation Testing
o Definition: testing performed to check whether the software has been correctly installed
with all its features
o Process of installation testing:
1. Check the product for other versions
2. Verify for installation
3. Display instructions on installer correctly
4. Break the installation process in the middle
5. Check for disk space
6. Monitor for low disk space
7. Test the facility in the installer package
8. Complete installation, verify the registry changes
9. Test for uninstallation (check all files have been deleted)
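A minimal sketch of the disk-space checks (steps 5-6) and a file-level check related to step 8, assuming hypothetical install paths and size requirements; registry checks are Windows-specific and not shown:

    import os
    import shutil

    REQUIRED_BYTES = 500 * 1024 * 1024          # assumed space the installer needs
    INSTALL_DIR = "/opt/myapp"                  # hypothetical install location
    EXPECTED_FILES = ["myapp", "config.ini"]    # hypothetical files the installer creates

    # Steps 5-6: check and monitor free disk space before/while installing
    free_bytes = shutil.disk_usage("/").free
    print("enough disk space:", free_bytes >= REQUIRED_BYTES)

    # Step 8 (file-level part): after installation completes, verify the expected files exist
    for name in EXPECTED_FILES:
        path = os.path.join(INSTALL_DIR, name)
        print(path, "present" if os.path.exists(path) else "MISSING")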
o Types of installation testing:
1. Silent installation
2. Unattended installation
3. Network installation
4. Clean installation
5. Automated installation