Notes ITS670

The document covers the phases of testing activities. It outlines prerequisites for testing, such as having the proper environment and applications, then details the main testing activities: identifying test conditions, designing test cases, building test cases by documenting them, executing the test cases, and comparing actual versus expected outcomes. Test conditions are identified through various techniques; test cases are designed as steps with inputs and expected outcomes, then built by preparing documentation. Component testing is also explored, where individual programs are tested by programmers; the goals are to test functions and internal aspects through both black-box and white-box testing.


Chapter 2: Testing Activities

§ Prerequisites for Testing Activities


o Prerequisite: a condition or requirement that must be met before execution can start
o Prerequisites should be available & satisfy the minimal requirements
o E.g.: environment, browser, application type

§ Testing Activities
1. Identify test conditions: determine what can be tested
2. Design test cases: determine how to test the conditions
3. Build test cases: prepare the test cases in documentation
4. Execute test cases
5. Compare outcomes: compare the actual outcome vs the expected outcome (why? to identify problems)

§ Identify Test Condition


o Test condition: an item or event that could be verified by a test
Example: when a user wants to book, the user must log in first.
o Prioritize test conditions - make sure the main functions can be used
Example: when developing a booking system, make sure a booking can be made.
o Testing techniques (the first two are sketched at the end of this section):
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect graphing (draw a graph)
4. Error guessing (calling an expert to test)
5. Unique testing (the developer tests the code)
o Test conditions are descriptions of the circumstances that could be examined. They may be documented as:
1. Brief sentences
2. Table entries
3. Flow graphs (diagrams)
E.g.: a test scenario validates login in general, while test conditions validate the username and password specifically
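
A minimal sketch of the first two techniques, assuming a hypothetical order-quantity field that accepts values from 1 to 100 (the function name and limits are illustrative, not from the notes):

```python
# Equivalence partitioning and boundary value analysis for a
# hypothetical order-quantity field that accepts 1..100.

def accept_quantity(qty: int) -> bool:
    """Returns True if the quantity is within the valid range."""
    return 1 <= qty <= 100

# Equivalence partitioning: one representative value per partition
# is enough, which reduces the number of inputs tested.
partitions = {"below range (invalid)": 0,
              "within range (valid)": 50,
              "above range (invalid)": 150}
for name, value in partitions.items():
    print(name, value, "->", accept_quantity(value))

# Boundary value analysis: test exactly at and around each boundary,
# where off-by-one defects tend to hide.
for value in (0, 1, 2, 99, 100, 101):
    print("boundary", value, "->", accept_quantity(value))
```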
§ Design Test Cases
o A test case is a set of tests performed in a sequence.
o Test case designs will produce a number of tests comprising:
1. specific input values
2. expected outcomes
3. any other information needed for the test to run, such as
environment prerequisites. (example: internet connection)
o Expected outcomes include:
1. Things that should be output or created
2. Things that should be changed or updated (in a database, for example)
3. Things that should not be changed
4. Things that should be deleted
o Example:
Step | Input                                                                                      | Expected outcome                                                 | Test conditions
-----|--------------------------------------------------------------------------------------------|------------------------------------------------------------------|----------------
1    | Create a new order for any one standard order item, setting order quantity to exactly 100 | Order confirmation message displayed                             | VB10, VB23
2    | Confirm the order                                                                          | Purchase order printed with correct details                      | VB10, VB23
3    | Print a new orders report                                                                  | New orders report printed showing just this one new order        | VB10, VB23
4    | Cancel the order                                                                           | Purchase order cancellation notice printed with correct details  | V8
5    | Print a new orders report                                                                  | Report printed showing no outstanding purchase orders            | V8
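
A minimal sketch of how such a test case could be held as data, using the steps and condition IDs from the table above:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    step: int
    input: str
    expected_outcome: str
    test_conditions: list[str] = field(default_factory=list)

# The order test case above as one sequence of steps, each with an
# input, an expected outcome, and the conditions it covers.
test_case = [
    TestStep(1, "Create a new order, quantity exactly 100",
             "Order confirmation message displayed", ["VB10", "VB23"]),
    TestStep(2, "Confirm the order",
             "Purchase order printed with correct details", ["VB10", "VB23"]),
    TestStep(3, "Print a new orders report",
             "Report shows just this one new order", ["VB10", "VB23"]),
    TestStep(4, "Cancel the order",
             "Cancellation notice printed with correct details", ["V8"]),
    TestStep(5, "Print a new orders report",
             "Report shows no outstanding purchase orders", ["V8"]),
]
print(len(test_case), "steps defined")
```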

§ Build Test Cases


o Test script: the data and/or instructions with a formal syntax, used by a test execution automation tool, typically held in a file
o The test cases are implemented by preparing test scripts, test inputs, test data, and expected outcomes (example: put them on paper, compile them, and keep them together)
o Example of documentation: Software Test Description (STD)
o Test script = test procedure
o Pre-conditions must be implemented so that the tests can be executed
o May require special hardware or software
o Expected outcomes may be organized into files for automation tools to use
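
A minimal sketch of a test script for step 1 of the example test case above, written as a plain Python script; the function name and message text are assumptions, not from the notes:

```python
# Hypothetical test script for step 1: create an order with quantity
# exactly 100 and compare the actual message with the expected one.

def create_order(item: str, quantity: int) -> str:
    """Stand-in for the system under test (assumed behaviour)."""
    if 1 <= quantity <= 100:
        return "Order confirmation message displayed"
    return "Quantity rejected"

def test_step_1():
    expected = "Order confirmation message displayed"
    actual = create_order("standard item", 100)
    assert actual == expected, f"expected {expected!r}, got {actual!r}"

if __name__ == "__main__":
    test_step_1()
    print("Step 1 passed")
```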

§ Execute Test Cases


o The software under test is executed using the test case
o Manual testing: the tester sits down and follows the printed test procedure
o Automated testing: the tester starts the test tool and tells it which test cases to execute

§ Compare Test Outcome & Expected Outcome


o Comparing the actual and expected outcomes of the system
o Difference between comparing and verifying: a tool can compare but
cannot verify
o A tool can compare one set of test outcomes to another, and can flag any
differences between the two.
o The tool cannot say whether or not the outcomes are correct; this is verification, and it is normally done by the tester
o It is the tester who confirms or ensures that the results being compared are in fact correct
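
A minimal sketch of the compare-versus-verify distinction: a tool flags differences between two outcome sets, but judging whether the expected values themselves are right stays with the tester (the outcome strings are illustrative):

```python
# A tool can compare one set of test outcomes to another and flag
# any differences; it cannot verify that either set is correct.

expected = {"step 1": "confirmation displayed", "step 2": "order printed"}
actual   = {"step 1": "confirmation displayed", "step 2": "nothing printed"}

for step, want in expected.items():
    got = actual.get(step)
    if got != want:
        # The tool only reports the mismatch; deciding whether the
        # expected value itself is right is the tester's verification.
        print(f"{step}: MISMATCH (expected {want!r}, got {got!r})")
    else:
        print(f"{step}: match")
```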
Chapter 3: Testing the Program

§ Testing Practices
o 4 common levels of testing:
- Component testing: individual programs
- Integration testing: interface (modules are communicating)
- System testing: groups of programs
- Acceptance testing: verify readiness
o Testing starts from small units and gradually moves to bigger ones until the whole system has been tested completely and the customer has accepted it
§ Component Testing
o Objective:
- To confirm the system is coded correctly
- To test internal component aspects
o Who tests it? The programmer
o What is tested?
- Function (black box)
- Codes (white box)
- Extremes and boundaries are explored
o Purpose of testing individual components:
- to discover discrepancies between the module’s interface
specification and its actual behavior
o Example: an "add customer" function (the programmer is satisfied if the system can save the customer's details)
o Objectives of component testing
to verify:
- The object (CSU, procedure) satisfies the allocated
requirements
- The algorithms are correct
- The structure of the object is in accordance with its
documented design (interface, processed data)
o Component Testing Activities:
- Define objectives, resources, responsibilities and schedule
- Design the test sets (inputs, expected results and evaluation
criteria)
- Write the test sets (procedures, stubs, drivers)
- Execute the tests (produce the test results)
- Evaluate the test results (actual vs expected outcome)
o Basic Test Types:
- Functional tests (black box testing)
• Do not look at the interior code
• Just test the functions
• Example: the delete customer function runs smoothly
• Objectives, to detect:
- Incorrect or missing functions
- Interface errors (example: click the add button and check whether it adds or not)
- Errors in data structures or external database access
- Performance errors
- Initialization and termination errors
• Methods of testing:
- Equivalence partitioning: to reduce the number of input data tested (example: the system has a function to calculate all even numbers)
- Boundary value analysis
- Cause-effect graphing
- Sampling: similar to the partitioning method but more complex
- Robustness: like exception handling (example: if the input is wrong, a pop-up appears)
• Test Environment (driver and stub are sketched after this list)
- Tests are based on execution of the subprogram
- Driver: a procedure for calling the module to be tested
- Stub: replaces the modules called by the tested CSU
• Driver
- The main program that accepts the test case and passes data to the module
- Checks the values obtained
- Evaluates results
- Controls the information needed
• Stub
- Serves to replace modules that are subordinate to the tested module
- Simulates them in a simple manner
- Replaces a particular feature

- Structural tests (white box testing)
• Look at the code
• Check whether the code runs smoothly or not
• Methods (see the second sketch after this list):
1. Control/flow graph
2. Instruction block (IB)
3. Decision-to-decision path (DDP)
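
A minimal sketch of a driver and a stub around a module under test, as referenced above; all names and values are illustrative:

```python
# Stub: replaces a subordinate module (e.g., a real tax-rate lookup)
# with a simple stand-in that returns a canned value.
def tax_rate_stub(region: str) -> float:
    return 0.10  # fixed rate instead of a real lookup

# Module under test (CSU): calls the subordinate module.
def compute_total(price: float, region: str, tax_rate=tax_rate_stub) -> float:
    return price * (1 + tax_rate(region))

# Driver: the main program that accepts the test cases, passes the
# data to the module, and evaluates the results.
def driver():
    cases = [((100.0, "north"), 110.0), ((50.0, "south"), 55.0)]
    for (price, region), expected in cases:
        actual = compute_total(price, region)
        status = "pass" if abs(actual - expected) < 1e-9 else "FAIL"
        print(f"{status}: compute_total({price}, {region!r}) = {actual}")

if __name__ == "__main__":
    driver()
```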
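And a minimal white-box sketch for the structural methods above: a function with two decisions, and inputs chosen so that every decision-to-decision path is executed at least once (the function is illustrative):

```python
# White-box (structural) testing: choose inputs from the code's own
# structure so that every decision-to-decision path is exercised.

def classify(x: int) -> str:
    if x < 0:          # decision 1
        return "negative"
    if x == 0:         # decision 2
        return "zero"
    return "positive"

# One input per path through the decisions covers every branch.
for value, expected in [(-5, "negative"), (0, "zero"), (7, "positive")]:
    actual = classify(value)
    print("pass" if actual == expected else "FAIL", value, "->", actual)
```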
§ Integration Testing
o Intermediate phase between unit testing and system testing
Chapter 4: Testing the System
§ Functional Testing
o Definition: a type of testing that verifies each function of the software
application
o Black box testing: a testing technique in which the functionality of the application under test (AUT) is tested
o Focuses on input & output without bothering about the internals
o How it is tested:
- Provide appropriate input
- Verify the output
- Compare the actual result with the expected result
o Goals:
- To assure the software fulfills the requirements
- To concern with program specifications
- To validate that there are no issues after release
- To determine how closely a program matches specifications
o Process of functional testing (see the sketch at the end of this section):
- Determine implemented functions
- Prepare input based on specification
- Determine output based on specification
- Perform test cases
- Observe the result, compare with expected result
o Types of functional testing:
- Smoke testing: the most important functions work
- Sanity testing: a specific function works as expected
- Regression testing: old code still works once changes are made
- User acceptance testing: tested in real-world conditions
- Unit testing
- Integration testing
- System testing
o Tools: Ranorex, SoapUI, Test IO, JUnit
o Advantages:
- Tests are done from the user's point of view
- Helps ensure delivery of a bug-free product
- The tester does not need to know how the software is implemented
o Disadvantages:
- Only a small number of inputs can be tested
- Tests can be redundant
- Logical errors can be missed
o Best practices:
1. Start writing test cases early (in the requirement analysis & design phases)
2. Collect information
3. Make test plan & test cases
4. Execute the test
5. Consider automated testing
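
The functional testing process above, as a minimal sketch for a single hypothetical function whose specification says a 10% discount applies to orders over 100 (all names and values are assumptions):

```python
# Functional testing process: inputs and expected outputs come from
# the specification; the test executes the function and compares.

def apply_discount(total: float) -> float:
    """Function under test: spec says 10% off orders over 100."""
    return total * 0.9 if total > 100 else total

# (input, expected output) pairs prepared from the specification.
spec_cases = [(50.0, 50.0), (100.0, 100.0), (200.0, 180.0)]

for given, expected in spec_cases:
    observed = apply_discount(given)
    print("pass" if observed == expected else "FAIL",
          f"apply_discount({given}) = {observed}, expected {expected}")
```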

§ Performance Testing
o Definition: the process of determining the speed, responsiveness & stability of a device, network, or program under a workload
o Process of performance testing:
1. Identify test environment
2. Determine performance criteria
3. Plan & design performance test
4. Configure test environment
5. Implement test design
6. Run test
7. Analyze, tune & retest
o Types of performance testing (a load-measurement sketch follows at the end of this section):
1. Load testing: helps to understand the behavior of the system under expected load
2. Stress testing: sees how the system works above its limits
3. Soak testing: simulates a steady load of end users over a long period to test long-term sustainability
o Stakeholders and roles:
- Performance tester/engineer:
o Understands non-functional requirements
o Analyses artificial business scenarios
- Lead performance tester:
o Conducts meetings to identify scope
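
A minimal sketch of measuring speed and responsiveness under a small concurrent workload; the simulated operation, user count, and thresholds are all illustrative:

```python
# Tiny load-test sketch: run an operation under concurrent "users"
# and measure response times against the plan's criteria.
import time
from concurrent.futures import ThreadPoolExecutor

def operation() -> float:
    """Stand-in for one request to the system under test."""
    start = time.perf_counter()
    sum(range(100_000))  # simulated work
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:  # 10 concurrent users
    timings = list(pool.map(lambda _: operation(), range(100)))

avg = sum(timings) / len(timings)
print(f"avg {avg * 1000:.2f} ms, worst {max(timings) * 1000:.2f} ms")
# Compare these figures against the performance criteria from step 2.
```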

§ Acceptance Testing
o Definition: testing performed after the testing process has been completed & signed off by the testing team
o The entire product/application is handed over to customers
o To test its acceptability
o Process of acceptance testing:
1. Analyze requirements
2. Create an acceptance test plan
3. Identify test scenarios
4. Create test cases
5. Prepare test data
6. Run & record test results
7. Confirm business objectives
o Type of acceptance testing:
1. User acceptance testing (UAT)
• Beta/end-user testing
• Testing by user/client
2. Business acceptance testing (BAT)
• Focus on business advantages
3. Contract acceptance testing (CAT)
• Based on a contract which specifies that once the product goes live, the acceptance test must be performed (within a predetermined period)
o Stakeholders:
- Real end users
- Developer
- Project manager
- Team members
§ Installation Testing
o Definition: testing performed to check whether the software has been correctly installed with all its features
o Process of installation testing:
1. Check the product for other versions
2. Verify for installation
3. Display instructions on installer correctly
4. Break the installation process in the middle
5. Check for disk space
6. Monitor for low disk space
7. Test the facility in the installer package
8. Complete installation, verify the registry changes
9. Test for uninstallation (check all files have been deleted)
o Types of installation testing:
1. Silent installation
2. Unattended installation
3. Network installation
4. Clean installation
5. Automated installation

§ Safety Critical System


o Definition: testing to minimize the risks & hazards associated with the safety of the software & its environment
o A system where human safety is dependent upon the correct operation of the system
o Where failure/malfunction may result in death/serious injury
o Process of probabilistic risk assessment (PRA); step 4 is sketched at the end of this section:
1. Perform a primary hazard analysis
2. Calculate the severity of each impact, classify them
3. Calculate the probability of occurrence, classify them
4. Calculate the assessment of risk -> combine impact & probability
o Types of testing for safety critical systems:
o Probabilistic risk assessment (PRA)
o Failure mode, effects & criticality analysis (FMECA)
o Hazard & operability analysis (HAZOP)
o Fishbone diagram
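
A minimal sketch of PRA step 4, combining a severity class with a probability class into a risk level; the class labels and the mapping are illustrative, not from the notes:

```python
# PRA step 4 sketch: combine an impact's severity class with its
# probability class to obtain a risk level (illustrative matrix).

SEVERITY = ["minor", "major", "catastrophic"]        # step 2 classes
PROBABILITY = ["remote", "occasional", "frequent"]   # step 3 classes

def risk_level(severity: str, probability: str) -> str:
    score = SEVERITY.index(severity) + PROBABILITY.index(probability)
    return ["low", "low", "medium", "high", "high"][score]

print(risk_level("catastrophic", "frequent"))  # -> high
print(risk_level("minor", "remote"))           # -> low
```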
