Chapter 19 Notes

This document discusses strategies for software component testing. It addresses the flexibility and rigidity of component testing strategies, as well as who is responsible for testing. It also describes the spiral model of software testing and how the testing scope broadens with each turn outward on the spiral, from unit testing to system testing. Finally, it discusses criteria for considering when testing is "done" and notes there is no definitive answer, though testing may be considered complete when time or budget constraints are reached.


19.1 A Strategic Approach to Software Testing

1. Flexibility and Rigidity of Component Testing Strategy:

• A component testing strategy should be flexible enough to allow customization, yet rigid enough to support planning and management.

• Flexibility encourages a customized approach, while rigidity promotes planning and progress tracking.

2. Responsibility for Component Testing:

• Component testing remains the responsibility of individual software engineers.

• Who performs the testing, how results are communicated, and when testing occurs all depend on the chosen software integration approach and design philosophy.

3. Strategy and Tactics:

 "Approaches and philosophies" are referred to as strategy and tactics.

• Integration testing techniques, discussed in Chapter 20, often define the team's development strategy.

19.1.1 Verification and Validation

4. Verification and Validation (V&V):

 V&V is a broader topic encompassing software testing.

 Verification ensures that software correctly implements specific functions.

 Validation ensures that the software aligns with customer requirements.

 V&V includes various quality assurance activities, beyond just testing.

5. Verification vs. Validation:

• Verification asks: "Are we building the product right?"

• Validation asks: "Are we building the right product?"

19.1.2 Organizing for Software Testing

6. Conflict of Interest in Testing:

• There is a conflict of interest when testing begins, because developers are asked to test their own software.

• Developers have a vested interest in demonstrating that the software is error-free and meets customer requirements, which works against thorough testing.

7. Psychological Perspective:

• Software analysis, design, and coding are constructive tasks, and developers take pride in their work.

• Testing, by contrast, is perceived as psychologically "destructive," because it sets out to uncover errors.
8. The Role of Independent Test Groups (ITG):

• The ITG's role is to remove the conflict of interest inherent in having developers test their own software.

• ITG personnel are dedicated to finding errors during testing.

9. Collaboration between Developers and ITG:

• Developers are responsible for unit testing and often perform integration testing as well.

• The ITG is involved throughout the project, participating in analysis, design, and planning and specifying test procedures.

10. Independence of ITG:

• In some cases, the ITG reports to the software quality assurance organization in order to maintain a degree of independence.

19.1.3 The Big Picture

11. Spiral Model of Software Testing:

• The software process can be visualized as a spiral.

• System engineering defines the role of software and leads to software requirements analysis.

• The spiral then moves inward through design and coding, decreasing the level of abstraction with each turn.

12. Testing Strategy in the Spiral Model:

• A testing strategy can likewise be viewed in the context of the spiral.

• Unit testing begins at the vortex (center) of the spiral and focuses on individual units.

• Testing then progresses outward to integration testing, validation testing, and finally system testing.

• The scope of testing broadens with each outward turn of the spiral.

19.1.4 Criteria for "Done"

13. When Testing Is Done:

• There is no definitive answer to when testing is "done."

• One common response is that testing never truly ends; responsibility simply shifts from the software engineers to the end users.

• Pragmatically, testing may be considered complete when time or budget constraints are exhausted.

• Statistical quality assurance methods can provide guidance on when testing is sufficient (one such model is sketched below).
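
As one concrete illustration of such statistical guidance, here is a minimal sketch of the logarithmic Poisson execution-time model, one model cited in the testing literature for this purpose; the function name and parameter values are assumptions for illustration, not from the text.

```python
import math

# Sketch of the logarithmic Poisson execution-time model; parameter
# values here are purely illustrative, not from the text.
def cumulative_failures(t, l0, p):
    """Predicted cumulative failures after t hours of execution.
    l0: initial failure intensity (failures per hour);
    p:  exponential reduction in intensity as failures are repaired.
    Both parameters are fitted from data collected during testing."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

# If the fitted curve flattens out (the predicted failure intensity
# drops below an agreed threshold), testing may be judged sufficient.
print(round(cumulative_failures(t=100.0, l0=0.5, p=0.025), 1))  # 32.4
```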

19.2 Planning and Record Keeping

14. Incremental Testing Strategy:

• A recommended testing strategy falls between the extremes, taking an incremental view of testing.

• Incremental testing begins with unit testing, then proceeds through integration testing, validation testing, and system testing.

15. Principles for Effective Testing:

• Effective software testing follows key principles: specify requirements in a quantifiable manner, state testing objectives explicitly, understand the profiles of the software's users, emphasize rapid-cycle testing, build robust software, use technical reviews, and apply a continuous improvement approach.

16. Agile Testing Approach:

• In agile development, the test plan is established before the first sprint meeting and is reviewed by stakeholders.

• Test cases, and directions for using them, are developed as code is implemented for each user story.

• Testing results are shared with all team members, allowing changes to be made in both existing and future code development.

19.2.1 Role of Scaffolding

17. Unit Testing Framework:

• Unit testing focuses on individual components or modules of the software.

• A unit-testing framework includes drivers (to execute the tests) and stubs (to replace subordinate modules) when they are needed.

• Drivers accept test-case data, pass the data to the component under test, and print the results.

• Stubs stand in for modules subordinate to the component under test, using only their interfaces (a minimal sketch follows).
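
A minimal sketch of this scaffolding; all component names (compute_tax, lookup_rate) are hypothetical, not from the text. The driver feeds test-case data to the component under test and prints results, while the stub stands in for a subordinate rate-lookup module.

```python
# Minimal unit-test scaffolding sketch (hypothetical names).

def lookup_rate(region):
    """Stub: mimics the interface of a subordinate rate-lookup module
    and returns canned data."""
    return {"EU": 0.21}.get(region, 0.0)

def compute_tax(amount, region, rate_source=lookup_rate):
    """Component under test; the subordinate module is passed in so a
    stub can stand in for the real one."""
    return amount * rate_source(region)

def driver():
    """Driver: accepts test-case data, passes it to the component
    under test, and prints the relevant results."""
    cases = [(100.0, "EU", 21.0), (100.0, "US", 0.0)]
    for amount, region, expected in cases:
        result = compute_tax(amount, region)
        print(region, "PASS" if result == expected else f"FAIL ({result})")

if __name__ == "__main__":
    driver()
```

Passing the subordinate module as a parameter is one simple way to let a stub replace it without modifying the component.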

19.2.2 Cost-Effective Testing

18. Challenges in Exhaustive Testing:

• Exhaustive testing means testing every possible combination of input values and test-case orderings.

• In practice this is infeasible, and even comprehensive (non-exhaustive) testing cannot guarantee a bug-free component.

• Comprehensive testing is also highly resource-intensive, as the rough calculation below suggests.
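
A back-of-the-envelope illustration with assumed figures of why exhaustive testing is impractical even for a tiny interface:

```python
# Assumed figures: a component taking just two 32-bit integer inputs
# has 2**64 possible input combinations.
combinations = 2 ** 64
tests_per_second = 10 ** 6              # an optimistic execution rate
years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")            # roughly 585,000 years
```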

19. Minimizing Test Cases:

• Testers should focus on modules that are crucial to project success or that are suspected to be error-prone because of their complexity.

• Techniques for minimizing the number of test cases, discussed in Sections 19.4 through 19.6, can be employed for efficient testing.

Section 19.3 - Test Case Design

1. Designing Unit Test Cases:

• Design unit test cases before the code is developed, to ensure that the code will pass the tests (see the test-first sketch below).
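
A minimal test-first sketch using Python's unittest; the clamp function is a hypothetical example. The three tests are written first, and the implementation is then developed until all of them pass.

```python
import unittest

def clamp(value, low, high):
    """Implementation written to satisfy the tests below."""
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_below_lower_bound(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_above_upper_bound(self):
        self.assertEqual(clamp(99, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```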

2. Key Aspects of Unit Testing:

• Test the module interface to ensure proper data flow.

• Verify local data structures to maintain data integrity.

• Exercise all independent paths through the control structure.

• Test boundary conditions to ensure proper operation at limits.

• Test all error-handling paths.

3. Importance of Data Flow Testing:

• Data flow across the component interface should be tested first.

• Local data structures should be exercised during unit testing.

4. Unique Test Design:

• Avoid redundancy by designing unique test cases, each focused on uncovering new errors.

5. Utilizing Requirements and Use Cases:

• Use requirements and use cases to guide the creation of test cases for both functional and nonfunctional requirements.

• User stories, acceptance criteria, and anti-requirements are valuable sources for test-case design.

6. Ensuring Traceability:

• Ensure that each test case can be traced back to a specific functional or nonfunctional requirement.

• Traceability helps with auditability and consistency during testing (a lightweight convention is sketched below).
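
One lightweight way to record this traceability; the requirement IDs and test names here are invented for illustration.

```python
# Hypothetical convention: each test case is mapped to the requirement
# it verifies, so coverage can be audited.
TRACEABILITY = {
    "test_login_rejects_bad_password": "REQ-AUTH-003",  # functional
    "test_report_renders_under_2s": "REQ-PERF-001",     # nonfunctional
}

def untested(requirements):
    """Return requirement IDs that no test case traces back to."""
    covered = set(TRACEABILITY.values())
    return [r for r in requirements if r not in covered]

print(untested(["REQ-AUTH-003", "REQ-PERF-001", "REQ-SEC-002"]))
# -> ['REQ-SEC-002']
```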

Section 19.4 - White-Box Testing

1. Basis Path Testing:

• Basis path testing is a white-box testing technique.

• It derives test cases from the control structure of the program.

• Independent paths are traversed, ensuring that every statement is executed at least once.

2. Flow Graphs:

• Flow graphs are used to represent the program's control flow.

• Circles (nodes) represent program statements, and arrows (edges) show the flow of control.

• Regions are areas bounded by edges and nodes.

3. Cyclomatic Complexity:

• Cyclomatic complexity is a metric that quantifies the logical complexity of a program.

• It provides an upper bound on the number of independent paths and, consequently, on the number of tests that must be designed.

• Cyclomatic complexity V(G) can be computed in several equivalent ways, for example V(G) = E - N + 2 (edges minus nodes plus two) or V(G) = P + 1 (predicate nodes plus one); a worked example follows.
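
A worked example on an illustrative routine (not from the text): counting edges, nodes, and predicate nodes on its flow graph gives the same V(G) by either formula.

```python
def grade(score):
    if score >= 90:        # predicate node 1
        return "A"
    elif score >= 70:      # predicate node 2
        return "B"
    else:
        return "C"

# One way of drawing the flow graph gives N = 6 nodes and E = 7 edges:
#   V(G) = E - N + 2 = 7 - 6 + 2 = 3
#   V(G) = P + 1     = 2 predicates + 1 = 3
# So at most 3 independent paths (one per return) must be exercised,
# e.g., with scores 95, 80, and 50.
```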

4. Using Cyclomatic Complexity:

• Cyclomatic complexity helps identify which components and operations are likely to be more error-prone.

• Components with higher cyclomatic complexity values are prioritized for white-box testing.

• It does not guarantee error detection, but it helps focus the testing effort.

5. Control Structure Testing:

• Besides basis path testing, control structure testing includes condition testing, data flow testing, and loop testing.

• Loop testing covers both simple loops and nested loops, with specific tests for different numbers of loop iterations.
6. Simple Loops Testing:

• For a loop with a maximum of n allowable passes, the tests include skipping the loop entirely, exactly one pass, two passes, a typical number m of passes (m < n), and n - 1, n, and n + 1 passes (sketched below).
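
A sketch of these cases for a hypothetical loop whose maximum number of allowable passes is n = 5:

```python
def run_loop(passes, n=5):
    count = 0
    for _ in range(min(passes, n)):  # the loop never exceeds n passes
        count += 1
    return count

# skip the loop, one pass, two passes, m < n, and n-1 / n / n+1 passes
for passes in (0, 1, 2, 3, 4, 5, 6):
    print(passes, "->", run_loop(passes))
```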

7. Nested Loops Testing:

• To reduce the number of tests for nested loops, testing is conducted incrementally, from the innermost loop outward.

• This approach limits the otherwise exponential growth in the number of test cases for deeply nested loops.

Section 19.5 - Black-Box Testing

1. Introduction to Black-Box Testing:

• Black-box testing focuses on the functional requirements of the software.

• It is complementary to white-box testing and uncovers different classes of errors.

• It aims to find incorrect or missing functions, interface errors, errors in data structures, behavior or performance errors, and initialization and termination errors.

2. Categories of Errors:

• Black-box testing is designed to find errors in several categories, including:

• Incorrect or missing functions.

• Interface errors.

• Errors in data structures or external database access.

• Behavior or performance errors.

• Initialization and termination errors.

3. Late-Stage Testing:

• Black-box testing is typically applied in the later stages of testing.

• It focuses on the software's behavior and ignores its control structure.

• It is concerned primarily with the information domain.

4. Key Questions in Test Case Design:

• Black-box testing involves designing test cases to answer questions such as:

• How is functional validity tested?

• How are system behavior and performance tested?

• What classes of input make good test cases?

• Which input values affect the system's behavior?

• How are the boundaries of data classes isolated?

• What data rates and data volumes can the system handle?

• How do specific combinations of data affect system operation?

5. Criteria for Test Case Design:

• Black-box testing aims to create test cases that:

• Reduce the number of additional test cases that must be designed to achieve reasonable testing.

• Reveal the presence or absence of classes of errors, not just specific errors.

19.5.1 - Interface Testing

1. Purpose of Interface Testing:

• Interface testing checks that a program component accepts information in the proper order and with the proper data types.

• It also checks that the component returns information in the correct order and format.

2. Role in Integration Testing:

• Interface testing is often considered part of integration testing.

• It ensures that a component, when integrated into the larger program, does not break the build.

3. Stubs and Drivers:

• Stubs and drivers play a role in interface testing by exercising components with their test cases.

• They may incorporate test cases, debugging code, or checks on the data passed between components.

19.5.2 - Equivalence Partitioning

1. Equivalence Partitioning:

• Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data.

• Test cases are derived from the equivalence classes, covering both valid and invalid states.

2. Guidelines for Defining Equivalence Classes:

• Equivalence classes are defined from the input conditions, which may be ranges, specific values, sets of values, or Boolean conditions.

3. Test Case Development:

• Test cases are designed to cover the attributes of an equivalence class as thoroughly as possible (an illustration follows).
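
An illustration with an assumed input rule, a field that accepts an integer age from 18 to 65: the rule yields one valid and two invalid equivalence classes, and one representative value per class suffices to exercise each class.

```python
# Assumed rule: a field accepts an integer age between 18 and 65
# inclusive.  This yields three equivalence classes: one valid
# (18 <= age <= 65) and two invalid (age < 18, age > 65).
def accepts_age(age):
    return 18 <= age <= 65

# One representative value per class exercises the entire class:
for representative in (40, 10, 80):
    print(representative, "->", accepts_age(representative))
# 40 -> True, 10 -> False, 80 -> False
```
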
19.5.3 - Boundary Value Analysis

1. Importance of Boundary Value Analysis:

• Boundary value analysis (BVA) complements equivalence partitioning, because more errors tend to occur at the boundaries of the input domain.

• The technique selects test cases at the edges of the input classes.

• It also derives test cases from conditions on the output domain.

2. Guidelines for Boundary Value Analysis:

• Test cases should be designed for boundary values: values at each boundary and just above and below it (illustrated below).

• These guidelines apply to both input and output conditions.

• Internal data structures with prescribed boundaries should also be tested at those boundaries.
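
Extending the assumed 18-to-65 age field from Section 19.5.2, BVA picks values at each boundary and just above and below it, rather than values from the interior of a class:

```python
def accepts_age(age):
    return 18 <= age <= 65

# Values at, just below, and just above each boundary of the class:
for age in (17, 18, 19, 64, 65, 66):
    print(age, "->", accepts_age(age))
# 17 -> False, 18 -> True, 19 -> True,
# 64 -> True,  65 -> True, 66 -> False
```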
