Fault Based Testing, Planning
and Monitoring the Process,
Documenting Analysis and Test

Presented by: Kavya HV, Asst. Professor, Dept. of MCA, PESITM

Fault Based Testing:
• Fault based testing is a type of testing that involves identifying faults (or defects) in the software by using test cases.
• This type of testing is used to find defects in software and systems that could potentially cause problems, errors, or failures when the software is in use.

Assumptions in Fault Based Testing:
• The first assumption is based on the fact that all software is designed and developed by humans, and humans make mistakes. As a result, it is assumed that there will be faults (or defects) in the software that need to be identified and corrected.
• The second assumption is that it is possible to create test cases that will trigger these faults or defects.
• The third assumption is that by identifying and correcting these faults, the overall quality of the software will improve.

Mutation Testing:
• Mutation testing is a technique used to evaluate the quality of the testing of a piece of software by making small changes (mutations) to the source code and checking whether the tests can detect these changes.
• The main goal of mutation testing is to check how well the test cases find bugs or faults in the code.

Mutation Analysis:
Mutation analysis means analyzing the source code by making small changes to it, running the tests, and checking whether the tests detect the faults introduced by those changes, i.e. whether the changed program can be distinguished from the original.

Example program:

    n = int(input("Enter a number: "))
    rev = 0
    while n > 0:
        digit = n % 10
        rev = rev * 10 + digit
        n = n // 10
    print("Reverse of the number is: ", rev)

After changing (the mutant):

    n = int(input("Enter a number: "))
    rev = 0
    while n > 0:
        digit = n % 10
        rev = rev + 10 + digit   # [mutation of * to +]
        n = n // 10
    print("Reverse of the number is: ", rev)

Variations on Mutation Analysis:
Strong mutation analysis: A mutant is killed based on the final output produced by executing the test cases; only if the mutant's output differs from the original program's output is the mutant considered killed.
Weak mutation analysis: A mutant is killed without waiting for the final output; if an intermediate state of the mutant differs from the corresponding state of the original program, the mutant is killed.
• Here we do not wait for the output produced by executing the test cases.

Fault Based Adequacy Criteria:
Given a program and a test suite T, mutation analysis consists of the following steps:
Select mutation operators. Typical mutation operators are:
• Delete a statement
• Duplicate a statement
• Change a variable name to another
• Replace True with False (or vice versa)
• Exchange operators, e.g. + to * or > to >=
Generate mutants: Mutants are generated mechanically by applying the mutation operators to the original program.
Distinguish mutants: Execute the original program and each generated mutant with the test cases in T. A mutant is killed when it can be distinguished from the original program. A mutant can remain live for two reasons:
• The test suite T does not have sufficient test cases to distinguish the mutant from the original program.
• The mutant cannot be distinguished from the original program by any test case.

Scaffolding:
Scaffolding refers to temporary code or structures designed to support development activities and to test the software.
• It is separate code designed by a tester to support the testing activity, and this process is called scaffolding.
• Scaffolding includes components such as test harnesses, drivers, and stubs (a small driver-and-stub sketch appears after the test oracle figure below).
• A test driver is a software component or application that controls the execution of the program under test.
• Test stubs stand in for components that the software under test depends on, so that its behavior or functionality can be checked in isolation.
• A test harness is used to perform testing of the software under various environments.

Generic vs Specific Scaffolding:
Generic scaffolding involves creating reusable, general-purpose support structures that can be applied across multiple test cases or test suites.
Specific scaffolding involves creating custom or specialized support structures tailored to the requirements of individual test cases or test suites.

Test Oracles:
A test oracle is a mechanism used to determine whether the output of a system under test is correct or not.
• Software that applies a pass/fail criterion to a program execution is called a test oracle.

Comparison Based Oracle:
• A comparison-based oracle compares the expected output with the actual output of the program.
• A comparison component within the oracle compares the expected output, which is defined in the test case, with the actual output of the program.
• Based on this comparison, it reports the status of the test case as pass or fail.
• It is suitable for handwritten test cases and for small numbers of test cases.

Figure: A test harness with a comparison-based test oracle.
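As an illustration of scaffolding, the sketch below shows a minimal test driver and stub in Python. The names process_order and payment_gateway_stub are hypothetical stand-ins for a unit under test and a dependency it calls; they are not part of any specific system discussed in these slides.

    # Scaffolding sketch (hypothetical names): a stub replaces a real
    # dependency, and a driver controls execution of the unit under test.

    def payment_gateway_stub(amount):
        """Stub: stands in for a real payment service and always approves."""
        return {"approved": True, "amount": amount}

    def process_order(amount, gateway):
        """Unit under test: charges the given gateway and reports success."""
        result = gateway(amount)
        return "ORDER OK" if result["approved"] else "ORDER FAILED"

    def driver():
        """Driver: sets up inputs, runs the unit under test, reports the outcome."""
        outcome = process_order(100, payment_gateway_stub)
        print("process_order(100) ->", outcome)

    if __name__ == "__main__":
        driver()

Because the stub is under the tester's control, the unit can be exercised without the real payment service being available.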
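The following sketch shows, under the assumption of a very simple hand-built harness (not a specific tool), how a comparison-based oracle can be wired to the reverse-number example from the mutation analysis section. The expected outputs are written by hand in the test cases, the oracle compares them with the actual outputs, and the same test cases kill the mutant in which * was changed to +.

    # Comparison-based oracle sketch: expected outputs are written by hand
    # in the test cases and compared with the program's actual output.

    def reverse_number(n):           # original program under test
        rev = 0
        while n > 0:
            rev = rev * 10 + n % 10
            n = n // 10
        return rev

    def reverse_number_mutant(n):    # mutant: * replaced by +
        rev = 0
        while n > 0:
            rev = rev + 10 + n % 10
            n = n // 10
        return rev

    test_cases = [(123, 321), (40, 4), (5, 5)]   # (input, expected output)

    def run(program):
        for value, expected in test_cases:
            actual = program(value)
            verdict = "PASS" if actual == expected else "FAIL"   # the oracle
            print(f"{program.__name__}({value}) = {actual}, expected {expected}: {verdict}")

    run(reverse_number)          # every test case passes
    run(reverse_number_mutant)   # at least one test case fails, so the mutant is killed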
Self Checks as Oracle:
A program specification describes all correct program behaviors, so an oracle based on a specification need not be paired with a particular test case. Instead, the oracle can be incorporated into the program under test, so that it checks its own work.
• Here a component called a self-check is incorporated within the program; with the help of these self-checks we can determine whether a test case passes or fails.
• It is suitable for large and automatically generated test suites.
Figure: self-checks are embedded in the program.
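A minimal sketch of a self-check, using a small sorting routine as the program under test (the example is illustrative and not taken from the slides): instead of comparing against a hand-written expected output, the program checks general properties of its own result, so the same check works for any automatically generated input.

    import random
    from collections import Counter

    def insertion_sort(values):
        """Program under test: sorts a list of numbers into ascending order."""
        result = list(values)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key

        # Self-check (oracle embedded in the program): the output must be
        # ordered and must be a permutation of the input.
        assert all(result[k] <= result[k + 1] for k in range(len(result) - 1)), \
            "self-check failed: output is not sorted"
        assert Counter(result) == Counter(values), \
            "self-check failed: output is not a permutation of the input"
        return result

    # A property-based self-check needs no hand-written expected outputs,
    # so it scales to large numbers of automatically generated test cases.
    for _ in range(1000):
        insertion_sort([random.randint(-100, 100) for _ in range(20)])
    print("All self-checks passed.")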
Quality and Process:
• Quality refers to a software product that meets the requirements and expectations of its users.
• It is about ensuring that the software behaves correctly, reliably, and efficiently, without bugs or defects.
• Quality in software testing involves various aspects such as functionality, performance, usability, security, and reliability.
• Process refers to the series of steps, methods, and techniques followed to ensure that testing activities are carried out effectively and efficiently.
• It involves planning, designing, executing, and evaluating tests to verify and validate the software against its requirements.

Capture and Replay:
• To avoid continuous monitoring of the application by a human tester, we use the capture and replay method.
• Capture means recording the test cases as they are run manually, and replay means re-executing those recorded test cases automatically (a minimal sketch follows the quality team section below).

Risk Planning:
Risk is an inevitable part of every project, so risk planning must be a part of every plan. Risks cannot be eliminated, but they can be assessed, controlled, and monitored. Risk planning is like preparing for challenges that could affect the testing process or the quality of the software being tested. It is a way to think ahead and come up with strategies to deal with these problems before they happen.
Here is a simple breakdown of risk planning in software testing:
1. Identify risks: First, we think about what could possibly go wrong during testing. This might include things like running out of time, not having enough people to do the testing, or finding critical bugs late in the process.
2. Analyze risks: Next, we look at each problem and figure out how likely it is to happen and how bad it would be if it did. Some problems might be more serious than others, so we need to prioritize them.
3. Plan for risks: Once we know what could go wrong and how bad it could be, we come up with ways to deal with each problem. This might involve setting aside extra time or resources, creating backup plans, or finding ways to reduce the chances of the problem happening in the first place.
4. Communicate: It is important to keep everyone involved in the testing process informed about the potential risks and our plans to deal with them. This way, everyone knows what to watch out for and what to do if something goes wrong.

The Quality Team:
• The quality team in software testing, often referred to as the QA (Quality Assurance) team, plays a crucial role in ensuring that software products meet the required standards of quality, reliability, and functionality before they are released to customers.
• The QA team is responsible for assessing and validating the software through various testing techniques and methodologies to identify defects, inconsistencies, and areas for improvement.

Responsibilities and Functions of the Quality Team:
• The QA team collaborates with stakeholders to develop test plans and strategies that outline the scope, objectives, and approach for testing the software.
• QA engineers design and execute test cases to verify that the software meets its specified requirements and functions correctly under various conditions.
• QA testers identify defects, bugs, and inconsistencies in the software through testing and analysis.
• QA engineers track and measure various quality metrics to assess the effectiveness of testing activities and the overall quality of the software.
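A minimal sketch of capture and replay, again using the reverse-number function as the program under test. The file name tests.json and the helper names capture and replay are hypothetical, chosen only for this illustration: during capture, the inputs typed by the tester and the outputs observed are recorded; during replay, the recorded inputs are fed back automatically and the new outputs are compared with the recorded ones.

    import json

    def reverse_number(n):   # program under test
        rev = 0
        while n > 0:
            rev = rev * 10 + n % 10
            n = n // 10
        return rev

    def capture(log_file="tests.json"):
        """Capture phase: record manually entered inputs and observed outputs."""
        recorded = []
        while True:
            text = input("Enter a number (blank to stop): ")
            if not text:
                break
            value = int(text)
            output = reverse_number(value)
            print("Output:", output)
            recorded.append({"input": value, "output": output})
        with open(log_file, "w") as f:
            json.dump(recorded, f)

    def replay(log_file="tests.json"):
        """Replay phase: re-execute the recorded inputs and compare outputs."""
        with open(log_file) as f:
            recorded = json.load(f)
        for case in recorded:
            actual = reverse_number(case["input"])
            verdict = "PASS" if actual == case["output"] else "FAIL"
            print(f"input {case['input']}: recorded {case['output']}, actual {actual} -> {verdict}")

    # Typical use: run capture() once during a manual session, then run
    # replay() after every change to re-check the recorded behavior.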
Analysis and Test Plan:
While the format of an analysis and test strategy varies from company to company, the structure of an analysis and test plan is more standardized.
• The overall quality plan usually comprises several individual plans of limited scope.
• Each test and analysis plan should indicate the items to be verified through analysis or testing.
• These may include specifications or documents to be inspected, or interface specifications to undergo consistency analysis. They may refer to the whole system or to part of it, such as a subsystem or a set of units.
• Where the project plan includes planned development increments, the analysis and test plan indicates the applicable versions of the items to be verified.

Test Design Specification Document:
• Test design specification documents in software testing outline the detailed plan for executing testing activities, specifying how test cases will be designed, implemented, and executed to verify the functionality, performance, and other attributes of the software being tested.
• These documents serve as a blueprint for testing and provide a structured framework for organizing and executing testing efforts.

Components typically included in test design specification documents:
Test Items: This section identifies the specific components or features of the software that will be tested. It may include a list of modules, functions, interfaces, or other elements targeted for testing.
Test Techniques: This section describes the testing techniques and methodologies that will be used to design test cases. It may include techniques such as black-box testing, white-box testing, boundary value analysis, equivalence partitioning, and more.
Test Case Specifications: This section provides detailed specifications for individual test cases. Each test case specification typically includes the following information:
• Test Case ID
• Test Case Description
• Preconditions
• Test Steps
• Expected Results
• Test Execution Status

Test and Analysis Report:
A test and analysis report in software testing is a document that summarizes the results of testing activities and provides insights into the quality, reliability, and performance of the software being tested. This report serves as a comprehensive record of testing efforts and communicates key findings, issues, and recommendations to stakeholders, including project managers, developers, and other relevant parties.

Thank You