Software Testing Imp

Defect severity determines the defect's criticality, whereas defect priority determines the urgency of its repair.

1. High Severity & Low Priority: Suppose an application generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations, and there is a fault while calculating the YEARLY report. This is a high-severity fault but low priority, because it can be fixed in the next release as a change request.
2. Low Severity & High Priority: Suppose there is a spelling mistake or content issue on the homepage of the BT.com website, which gets lakhs of hits daily all over the UK. Although this fault does not affect the website or other functionality, considering the status and popularity of the website in a competitive market it is a high-priority fault.
3. High Severity & High Priority: Suppose the same banking application has a fault while calculating the WEEKLY report. This is a high-severity and high-priority fault, because it will hamper the functionality of the application within a week; it should be fixed urgently.
4. Low Severity & Low Priority: Suppose there is a spelling mistake on pages that get very few hits throughout the month. This fault can be considered low severity and low priority.

Contents

Software Testing - An Introduction
- What is 'Software Testing'? What is the need of testing?
- What are 5 common problems in the software development process? What are 5 common solutions to software development problems?
- Why does software have bugs? What is the solution for these problems? OR How can you prevent defects at the development stage?
- What is the role of a "Tester"? What makes a good Software Test engineer? What makes a good Software QA engineer?

Quality Assurance
- What is software quality? What is 'Software Quality Assurance'?
- What is 'good code'? What is 'good design'?
- How can new Software QA processes be introduced in an existing organization?

Verification & Validation
- Verification and Verification Techniques
- Validation and Validation Techniques
- Comparison between Verification & Validation

Testing Techniques & Types of Testing
- What are the different types of Testing? Please elaborate on each type.
- Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing Techniques

Life Cycles & Models
- What is the 'Software development life cycle'? Or Explain SDLC
- Elaborate on Testing Life Cycle
- Bug Life Cycle or Defect Life Cycle, or what do you do after you find a defect
- When to start testing OR Entry Criteria for testing; When to stop testing or Exit Criteria
- V-Model

Deliverables of Testing
- Test Strategy, Test Plan, Test Case, Test Scenario
- Difference between Test Strategy & Test Plan

Common Terminology
- What is Priority? What is Severity? Please elaborate.
- Traceability Matrix; What is a bug, defect, issue, error?
- Risk Analysis & Risk Identification; Hot Fix; What is Defect Removal Efficiency?
- Scenario-based testing; What is the difference between Sanity & Smoke testing?
- What is the exact difference between a product and a project? Give an example.
- What if the software is so buggy it can't really be tested at all? What if there isn't enough time for thorough testing? What if the project isn't big enough to justify extensive testing?
- Will automated testing tools make testing easier? What's the best way to choose a test automation tool?
- Why is it often hard for organizations to get serious about quality assurance? Who is responsible for risk management? Who should decide when software is ready to be released?
- What can be done if requirements are changing continuously? What if the application has functionality that wasn't in the requirements?
- How can Software QA processes be implemented without reducing productivity? How can it be determined if a test environment is appropriate?
- What's the best approach to software test estimation? How can World Wide Web sites be tested?

Processes & Standards
- What is 'configuration management'? What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

### Section 1 - Software Testing An Introduction ###

What is 'Software Testing'?


Testing involves operation of a system or application under controlled (simulated) conditions and evaluation of the results. Example: if the user is in interface A of the application and does B, then C should happen. The controlled conditions should include both normal and abnormal conditions (positive and negative). Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'Verification'.

Why is there a need of testing? OR Why is there a need for 'independent/separate testing'?
Before software testing existed as a separate activity or project, the testing process still existed, but the developers did it themselves at development time. The fact is that if you make something, you hardly feel that there can be something wrong with what you have developed. It is a common trait of human nature: we feel there is no problem in a system we designed ourselves, that it is perfectly functional and fully working. So the hidden bugs, errors or problems of the system remain hidden, and they raise their heads when the system goes into production. On the other hand, when one person starts checking something made by another person, there is a very high chance that the checker/observer will find some problem with the system, even if the problem is only a misspelled word. Even though this is an unflattering aspect of human behavior, it has been used for the benefit of software projects: when you develop something, you hand it over to be checked/tested, to find the problems that never arose during development. If we minimize the problems with the system we developed, it is beneficial for us; our client will be happy if the system works without any problem, and it will generate more revenue for us. That is why we need testing.

What are 5 common problems in the software development process?
Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
Unrealistic schedule - if too much work is crammed into too little time, problems are inevitable.
Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.
Featurism - requests to pile on new features after development is underway; extremely common.
Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems can be expected.


What are 5 common solutions to software development problems?
Solid requirements - clear, complete, detailed, attainable, testable requirements that are agreed to by all players. Continuous close coordination with customers/end-users is necessary to ensure that changing/emerging requirements are understood.
Realistic schedules - adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation should be provided.
Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing could include static code analysis/testing, unit testing by developers, automated post-build testing, etc.
Stick to initial requirements where feasible - defend against excessive changes and additions once development has begun, and explain the consequences of constant change to end users. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations.
Communication - conduct walkthroughs and inspections when appropriate; make extensive use of group communication tools (groupware, bug-tracking tools, change management tools, etc.) to ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or continuous communication with end-users if possible to clarify expectations.

Why does software have bugs?
Miscommunication - the requirements are not clearly explained by the user or are misunderstood by the developer, leading to gaps.
Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
Programming errors - programmers are bound to make mistakes while coding, causing defects.
Changing requirements - if there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors.
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs.


Software development tools - Visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

What is the solution for these problems? OR How can you prevent defects at the development stage?
Solid requirements - requirements should be clear, complete, detailed, cohesive, attainable and testable. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary.
Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.
Adequate testing - start testing early on, re-test after fixes or changes, and plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities.
Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations; this will give them a higher comfort level with their requirements decisions and minimize excessive changes later on.
Communication - conduct walkthroughs and inspections when appropriate; make extensive use of group communication tools (e-mail, groupware, networked bug-tracking tools, change management tools, intranet capabilities, etc.). Ensure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.

What is the role of a "Tester"?
A Tester's focus is to demonstrate an application's weaknesses: to find test cases or configurations that give unexpected results, or to show the software breaking. Typical responsibilities include:
Planning and developing test cases - writing test plans and documentation, prioritizing the testing based on assessing the risks, setting up test data, organizing test teams.
Setting up the test environment - an application will be tested using multiple combinations of hardware and software and under different conditions. Setting up the prerequisites for the test cases (test data) is the task of testers.


Writing test harnesses and scripts - developing test applications that call the API directly in order to automate the test cases; writing scripts to simulate user interactions.
Planning, writing and running load tests - non-functional tests that monitor an application's scalability and performance, looking at how the application behaves under the stress of a large number of users.
Writing bug reports - communicating the exact steps required to reproduce unexpected behavior on a particular configuration, and reporting test results to the development team.

What makes a good test engineer?
A good test engineer should have a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers. An ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developer's point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess the high-risk areas of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see what's missing is important for inspections and reviews.

### Section 1 - Software Testing An Introduction Ends Here ###

### Section 2 - Quality Assurance ###

What is 'Software Quality Assurance'?

Software Quality Assurance involves monitoring and improving the entire software development process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'Prevention'.

What is software quality?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It depends on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, and the development organization's management, users and testers. Each type of 'customer' will have their own view on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What is 'good code'?
'Good code' is code that works, is bug-free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what is best, or what is too many or too few rules. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.

What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; that is robust with sufficient error-handling and status-logging capability; and that works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help. Some common rules of thumb include: the program should act in a way that least surprises the user; it should always be evident to the user what can be done next and how to exit; the program shouldn't let the users do something stupid without warning them.

How can new Software QA processes be introduced in an existing organization?


A lot depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.

### Section 2 - Quality Assurance Ends Here ###

### Section 3 - Verification & Validation ###


What is Verification?
The standard definition of Verification is: "Are we building the product RIGHT?" i.e. Verification is a process that ensures the software product is being developed in the right way. The software should conform to its predefined specifications. As the product development goes through different stages, an analysis is done to ensure that all required specifications are met. The Verification part of the Verification and Validation model comes before Validation, and it incorporates software inspections, reviews, audits, walkthroughs, buddy checks, etc. During Verification, the work product (the ready part of the software being developed and the various documentation) is reviewed/examined personally by one or more persons in order to find and point out the defects in it. This process helps in the prevention of potential bugs, which could cause the failure of the project.

A few terms involved in Verification:
Inspection: Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented to the inspection team, whose members carry different interpretations of the presentation. The bugs that are detected during the inspection are communicated to the next level so they can be taken care of.
Walkthroughs: A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to make them familiar with it. Even though walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing or communication purposes.
Buddy Checks: This is the simplest type of review activity used to find bugs in a work product during verification. In a buddy check, one person goes through the documents prepared by another person in order to find out if that person has made mistakes, i.e. to find bugs which the author couldn't find previously.

The activities involved in the Verification process are: requirement specification verification, functional design verification, internal/system design verification and code verification. Each activity makes sure that the product is developed the right way and that every requirement, specification, design, piece of code, etc. is verified.

What is Validation?


The standard definition of Validation is: "Are we building the RIGHT product?" i.e. whatever software product is being developed, it should do what the user expects it to do. The software product should satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements. The Validation and Verification processes go hand in hand, but visibly the Validation process starts after the Verification process ends (after coding of the product ends). Each Verification activity (such as requirement specification verification, functional design verification, etc.) has its corresponding Validation activity (such as functional validation/testing, code validation/testing, system/integration validation, etc.). All types of testing methods are basically carried out during the Validation process. Test plans, test suites and test cases are developed and used during the various phases of the Validation process.

The activities involved in the Validation process are as follows:
Code Validation/Testing: Developers as well as testers do code validation. Unit code validation, or unit testing, is a type of testing which the developers conduct in order to find any bugs in the code unit/module developed by them. Code testing other than unit testing can be done by testers or developers.
Integration Validation/Testing: Integration testing is carried out in order to find out whether different (two or more) units/modules coordinate properly. This test helps in finding out if there is any defect in the interface between different modules.
Functional Validation/Testing: This type of testing is carried out in order to find out whether the system meets the functional requirements. In this type of testing, the system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.
User Acceptance Testing or System Validation: In this type of testing, the developed product is handed over to the users/paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements. As the users/paid testers use the software, it may happen that bugs that are yet undiscovered come up; these are communicated to the developers to be fixed. This helps in the improvement of the final product.


Comparison between Verification & Validation

Verification:
- Am I building the product right?
- The review of interim work steps and interim deliverables during a project to ensure they are acceptable; determining whether the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.
- Am I accessing the data right (in the right place, in the right way)?
- Low-level activity.
- Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.
- Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.

Validation:
- Am I building the right product?
- Determining whether the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs; it is traditional and is performed at the end of the project.
- Am I accessing the right data (in terms of the data required to satisfy the requirement)?
- High-level activity.
- Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
- Determination of the correctness of the final software product by a development project with respect to the user needs and requirements.

### Section 3 - Verification & Validation Ends Here ###

### Section 4 - Testing Techniques & Types of Testing ###


What are the different types of Testing? Please elaborate on each type.

Black box testing - a testing technique where the internal code of the application being tested is not known by the tester. In a black box test, the tester only knows the valid inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any further knowledge of the program other than its specifications. Various testing types that fall under the black box testing strategy are: functional testing, regression testing, system testing, interface testing, user acceptance testing, sanity testing, smoke testing, load testing, volume testing, stress testing, usability testing, ad-hoc testing, exploratory testing, recovery testing, alpha testing, beta testing, etc.

Functional testing - this type of testing is carried out in order to find out whether the system meets the functional requirements. The system is validated for its functional behavior. Functional testing does not deal with the internal coding of the project; instead, it checks whether the system behaves as per the expectations.

Regression testing - testing after fixes or modifications of the software is called regression testing. When a defect is fixed or some new functionality is implemented, impact analysis is done and test cases are re-executed in order to check that the previous functionality of the application is working fine and that the new changes have not introduced any new bugs. Automated testing tools can be especially useful for this type of testing.

System testing - system testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase, where the focus is to have an almost destructive attitude and test not only the design, but also the behavior and the believed expectations of the customer.

Integration testing - testing of combined parts of an application to determine whether they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Acceptance testing - the developed product is handed over to the users/paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements.

Smoke testing - smoke testing is done by developers before a build is released, or by testers before accepting a build for further testing. It is also known as a build verification test.

Sanity testing - a sanity test checks the basic functions of an application to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing.

Load testing - the application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing tests the expected usage of a software program by simulating multiple users accessing the program's services concurrently.

Volume testing - volume testing refers to testing a software application with a certain data volume. This volume can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will grow your database to that size and then test the application's performance on it.

Stress testing - stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks. Example: a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of the web site during peak loads.

Usability testing - testing of a system against the usability guidelines set by the customer, or testing the system for its user-friendliness. The common points under usability are: Learnability: how easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: once users have learned the design, how quickly can they perform tasks? Memorability: when users return to the design after a period of not using it, how easily can they re-establish proficiency? Errors: how many errors do users make, how severe are these errors, and how easily can they recover from them? Satisfaction: how pleasant is it to use the design?

Ad-hoc testing - ad-hoc testing is a commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered. Ad-hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad-hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks to find bugs with any means that seem appropriate.

Exploratory testing - this testing is similar to ad-hoc testing and is done in order to learn/explore the application.


Recovery testing - recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.

Alpha testing - in-house developers often test the software in what is known as 'alpha' testing, which is often performed under a debugger or with hardware-assisted debugging to catch bugs quickly. The software can then be handed over to testing staff for additional inspection in an environment similar to how it was intended to be used.

Beta testing - the software is distributed as a beta version to users, and the users test the application at their sites. As the users explore the software, any exception/defect that occurs is reported to the developers.

White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing - the most 'micro' scale of testing; used to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses. (A short unit-test sketch follows after this list of types.)

Other types of testing include:
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
Compatibility testing - testing how well the software performs in a particular hardware/software/operating system/network environment.
Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of the software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.
Mutation testing - a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected. Proper implementation requires large computational resources.
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
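As a concrete illustration of the unit-testing level just described, here is a minimal sketch of a developer-written white-box unit test covering normal, boundary and abnormal conditions. The `calculate_interest` function, its signature and its expected values are hypothetical, invented only for this example; the document prescribes no particular framework, so Python's built-in `unittest` module is assumed.

```python
import unittest

def calculate_interest(principal, rate, years):
    """Hypothetical unit under test: simple interest calculation."""
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")
    return principal * rate * years / 100.0

class CalculateInterestTest(unittest.TestCase):
    def test_typical_values(self):
        # Positive (normal) condition: known inputs, known expected output.
        self.assertAlmostEqual(calculate_interest(1000, 5, 2), 100.0)

    def test_zero_years(self):
        # Boundary condition: zero duration earns no interest.
        self.assertEqual(calculate_interest(1000, 5, 0), 0.0)

    def test_negative_input_rejected(self):
        # Negative (abnormal) condition: invalid input must raise an error.
        with self.assertRaises(ValueError):
            calculate_interest(-1, 5, 2)

if __name__ == "__main__":
    unittest.main()
```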

Testing techniques

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined. (Example: set A = {1, 4, 5, 7, 9, 12} means the members of set A are 1, 4, 5, 7, 9 and 12; of the candidates R, B, P and Z, only R is a member of the set {A, C, D, F, G, R, T, W, Y}, because only R is contained in that set.)
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined. (Boolean means true or false.)

Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing solely on input conditions, BVA also derives test cases from the output domain. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b, respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers, and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
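To make the two techniques above concrete, the following sketch (not part of the original text) derives candidate test inputs for a field that accepts integers in a range [a, b]: one representative value per equivalence class, plus boundary values at and around each edge. The 'age field from 18 to 60' in the usage example is an assumed illustration.

```python
def derive_test_values(a, b):
    """Derive candidate test inputs for an integer field valid in [a, b].

    Equivalence partitioning: one representative per class
    (below range = invalid, inside range = valid, above range = invalid).
    Boundary value analysis: values at, just below and just above each edge.
    """
    equivalence_values = {
        "invalid_below": a - 10,        # representative of the class x < a
        "valid_inside": (a + b) // 2,   # representative of the class a <= x <= b
        "invalid_above": b + 10,        # representative of the class x > b
    }
    boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]
    return equivalence_values, boundary_values

if __name__ == "__main__":
    # Example: an age field that accepts values from 18 to 60.
    ep, bva = derive_test_values(18, 60)
    print("Equivalence partitioning picks:", ep)
    print("Boundary value analysis picks:", bva)
```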

Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
(A small illustrative sketch of steps 3 and 4 appears after the next question.)

Does every software project need testers?
While all projects will benefit from testing, some projects may not require independent test staff to succeed. Which projects may not need independent test staff? The answer depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers, and other factors. For instance, if the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed. In some cases an IT organization may be too small or too new to have a testing staff even if the situation calls for it. In these circumstances it may be appropriate to instead use contractors or outsourcing, or to adjust the project management and development approach (by switching to more senior developers and agile test-first development, for example). Inexperienced managers sometimes gamble on the success of a project by skipping thorough testing or having programmers do post-development functional testing of their own work - a decidedly high-risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. As in any business, the use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives. For example, programmers typically have the perspective of 'what are the technical issues in making this functionality work?' A test engineer typically has the perspective of 'what might go wrong with this functionality, and how can we ensure it meets expectations?' Technical people who can be highly effective in approaching tasks from both of those perspectives are rare, which is why, sooner or later, organizations bring in test specialists.
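Returning to the cause-effect graphing steps listed above, steps 3 and 4 (build the decision table, then turn each rule into a test case) can be sketched as follows. The login-related causes and effects are hypothetical, chosen only to show the mechanics, not taken from the original text.

```python
from itertools import product

# Causes (input conditions) for a hypothetical login module.
CAUSES = ["valid_username", "valid_password"]

def effects(valid_username, valid_password):
    """Effects (actions) for one combination of causes."""
    if valid_username and valid_password:
        return "grant access"
    if valid_username and not valid_password:
        return "show 'wrong password' message"
    return "show 'unknown user' message"

# Step 3: build the decision table - one rule per combination of causes.
decision_table = [
    dict(zip(CAUSES, combo), expected=effects(*combo))
    for combo in product([True, False], repeat=len(CAUSES))
]

# Step 4: convert each decision-table rule into a test case.
for i, rule in enumerate(decision_table, start=1):
    print(f"TC-{i}: inputs={{'valid_username': {rule['valid_username']}, "
          f"'valid_password': {rule['valid_password']}}} -> expected: {rule['expected']}")
```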

### Section 4 - Testing Techniques & Types of Testing Ends Here ###

### Section 5 - Life Cycles & Models ###

What is the Software Development Life Cycle? OR Elaborate on SDLC
The Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (the original SDLC method), rapid application development (RAD), joint application development (JAD), the fountain model and the spiral model. The classic Waterfall model, the first SDLC method, describes the various phases involved in development.

Briefly on different Phases:


Feasibility - the feasibility study is used to determine whether the project should get the go-ahead. If the project is to proceed, the feasibility study will produce a project plan and budget estimates for the future stages of development.

Requirement Analysis and Design - analysis gathers the requirements for the system. This stage includes a detailed study of the business needs of the organization. Options for changing the business process may be considered. Design focuses on high-level design (what programs are needed and how they are going to interact), low-level design (how the individual programs are going to work), interface design (what the interfaces are going to look like) and data design (what data will be required). During these phases, the software's overall structure is defined. Analysis and design are very crucial in the whole development cycle: any glitch in the design phase could be very expensive to solve in a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

Implementation - in this phase the designs are translated into code. Computer programs are written using a conventional programming language or an application generator. Programming tools like compilers, interpreters, and debuggers are used to generate the code. Different high-level programming languages like C, C++, Pascal, and Java are used for coding. The right programming language is chosen with respect to the type of application.

Testing - in this phase the system is tested. Normally programs are written as a series of individual modules, and these are subjected to separate and detailed tests. The system is then tested as a whole: the separate modules are brought together and tested as a complete system. The system is tested to ensure that interfaces between modules work (integration testing), that the system works on the intended platform and with the expected volume of data (volume testing), and that the system does what the user requires (acceptance/beta testing).

Maintenance - inevitably the system will need maintenance. Software will definitely undergo change once it is delivered to the customer. There are many reasons for change: change could happen because of some unexpected input values into the system; in addition, changes in the system could directly affect the software's operation. The software should be developed to accommodate changes that could happen during the post-implementation period.


Elaborate on the Testing Life Cycle
The software testing life cycle identifies what test activities to carry out and when to accomplish those test activities. Even though testing differs between organizations, there is a common testing life cycle. The Software Testing Life Cycle consists of the following (generic) phases: Test Planning, Test Analysis, Test Design, Construction and Verification, Testing Cycles, Final Testing and Implementation, and Post Implementation. Software testing has its own life cycle that intersects with every stage of the SDLC. A basic requirement of the software testing life cycle is to control/deal with software testing - manual, automated and performance.

Test Planning - this is the phase where the project manager has to decide what things need to be tested, whether the appropriate budget is available, etc. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point. Activities at this stage include preparation of a high-level test plan. (According to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities. The plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan.) Almost all of the activities done during this stage are included in this software test plan and revolve around a test plan.


Test Analysis - once the test plan is made and decided upon, the next step is to delve a little more into the project and decide what types of testing should be carried out at the different stages of the SDLC, whether we need or plan to automate (and if yes, when the appropriate time to automate is), and what type of specific documentation is needed for testing. Proper and regular meetings should be held between the testing teams, project managers, development teams and business analysts to check the progress of things; this gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and further helps in refining the testing strategy created earlier. We start creating test case formats and the test cases themselves. In this stage we need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of the documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define areas for stress and performance testing.

Test Design - the test plans and cases which were developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage the risk assessment criteria are developed. If you have thought of automation, then you have to select which test cases to automate and begin writing scripts for them. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.

Construction and Verification - in this phase we have to complete all the test plans and test cases, complete the scripting of the automated test cases, and complete the stress and performance testing plans. We have to support the development team in their unit testing phase, and obviously bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles - in this phase we have to complete testing cycles until the test cases are executed without errors or a predefined condition is reached. Run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> bug fixing --> retesting (test cycle 2, test cycle 3, ...).

Final Testing and Implementation - in this phase we execute the remaining stress and performance test cases, complete/update the documentation for testing, and provide and complete the different metrics for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.

Post Implementation


In this phase, the testing process is evaluated and the lessons learnt from it are documented. An approach to prevent similar problems in future projects is identified, and plans are created to improve the processes. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done and test machines are restored to baselines in this stage. The table below describes the STLC in brief.

Software Testing Life Cycle (in brief)

Planning - Activities: create high-level test plan. Outcome: test plan, refined specification.
Analysis - Activities: create detailed test plan, functional validation matrix, test cases. Outcome: revised test plan, functional validation matrix, test cases.
Design - Activities: test cases are revised; select which test cases to automate. Outcome: revised test cases, test data sets, risk assessment sheet.
Construction - Activities: scripting of the test cases to automate. Outcome: test procedures/scripts, drivers, test results, bug reports.
Testing cycles - Activities: complete testing cycles. Outcome: test results, bug reports.
Final testing - Activities: execute remaining stress and performance tests, complete documentation. Outcome: test results and different metrics on test efforts.
Post implementation - Activities: evaluate testing processes. Outcome: plan for improvement of the testing process.
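The functional validation matrix mentioned in the Analysis and Design rows above is essentially a requirements-to-test-case mapping. A minimal sketch follows; the requirement and test case IDs are invented purely for illustration.

```python
# Requirement -> test cases that cover it.  IDs are illustrative only.
traceability_matrix = {
    "REQ-001 User can log in":         ["TC-01", "TC-02"],
    "REQ-002 User can reset password": ["TC-03"],
    "REQ-003 Weekly report generated": [],   # gap: not yet covered by any test case
}

def uncovered_requirements(matrix):
    """Return the requirements that no test case covers - the gaps to close."""
    return [req for req, cases in matrix.items() if not cases]

print("Uncovered requirements:", uncovered_requirements(traceability_matrix))
```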


Bug Life Cycle or Defect Life Cycle, or what do you do after you find a defect?
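The defect life cycle diagram from the original document is not reproduced here. As a stand-in, the sketch below encodes one commonly used set of states and transitions (New, Assigned, Open, Fixed, Retest, Reopened, Deferred, Rejected, Closed); exact state names and transitions vary between organizations and bug-tracking tools, so treat this as an assumption, not a standard.

```python
# One common defect life cycle, expressed as allowed state transitions.
# State names vary between organizations and bug-tracking tools.
DEFECT_TRANSITIONS = {
    "New":      ["Assigned", "Rejected", "Deferred"],
    "Assigned": ["Open"],
    "Open":     ["Fixed", "Rejected", "Deferred"],
    "Fixed":    ["Retest"],
    "Retest":   ["Closed", "Reopened"],
    "Reopened": ["Assigned"],
    "Deferred": ["Assigned"],   # picked up again in a later release
    "Rejected": [],             # terminal: not considered a valid defect
    "Closed":   [],             # terminal: fix verified by the tester
}

def move(defect_state, new_state):
    """Validate a status change against the life cycle above."""
    if new_state not in DEFECT_TRANSITIONS[defect_state]:
        raise ValueError(f"Illegal transition {defect_state} -> {new_state}")
    return new_state

# Example: a defect found in testing, fixed, retested and closed.
state = "New"
for step in ["Assigned", "Open", "Fixed", "Retest", "Closed"]:
    state = move(state, step)
    print(state)
```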



What steps are needed to develop and run software tests? OR What are the Entry Criteria for testing?
The following are some of the steps to consider:
- Obtain requirements, functional design and internal design specifications, and other available/necessary information
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk and more important aspects, set priorities, and determine the scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, security, load, usability tests, etc.
- Determine test environment requirements (hardware, software, configuration, versions, communications, etc.)
- Determine testware requirements (automation tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Prepare test plan document(s) and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare the test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed

Exit Criteria for testing OR When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point


- The rate at which bugs can be found is too small
- The beta or alpha testing period ends

V-Model
The V-model is a software development process which demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

The V Model


Requirements analysis - in this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the required system has to perform; it does not determine how the software will be designed or built. Usually the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements, etc., as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.

System design - system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue; a resolution is found and the user requirements document is edited accordingly. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. Other technical documentation, like entity diagrams and the data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture design - this phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all of the requirements; the high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Module design - this phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document or program specification will contain a detailed functional logic of the module in pseudo-code, the database tables with all elements (including their type and size), all interface details with complete API references, all dependency issues, error message listings, and complete inputs and outputs for the module. The unit test design is developed in this stage.


Coding - the actual coding of the application is done in this phase.

Unit testing - in the V-model of software development, unit testing implies the first stage of the dynamic testing process. It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase. This may be carried out by software testers, software developers or both.

Integration testing - in integration testing the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box, as the code is not directly checked for errors. It is done using the integration test design prepared during the architecture design phase. Integration testing is generally conducted by software testers.

System testing - system testing compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing.

User acceptance testing - the software is tested in the "real world" by the actual end users. Acceptance testing is performed to determine whether a system satisfies its acceptance criteria or not, and it enables the customer to decide whether to accept the system.

Benefits of the V-model - the V-model deploys a well-structured method in which each phase can be implemented using the detailed documentation of the previous phase. Testing activities like test design start at the beginning of the project, well before coding, and this saves a huge amount of project time.

### Section 5 - Life Cycles & Models Ends Here ###


### Section 6 - Deliverables of Testing ###

What is a Test Strategy?
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:
- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.


What is a 'Test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment setup and configuration issues
- Software migration processes
- Software change management processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.


- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
- Relevant proprietary, classified, security, and licensing issues
- Open issues
- Appendix - glossary, acronyms, etc.

What is a Test case?
A test case describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly. A test case may contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The level of detail may vary significantly depending on the organization and project context. The process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible. (A minimal sketch of a test case expressed as an automated check appears below, after the test scenario discussion.)

What is a Test Scenario?
The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios can be independent tests, or a series of tests that follow each other, where each is dependent on the output of the previous one. Test scenarios are prepared by reviewing functional requirements and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios, and also execute them. It is the test team that, with the assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios, and may cover multiple test scenarios.
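To make the test case idea concrete, here is a minimal illustrative sketch of a single test case expressed as an automated check in Python's pytest style. The function under test (calculate_interest), the test identifiers, and the expected values are hypothetical placeholders, not taken from any real application.

```python
import pytest

# Hypothetical function under test; stands in for whatever feature the test case targets.
def calculate_interest(principal: float, annual_rate: float, years: int) -> float:
    """Simple-interest calculation used here only as an example."""
    return principal * annual_rate * years

def test_simple_interest_basic():
    # Test case TC-001 "Simple interest for a one-year deposit"
    # Objective: verify the interest calculation for typical input.
    # Input data: principal=1000, rate=5%, period=1 year; expected result: 50.0
    assert calculate_interest(1000.0, 0.05, 1) == pytest.approx(50.0)

def test_simple_interest_zero_period():
    # Test case TC-002 "Zero-length period"
    # A boundary-value style check: zero years should yield zero interest.
    assert calculate_interest(1000.0, 0.05, 0) == pytest.approx(0.0)
```

Each test function maps to one test case (identifier, objective, inputs, and expected result captured in the comments); a test scenario would then be a sequence of such cases covering an end-to-end business flow.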


Difference between Test Strategy & Test Plan


### Section 6 - Deliverables of Testing Ends Here ###


Section 7 - Common Terminology

What is Priority? What is Severity? Please elaborate.
Priority - Priority is the order in which the developer has to fix the bug. The available priorities range from P1 (most important) to P5 (least important).
Severity - Severity is how seriously the bug is impacting the application. Typical severity levels are:
- Show Stopper - blocks development and/or testing work
- Critical - crashes, loss of data, severe memory leak
- Major - major loss of function
- Normal - regular issue, some loss of functionality under specific circumstances
- Minor - minor loss of function, or other problem where an easy workaround is present
- Trivial - cosmetic problem like misspelled words or misaligned text

Priority and Severity Examples
- High Priority & High Severity: A show-stopper error in the basic functionality of the application. (E.g. a site maintains student details; if saving a record fails, this is a high priority and high severity bug.)
- High Priority & Low Severity: A spelling mistake on the cover page, heading, or title of an application.
- High Severity & Low Priority: The application generates a show-stopper error on a link or report that is rarely used by the end user.
- Low Priority & Low Severity: Any cosmetic or spelling issue within a paragraph or a report (not on the cover page, heading, or title).
(A small sketch of how these levels might be modelled in code follows.)
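As an illustration of how these classifications might be modelled in a simple defect-tracking script, here is a minimal hedged sketch in Python. The enum members mirror the levels described above; the suggest_priority helper and its triage rule are purely hypothetical and not taken from any particular bug tracker.

```python
from enum import Enum

class Severity(Enum):
    SHOW_STOPPER = 1   # blocks development and/or testing work
    CRITICAL = 2       # crashes, loss of data, severe memory leak
    MAJOR = 3          # major loss of function
    NORMAL = 4         # some loss of functionality under specific circumstances
    MINOR = 5          # easy workaround is present
    TRIVIAL = 6        # cosmetic problem

class Priority(Enum):
    P1 = 1  # most important
    P2 = 2
    P3 = 3
    P4 = 4
    P5 = 5  # least important

def suggest_priority(severity: Severity, visible_to_users: bool) -> Priority:
    """Hypothetical triage rule: severity drives priority, but high visibility
    (e.g. a typo on the home page) can raise the priority of a low-severity bug."""
    if severity in (Severity.SHOW_STOPPER, Severity.CRITICAL):
        return Priority.P1
    if visible_to_users:
        return Priority.P2
    return Priority.P4

# Example: a cosmetic issue on the landing page -> low severity but raised priority.
print(suggest_priority(Severity.TRIVIAL, visible_to_users=True))  # Priority.P2
```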


Traceability Matrix
A traceability matrix is created by associating requirements with the test cases/scenarios that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement.

Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. Traceability ensures completeness: that all lower-level requirements come from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements. Traceability is also used to manage change and provides the basis for test planning.

Sample Traceability Matrix
A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it. The examples show forward and backward tracing between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.


The columns of the sample matrix are:

Unique No. | Requirement | Source of Requirement | Software Reqs. Spec / Functional Req. Doc. | Design Spec. | Program Module | Test Spec. | Test Case(s) | Successful Test Verification | Modification of Requirement | Remarks

Rows are grouped under the project objective they trace back to (e.g. "Objective 1:"), with one row per requirement.

Description of Matrix Fields
Develop a matrix to trace the requirements back to the project objectives identified in the Project Plan and forward through the remainder of the project life cycle stages. Place a copy of the matrix in the Project File. Expand the matrix in each stage to show traceability of work products to the requirements and vice versa. The requirements traceability matrix should contain the following fields:
- A unique identification number containing the general category of the requirement (e.g., SYSADM) and a number assigned in ascending order (e.g., 1.0; 1.1; 1.2).
- The requirement statement.
- Requirement source (Conference; Configuration Control Board; Task Assignment, etc.).
- Software Requirements Specification/Functional Requirements Document paragraph number containing the requirement.
- Design Specification paragraph number containing the requirement.
- Program Module containing the requirement.
- Test Specification containing the requirement test.
- Test Case number(s) where the requirement is to be tested (optional).
- Verification of successful testing of the requirement.
- Modification field. If the requirement was changed, eliminated, or replaced, indicate the disposition and authority for the modification.
- Remarks.
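As a rough illustration of how such a matrix can be kept in a machine-readable form, here is a small hedged Python sketch that records requirement-to-test-case links and reports requirements with no covering test. The requirement and test identifiers are made-up examples, not taken from any real project.

```python
import csv
from io import StringIO

# Hypothetical requirement -> test case associations (forward traceability).
traceability = {
    "SYSADM-1.0": {"requirement": "Admin can create a user account",
                   "test_cases": ["TC-001", "TC-002"]},
    "SYSADM-1.1": {"requirement": "Admin can disable a user account",
                   "test_cases": ["TC-003"]},
    "SYSADM-1.2": {"requirement": "Audit log records account changes",
                   "test_cases": []},  # not yet covered by any test
}

# Report requirements that no test case traces back to (a coverage gap).
gaps = [req_id for req_id, row in traceability.items() if not row["test_cases"]]
print("Requirements without tests:", gaps)

# Export the matrix as CSV so it can be reviewed alongside the project file.
out = StringIO()
writer = csv.writer(out)
writer.writerow(["Unique No.", "Requirement", "Test Case(s)"])
for req_id, row in traceability.items():
    writer.writerow([req_id, row["requirement"], ";".join(row["test_cases"])])
print(out.getvalue())
```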


What is a bug, defect, issue, error?
- Bug - a fault in a program which causes the program to perform in an unintended or unanticipated manner. "Bug" is the term commonly used by test engineers.
- Defect - nonconformance to requirements or to the functional/program specification. "Defect" is the term commonly used by programmers.
- Issue - a major problem that will impede the progress of the project and cannot be resolved by the project manager and project team without outside help.
- Error - the deviation of a measurement, observation, or calculation from the truth.


Risk Analysis:
A risk is the potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits a vulnerability in the security of a computer-based system.

Risk Identification:
1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Business Risks: The most common risks associated with the business using the software.
3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.
4. Premature Release Risk: The ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood; and initiating strategies to test those risks.

What is a hot fix?
A hot fix is a single, cumulative package that includes one or more files that are used to address a problem in a software product. Typically, hot fixes are made to address a specific customer situation and may not be distributed outside the customer organization. Hot fixes are generally provided for high-priority bugs found at the customer site.

What is Defect Removal Efficiency?
DRE is the percentage of defects that have been removed during an activity. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity, or computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.):
DRE = (Number of Defects Removed / Number of Defects at Start of Process) x 100
When comparing in-house testing with field use, DRE = A / (A + B), where A = defects found by the testing team and B = defects found by the customer. For example, if the testing team finds 80 defects and the customer later finds 20, DRE = 80 / (80 + 20) = 0.8. If DRE >= 0.8 the defect removal process is generally considered good; otherwise it is not.
### Section 7 - Common Terminology Ends Here ###


Section 8 - Scenario Based Testing

What is the difference between Sanity testing and Smoke testing?
A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test. A sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests on a weekly build as part of their development process. (A small illustrative smoke check is sketched after the next two questions.)

What is the exact difference between a product and a project? Give an example.
A project is developed for a particular client, and the requirements are defined by that client. A product is developed for the market; the requirements are defined by the company itself by conducting a market survey.
A simple example: the shirt we have stitched by a tailor to our own specifications is a project; a ready-made shirt, where the company decides on standard measurements and makes the garment, is a product. Mainframes is a product. A product typically has many versions, whereas a project has fewer versions, depending on change requests and enhancements.

What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.
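Referring back to the smoke/sanity distinction above, here is a minimal hedged sketch of a smoke check for a web application in Python. The URL, the endpoints, the timeout, and the pass criteria are illustrative assumptions only; a real smoke suite would check whatever "does it launch and respond" means for the product in question.

```python
import sys
import requests  # third-party HTTP client; assumed available in the test environment

BASE_URL = "http://localhost:8080"  # hypothetical deployment under test

def smoke_check() -> bool:
    """Return True if the application appears up and its entry points respond."""
    try:
        home = requests.get(f"{BASE_URL}/", timeout=5)
        login = requests.get(f"{BASE_URL}/login", timeout=5)
    except requests.RequestException as exc:
        print(f"SMOKE FAIL: application not reachable ({exc})")
        return False
    ok = home.status_code == 200 and login.status_code == 200
    print("SMOKE PASS" if ok else "SMOKE FAIL: unexpected HTTP status")
    return ok

if __name__ == "__main__":
    # If the smoke check fails, there is no point running the sanity or regression suites.
    sys.exit(0 if smoke_check() else 1)
```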


What if there isn't enough time for thorough testing?
Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) The following points can be considered:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
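As a rough illustration of using risk analysis to focus limited testing time, here is a hedged Python sketch that scores features by likelihood and impact and sorts them into a test order. The feature names and the 1-5 scales are invented for the example; the questions listed above are what would actually drive the scores.

```python
# Each entry: feature name -> (likelihood of failure 1-5, impact if it fails 1-5).
# The scores would come from answering the risk questions listed above.
features = {
    "payment processing": (3, 5),
    "report generation":  (4, 3),
    "user profile page":  (2, 2),
    "admin audit log":    (1, 4),
}

def risk_score(likelihood: int, impact: int) -> int:
    """Simple risk exposure: likelihood x impact. Higher means test earlier and deeper."""
    return likelihood * impact

prioritized = sorted(features.items(),
                     key=lambda item: risk_score(*item[1]),
                     reverse=True)

for name, (likelihood, impact) in prioritized:
    print(f"{name}: risk score {risk_score(likelihood, impact)}")
# Testing effort is then allocated from the top of this list downwards.
```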

What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is needed to prioritize the areas of testing. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

Will automated testing tools make testing easier?
For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable. A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool.


The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.

Another common approach to automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform the specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. (A small data-driven sketch appears after this discussion.)

Other automated tools can include:
- Code analyzers - monitor code complexity, adherence to standards, etc.
- Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
- Memory analyzers - such as bounds-checkers and leak detectors.
- Load/performance test tools - for testing client/server and web applications under various load levels.
- Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
- Other tools - for test case management, documentation management, bug reporting, and configuration management.

What's the best way to choose a test automation tool?
In manual testing, the test engineer exercises software functionality to determine if the software is behaving in an expected way. This means that the tester must be able to judge what the expected outcome of a test should be, such as expected data outputs, screen messages, changes in the appearance of a User Interface, XML files, database changes, etc. In an automated test, the computer does not have human-like 'judgment' capabilities to determine whether or not a test outcome was correct. This means there must be a mechanism by which the computer can do an automatic comparison between actual and expected results for every automated test scenario and unambiguously make a pass or fail determination.

To build the needed background: read through information on the web about test automation, such as general information available on test tool vendor sites or in automated testing articles; read some books on test automation; obtain test tool trial versions or low-cost or open source test tools and experiment with them; and attend software testing conferences or training courses related to test automation.

As in anything else, proper planning and analysis are critical to success in choosing and utilizing an automated test tool. Choosing a test tool just for the purpose of 'automating testing' is not useful; useful purposes might include: testing more thoroughly, testing in ways that were not previously feasible via manual methods (such as load testing), testing faster, or reducing excessively tedious manual testing. Automated testing rarely enables savings in the cost of testing, although it may result in software lifecycle savings (or increased sales), just as with any other quality-related initiative.

With the proper background and understanding of test automation, the following considerations can be helpful in choosing a test tool. Analyze the current non-automated testing situation to determine where testing is not being done or does not appear to be sufficient:
- Where is current testing excessively time-consuming?
- Where is current testing excessively tedious?
- What kinds of problems are repeatedly missed with current testing?
- What testing procedures are carried out repeatedly (such as regression testing or security testing)?
- What testing procedures are not being carried out repeatedly but should be?
- What test tracking and management processes can be implemented or made more effective through the use of an automated test tool?

Taking into account the testing needs determined by analysis of these considerations and other appropriate factors, the types of desired test tools can be determined. For each type of test tool (such as functional test tool, load test tool, etc.) the choices can be further narrowed based on the characteristics of the software application. The relevant characteristics will depend, of course, on the situation, the type of test tool, and other factors. Such characteristics could include the operating system, GUI components, development languages, web server type, etc. Other factors affecting a choice could include the experience level and capabilities of test personnel, advantages/disadvantages of developing a custom automated test tool, tool costs, tool quality and ease of use, usefulness of the tool on other projects, etc. Once a short list of potential test tools is selected, several can be utilized on a trial basis for a final determination. Any expensive test tool should be thoroughly analyzed during its trial period to ensure that it is appropriate and that its capabilities and limitations are well understood. This may require significant time or training, but the alternative is to take a major risk of a mistaken investment.
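As a rough illustration of the data-driven approach described above, here is a hedged Python sketch in which the test data live in a CSV file, separate from the driver that reads and executes them. The CSV columns, the apply_discount function, and the file name are made-up placeholders; a real keyword-driven framework would add named actions as well as data.

```python
import csv

# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def run_data_driven_tests(path: str) -> None:
    """Test driver: reads rows of (price, percent, expected) and checks each one.
    Keeping the data in a spreadsheet/CSV lets testers add cases without touching code."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            price = float(row["price"])
            percent = float(row["percent"])
            expected = float(row["expected"])
            actual = apply_discount(price, percent)
            verdict = "PASS" if abs(actual - expected) < 0.01 else "FAIL"
            print(f"{verdict}: apply_discount({price}, {percent}) = {actual}, expected {expected}")

if __name__ == "__main__":
    # discount_cases.csv (illustrative contents):
    # price,percent,expected
    # 100,10,90.00
    # 59.99,0,59.99
    run_data_driven_tests("discount_cases.csv")
```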


Why is it often hard for organizations to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors." "My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
This is a problem in any business, but it's a particularly difficult problem in the software industry. Software quality problems are often not as readily apparent as they might be in the case of an industry with more physical products, such as auto manufacturing or home construction. Additionally, many organizations are able to determine who is skilled at fixing problems, and then reward such people. However, determining who has a talent for preventing problems in the first place, and figuring out how to incentivize such behavior, is a significant challenge.

Who is responsible for risk management?
Risk management means the actions taken to avoid things going wrong on a software development project, things that might negatively impact the scope, quality, timeliness, or cost of a project. This is, of course, a shared responsibility among everyone involved in a project. However, there needs to be a 'buck stops here' person who can consider the relevant tradeoffs when decisions are required, and who can ensure that everyone is handling their risk management responsibilities. It is not unusual for the term 'risk management' to never come up at all in a software organization or project. If it does come up, it's often assumed to be the responsibility of QA or test personnel. Or there may be a 'risks' or 'issues' section of a project, QA, or test plan, and it's assumed that this means that risk management has taken place.
It's generally NOT a good idea for a test lead, test manager, or QA manager to be the 'buck stops here' person for risk management. Typically QA/Test personnel or managers are not managers of developers, analysts, designers and many other project personnel, and so it would be difficult for them to ensure that everyone on a project is handling their risk management responsibilities. Additionally, knowledge of all the considerations that go into risk management mitigation and tradeoff decisions is rarely the province of QA/Test personnel or managers.


Based on these factors, the project manager is usually the most appropriate 'buck stops here' risk management person. QA/Test personnel can, however, provide input to the project manager. Such input could include analysis of quality-related risks, risk monitoring, process adherence reporting, defect reporting, and other information.

Who should decide when software is ready to be released?
In many projects this depends on the release criteria for the software. Unfortunately it is nearly impossible to adequately specify useful criteria without a significant amount of assumptions and subjectivity. For example, if the release criteria are based on passing a certain set of tests, there is likely an assumption that the tests have adequately addressed all appropriate software risks. Additionally, since most software projects involve a balance of quality, timeliness, and cost, testing alone cannot address how to balance all three of these competing factors when release decisions are needed.
A typical approach is for a lead tester or QA or Test manager to be the release decision maker. This again involves significant assumptions - such as an assumption that the test manager understands the spectrum of considerations that are important in determining whether software quality is 'sufficient' for release, or the assumption that quality does not have to be balanced with timeliness and cost. In many organizations, 'sufficient quality' is not well defined, is extremely subjective, may have never been usefully discussed, or may vary from project to project or even from day to day.
Release criteria considerations can include deadlines, sales goals, business/market/competitive considerations, business segment quality norms, legal requirements, technical and programming considerations, end-user expectations, internal budgets, impacts on other organization projects or goals, and a variety of other factors. Knowledge of all these factors is often shared among a number of personnel in a large organization, such as the project manager, director, customer service manager, technical lead or manager, marketing manager, QA manager, etc. In smaller organizations or projects it may be appropriate for one person to be knowledgeable in all these areas, but that person is typically a project manager, not a test lead or QA manager. For these reasons, it's generally not a good idea for a test lead, test manager, or QA manager to decide when software is ready to be released. Their responsibility should be to provide input to the appropriate person or group that makes a release decision. For small organizations and projects that person could be a product manager or a project manager. For larger organizations and projects, release decisions might be made by a committee of personnel with sufficient collective knowledge of the relevant considerations.

What can be done if requirements are changing continuously?
This is a common problem for organizations where there are expectations that requirements can be pre-determined and remain stable. If these expectations are reasonable, here are some approaches:


- Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. If the code is well-commented and well-documented, this makes changes easier for the developers.
- Use some type of rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted.
- Balance the effort put into setting up automated tests with the expected effort required to refactor them to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

If this is a continuing problem, and the expectation that requirements can be pre-determined and remain stable is NOT reasonable, it may be a good idea to figure out why the expectations are not aligned with reality, and to refactor the organization's or project's software development process to take this into account. It may be appropriate to consider agile development approaches.

What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it could indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. (If the functionality is minor and low risk, then no action may be necessary.) If not removed, information will be needed to determine risks and to determine any added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality.


This problem is a standard aspect of projects that include COTS (Commercial Off-The-Shelf) software or modified COTS software. The COTS part of the project will typically have a large amount of functionality that is not included in project requirements, or may be simply undetermined. Depending on the situation, it may be appropriate to perform in-depth analysis of the COTS software and work closely with the end user to determine which pre-existing COTS functionality is important and which functionality may interact with or be affected by the non-COTS aspects of the project. A significant regression testing effort may be needed (again, depending on the situation), and automated regression testing may be useful.

How can Software QA processes be implemented without reducing productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on processes, focusing on processes that align tightly with organizational goals, and adjusting/experimenting/refactoring as an organization matures, productivity can be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, avoid a 'Process Police' mentality, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning, reviews, and inspections will be needed, but less time will be required for late-night bug-fixing and handling of irate customers.

How can it be determined if a test environment is appropriate?
This is a difficult question in that it typically involves tradeoffs between 'better' test environments and cost. The ultimate situation would be a collection of test environments that mimic exactly all possible hardware, software, network, data, and usage characteristics of the expected live environments in which the software will be used. For many software applications, this would involve a nearly infinite number of variations, and would clearly be impossible. And for new software applications, it may also be impossible to predict all the variations in environments in which the application will run. For very large, complex systems, duplication of a 'live' type of environment may be prohibitively expensive. In reality, judgments must be made as to which characteristics of a software application environment are important, and test environments can be selected on that basis after taking into account time, budget, and logistical constraints. Such judgments are preferably made by those who have the most appropriate technical knowledge and experience, along with an understanding of risks and constraints.


For smaller or lower-risk projects, an informal approach is common, but for larger or higher-risk projects (in terms of money, property, or lives) a more formalized process involving multiple personnel and significant effort and expense may be appropriate. In some situations it may be possible to mitigate the need for maintenance of large numbers of varied test environments. One approach might be to coordinate internal testing with beta testing efforts. Another possible mitigation approach is to provide built-in automated tests that run automatically upon installation of the application by end users. These tests might then automatically report back information, via the internet, about the application environment and problems encountered.

What's the best approach to software test estimation?
The 'best approach' is highly dependent on the particular organization and project and the experience of the personnel involved. For example, given two software projects of similar complexity and size, the appropriate test effort for one project might be very large if it was for life-critical medical equipment software, but might be much smaller for the other project if it was for a low-cost computer game. A test estimation approach that only considered size and complexity might be appropriate for one project but not for the other. The following are some approaches to consider.

Implicit Risk Context Approach: A typical approach to test estimation is for a project manager or QA manager to implicitly use risk context, in combination with past personal experience in the organization, to choose a level of resources to allocate to testing. In many organizations, the 'risk context' is assumed to be similar from one project to the next, so there is no explicit consideration of risk context. (Risk context might include factors such as the organization's typical software quality levels, the software's intended use, the experience level of developers and testers, etc.) This is essentially an intuitive guess based on experience.

Metrics-Based Approach: A useful approach is to track the past experience of an organization's various projects and the associated test effort that worked well for those projects. Once there is a set of data covering characteristics for a reasonable number of projects, this 'past experience' information can be used for future test project planning. (Determining and collecting useful project metrics over time can be an extremely difficult task.) For each new project, the 'expected' required test time can be adjusted based on whatever metrics or other information is available, such as function point count, number of external system interfaces, unit testing done by developers, risk levels of the project, etc. In the end, this is essentially 'judgment based on documented experience', and is not easy to do successfully.


Test Work Breakdown Approach: Another common approach is to decompose the expected testing tasks into a collection of small tasks for which estimates can, at least in theory, be made with reasonable accuracy. This of course assumes that an accurate and predictable breakdown of testing tasks and their estimated effort is feasible. In many large projects, this is not the case. For example, if a large number of bugs are being found in a project, this will add to the time required for testing, retesting, bug analysis and reporting. It will also add to the time required for development, and if development schedules and efforts do not go as planned, this will further impact testing.

Iterative Approach: In this approach for large test efforts, an initial rough testing estimate is made. Once testing begins, a more refined estimate is made after a small percentage (e.g., 1%) of the first estimate's work is done. At this point testers have obtained additional test project knowledge and a better understanding of issues, general software quality, and risk. Test plans and schedules can be refactored if necessary and a new estimate provided. Then a yet-more-refined estimate is made after a somewhat larger percentage (e.g., 2%) of the new work estimate is done. The cycle is repeated as necessary/appropriate.

Percentage-of-Development Approach: Some organizations utilize a quick estimation method for testing based on the estimated programming effort. For example, if a project is estimated to require 1000 hours of programming effort, and the organization normally finds that a 40% ratio for testing is appropriate, then an estimate of 400 hours for testing would be used. This approach may or may not be useful depending on the project-to-project variations in risk, personnel, types of applications, levels of complexity, etc.

Successful test estimation is a challenge for most organizations, since few can accurately estimate software project development efforts, much less the testing effort of a project. It is also difficult to attempt testing estimates without first having detailed information about a project, including detailed requirements, the organization's experience with similar projects in the past, and an understanding of what should be included in a 'testing' estimation for a project.

How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as JavaScript, Flash, and other plug-in applications), and applications that run on the server side (database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:


- What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing?
- Who is the target audience? What kind of browsers will they be using? What kinds of connection speeds will they have? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should Flash, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be required, and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, connection variability, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system and do they require testing?
- How are Flash, applets, JavaScripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.

### Section 8 - Scenario Based Testing Ends Here ###

Section 9 - Processes & Standards

What is 'configuration management'?
Configuration management is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines. It covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. The tasks involved in configuration management are as follows:
- Configuration identification - what code are we working with?
- Configuration control - controlling the release of a product and its changes.
- Status accounting - recording and reporting the status of components.
- Review - ensuring completeness and consistency among components.
- Build management - managing the process and tools used for builds.
- Process management - ensuring adherence to the organization's development process.
- Environment management - managing the software and hardware that host our system.
- Teamwork - facilitating team interactions related to the process.
- Defect tracking - making sure every defect has traceability back to the source.

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now superseded by CMMI ('Capability Maturity Model Integration'). It is a model of five levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
- Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
- Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.


- Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
- Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
- Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about three years, after which a complete reassessment is required. ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed.

IEEE = 'Institute of Electrical and Electronics Engineers' - Among other things, creates standards such as the 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), the 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), the 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; it publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

