Testing

4.1.1 Unit testing

Unit testing is the most basic type of software testing, if we set aside the notion that document reviews are really a type of testing. Unit testing is usually conducted by the individual producer. It is primarily a debugging activity that concentrates on the removal of coding mistakes, and it is part and parcel of the coding activity itself. Even though unit testing is conducted almost as a part of the day-to-day development activity, there must be some level of planning for it. The programmer should document at least the test data and cases he or she plans to use and the results expected from each test. Part of each walkthrough or inspection of the software should be dedicated to the review of the unit test plans so that peers can be sure the programmer has given thought to the test needs at that level.

It is worth reiterating the tenet that the tests run must be oriented to finding defects in the software, not to showing that the software runs as it is written. Further, the defects found will include not only mistakes in the coding of the unit, but design and even requirements inadequacies or outright mistakes. Even though the unit is the smallest individually compilable portion of the software system, its interfaces and data manipulation can point out wide-reaching defects.

Informal though it may be, the unit testing activity is the first chance to see some of the software in action. It can be seen that the rule of finding defects, not showing that software runs, could be in jeopardy here. In fact, a tradeoff is in play with having programmers test their own software. The expense, in both time and personnel, of introducing an independent tester at this point usually outweighs the danger of inadequate testing. With high-quality peer reviews and good, though informal, documentation of the tests and their results, the risk is reduced to a low level. Software quality practitioners, in their audits of the UDF and their reviews of the testing program as a whole, will also pay close attention to the unit test plans and results.
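
As a minimal illustration of documenting unit-test data and expected results, the sketch below uses Python's unittest; the function under test and the specific values are hypothetical, not taken from the text.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): return price reduced by percent."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountUnitTest(unittest.TestCase):
    """Unit test plan: each case documents its input data and expected result."""

    def test_typical_discount(self):
        # Test data: price 10.00, 25% off; expected result: 7.50
        self.assertEqual(apply_discount(10.00, 25), 7.50)

    def test_zero_discount(self):
        # Boundary case: 0% off leaves the price unchanged
        self.assertEqual(apply_discount(10.00, 0), 10.00)

    def test_invalid_discount_rejected(self):
        # Defect-finding case: an illegal value must raise, not return garbage
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Even at this informal level, the test cases themselves serve as the documentation of test data and expected results that peers can review in a walkthrough.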

4.1.2 Module testing

Module testing is a combination of debugging and integration. It is sometimes called glass box testing (or white box testing), because the tester has good visibility into the structure of the software and frequently has access to the actual source code with which to develop the test strategies. As integration proceeds, the visibility into the actual code is diminished.

As units are integrated into their respective modules, the testing moves appropriately from a unit testing (that is, debugging) mode into the more rigorous module testing mode. Module integration and testing examine the functional entities of the system. Each module is assigned some specific function of the software system to perform. As the units that make up the module are brought together into that functional unit, the module tests are run.

The testing program becomes somewhat more rigorous at the module level because the individual programmer is not now the primary tester. There will be in place a more detailed test plan, sets of data and test cases, and expected results. The recording of defects is also more comprehensive at this stage of the test program. Defects are recorded in defect history logs, and regularized test reports are prepared. As they are found, the defects are fed back into the code and unit test phase for correction. Each defect is tracked from its finding and reporting through its correction and retest. The results of the correction are monitored and controlled by the configuration management system that is begun at this point in the SLC.

That is important, since many of the errors that have been made and defects that have been discovered will affect the design and requirements documentation. Most of the minor coding mistakes will have been caught and corrected in the unit testing process. The defects that are being found in the module tests are more global in nature, tending to affect multiple units and modules. Defects in interfaces and data structures are common, but a significant number of the defects will involve deficiencies in the design and requirements. As those deficiencies come to light and are corrected, the design and requirements baselines will change.

It is critical to the rest of the SLC that close control of the evolving documents be maintained. If the corrections to the defects found in the test program are allowed to change the products of earlier SLC phases without proper control and documentation, the software system quickly can get out of control. When a requirement or the design changes without commensurate modification to the rest of the system, there will come a time when the various pieces do not fit together, and it will not be clear which versions of the units and modules are correct.

Software quality practitioners will have reviewed the test plans and the rest of the documentation prior to the module testing. Software quality practitioners are also expected to review the results of the testing. Their reviews ensure that defects will be recorded, tracked, resolved, and configuration-managed.
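
To make the defect-recording step concrete, here is a minimal sketch of one defect-history log entry and its tracking states; the field names and status values are illustrative assumptions, not taken from the text.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class DefectStatus(Enum):
    REPORTED = "reported"
    CORRECTED = "corrected"
    RETESTED = "retested"
    CLOSED = "closed"

@dataclass
class DefectRecord:
    """One entry in a defect history log (illustrative fields only)."""
    defect_id: str
    module: str
    description: str
    found_in_phase: str            # e.g. "module test"
    affects_baseline: bool         # True if design/requirements must change
    status: DefectStatus = DefectStatus.REPORTED
    history: list = field(default_factory=list)

    def advance(self, new_status: DefectStatus, when: date, note: str = "") -> None:
        # Track the defect from its finding through correction and retest.
        self.history.append((when, new_status, note))
        self.status = new_status
```

A record of this sort is what feeds both the regularized test reports and the configuration management trail described above.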

4.1.3 Integration testing

Integration testing may be considered to have officially begun when the modules begin to be tested together. This type of testing sometimes is referred to as gray box testing, referring to the limited visibility into the software and its structure. As integration proceeds, gray box testing approaches black box testing, which is more nearly pure function testing, with no reliance on knowledge of the software structure or the software itself.

As modules pass their individual tests, they are brought together into functional groups and tested. Testing of the integrated modules is designed to find latent defects as well as interface and database defects. Because testing up to this point has been of individual modules, several types of defects cannot be detected. Such things as database interference, timing conflicts, interface mismatches, memory overlaps, and so on, are found only when the modules are forced to work together in integrated packages.

Integration testing uses the same sorts of conditions and data as the individual module tests. Valid data and messages are input, as are invalid conditions and situations. The test designer must be creative in coming up with valid combinations of possible circumstances as well as with illegal or invalid conditions. How the integrated software responds to those situations is noted, as well as the software's performance with valid inputs.

Integration testing is the level at which the quality control practitioner or tester begins to see differences between traditional systems and client-server or distributed processing applications. The greatly increased sets of inputs and initial conditions require more elaborate testing schemes such as record and playback, automated test generation, software characterization, data equivalence, and sampling.

The reporting of test results is important in the integration test period. How the software responds is recorded and analyzed so corrections can be made that fix the defect but do not introduce new defects somewhere else. Error and defect logs should be maintained for trend analysis that can point to particularly vulnerable portions of the software and its development. Those portions can then receive additional testing to ferret out deep-seated anomalies and improper responses. Close control must be maintained of the configuration of the software system through this period so that all changes are properly documented and tracked. It is in this time frame that many software systems get out of hand and accounting is lost as to which version of which unit, module, or subsystem is the proper one to use at any point.

It is the integration test phase that will uncover many hidden defects in the design and requirements. Formal reviews and less formal walkthroughs and inspections have been used to find many of the design and requirements defects, but as the software is put into use in an increasingly realistic manner, other defects may surface that were beyond the depth of the earlier defect-finding efforts. As defects are found in the design or requirements, they must be corrected and changes to the earlier documents made. That in turn may necessitate rework of design, code, and earlier testing. Finding such serious defects at this point is expensive, but less so than finding the defects in the operations phase. Thus, every effort must be made to maximize the defect-finding capabilities of the integration tests.

An important role for the software quality practitioner in this effort is the review of the integration test plans, cases, scenarios, and procedures. Software quality practitioners should make every effort to ensure that the integration tests cover the full range of capabilities of the integrated set of modules. The test results, and the decisions made on the basis of those results, should also be reviewed and approved by the software quality practitioner before testing progresses beyond the integration phase.
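
A brief sketch of an integration-level test that exercises both valid and invalid inputs across a module interface; the two modules and their interface here are hypothetical stand-ins written only to make the idea concrete.

```python
import unittest

# Hypothetical modules being integrated: an order parser and an inventory store.
def parse_order(message: str) -> dict:
    """Parse 'ITEM,QTY' into an order; raise ValueError on malformed input."""
    try:
        item, qty = message.split(",")
        return {"item": item.strip(), "qty": int(qty)}
    except ValueError as exc:
        raise ValueError(f"malformed order message: {message!r}") from exc

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, order: dict) -> bool:
        """Reserve stock; refuse quantities that are invalid or too large."""
        if order["qty"] <= 0 or order["item"] not in self.stock:
            return False
        if self.stock[order["item"]] < order["qty"]:
            return False
        self.stock[order["item"]] -= order["qty"]
        return True

class OrderInventoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        self.inventory = Inventory({"widget": 5})

    def test_valid_message_reserves_stock(self):
        self.assertTrue(self.inventory.reserve(parse_order("widget, 3")))
        self.assertEqual(self.inventory.stock["widget"], 2)

    def test_invalid_message_is_rejected_at_the_interface(self):
        with self.assertRaises(ValueError):
            parse_order("widget")          # missing quantity

    def test_excessive_quantity_is_refused_without_corrupting_data(self):
        self.assertFalse(self.inventory.reserve(parse_order("widget, 99")))
        self.assertEqual(self.inventory.stock["widget"], 5)

if __name__ == "__main__":
    unittest.main()
```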

4.1.4 User or acceptance testing

User testing is intended primarily to demonstrate that the software complies with its requirements. This type of testing is black box testing, which does not rely on knowledge of the software or the structure of the software. Acceptance testing is intended to challenge the software in relation to its satisfaction of the functional requirements. Acceptance tests are planned based on the requirements approved by the user or customer. All testing up to this time has been oriented to finding defects in the software. Earlier tests also were based on the requirements, but they were designed to show that the software did not comply in one fashion or another with the requirements. By the time the acceptance testing stage is reached, the software should be in a sufficiently defect-free state to permit the emphasis to change.

One important aspect of the acceptance test is that, whenever possible, it is performed by actual intended users of the system. In that way, while it is being shown that the software complies with its requirements, there is still the opportunity to introduce anomalous user actions that have not yet been encountered. Persons unfamiliar with the system may enter data in incorrect, though technically permitted, ways. They may push the wrong buttons or the correct buttons in an incorrect sequence. The software's response to those unexpected or incorrect situations is important to the user: the system should not collapse due to human mistakes. The overriding requirement for every system is that it performs its intended function. That means that if incorrect actions or data are presented, the system will not just abort but will tell the user what has been done wrong and will provide the user the opportunity to retry the action or input. Invalid data received from outside sources also should be treated in such a manner as to prevent collapse of the system.

Another important consideration of an acceptance test is verification that the new software does not cause changes to workflow or user responsibilities that have been overlooked. While it may be shown that the software performs exactly as expected, the associated human-factor changes may make the system difficult to use or cause negative effects on the related work of the users.

The acceptance or user test is usually the last step before the user or customer takes possession of the software system. It is important that software quality and configuration management practitioners play active roles in the review and execution of the tests and the change management of the system during this period. Software quality practitioners may even have performed the full execution of the acceptance test as a dry run prior to the release of the system for the user operation of the test. Configuration management of the system at this time is critical to the eventual delivery of the exact system that passes the acceptance test.
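
The point that incorrect input should produce a helpful message and a retry opportunity, rather than a crash, can be sketched as follows; the field, messages, and sample inputs are invented for illustration.

```python
def read_quantity(entered: str):
    """Validate a user-entered quantity: return (value, None) or (None, error message).

    Instead of aborting on bad input, the caller can show the message and let
    the user retry, which is the behavior acceptance testers look for.
    """
    try:
        qty = int(entered)
    except ValueError:
        return None, f"'{entered}' is not a number; please enter a whole number."
    if qty <= 0:
        return None, "Quantity must be greater than zero; please re-enter it."
    return qty, None

# Anomalous user actions an acceptance tester might try, plus one valid entry:
for attempt in ["ten", "-3", "0", "4"]:
    value, error = read_quantity(attempt)
    print(attempt, "->", value if error is None else error)
```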

4.1.5 Special types of tests

Four types of tests may be considered to fall into the "special" category. These tests are planned and documented according to the same rules and standards as the other types of tests, but they have specific applications. The four major special tests are regression tests, stress tests, recovery tests, and back-out and restoration tests.

Regression tests

Regression tests show that modifications to one part of the system have not invalidated some other part. Regression tests usually are a subset of the user or acceptance test. They are maintained for verification that changes made as a result of defects or enhancements during operation do not result in failures in other parts of the system. Regression tests are an abbreviated revalidation of the entire system, using generally valid data, to show that the parts that were operating correctly before the changes are still performing as required.

Discussions "around the water cooler" indicate that as many as 50% of all changes made to a software system result in the introduction of new defects. That figure may be high or low, but there is clearly significant risk in introducing corrections. Some of the new defects, of course, are errors in the change being made, such as coding errors and change design mistakes. Others, however, come from unexpected interactions with subsystems other than the one being modified. A change to the way a database variable is updated in one module may affect the time at which another module should read that variable in its own computations.

Close configuration management control and analysis of changes and their impact on the system as a whole are imperative. Software quality practitioners must be sure that a change control board or equivalent function is involved in all change activity during both integration testing and the operation phases of the SLC. That protects the integrity of the baseline system itself and helps ensure that changes are being made to the correct versions of the affected software. Delivery of the proper versions of the modifications is also a function of configuration management that software quality practitioners must monitor.
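
One common way to keep a regression subset ready to rerun after every change is to tag the relevant cases; the sketch below assumes pytest and its marker mechanism, and the tiny stand-in system and test names are hypothetical.

```python
# test_sales_regression.py -- run only the regression subset after a change with:
#   pytest -m regression test_sales_regression.py
# (register the marker in pytest.ini:  markers = regression)
import pytest

# Minimal stand-in for the system under test (hypothetical).
PRICES = {"milk": 2.50, "bread": 1.75}

def sell_item(item: str) -> float:
    return PRICES[item]

@pytest.mark.regression
def test_known_item_price_unchanged():
    # Part of the regression subset rerun after every change.
    assert sell_item("milk") == 2.50

@pytest.mark.regression
def test_second_item_price_unchanged():
    assert sell_item("bread") == 1.75

def test_unknown_item_raises():
    # Broader acceptance-level case, not in the quick regression subset.
    with pytest.raises(KeyError):
        sell_item("caviar")
```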

Stress tests

Stress tests cover the situations that occur when the software is pushed to or beyond its limits of required capability. Such situations as the end of the day, when the software is required to recognize that 00:00:00 is later than 23:59:59, must be challenged. The rollover of the year field also is a situation ripe for testing. Will the software realize that the years "00" and "000" are later than the years "99" and "999," respectively?

Other stress situations occur when the software is presented with the full number of transactions it is expected to handle plus one or two more. What happens when transaction n + 1 is presented? Does one of the existing transactions get overwritten? Is there a weighting algorithm that selects some transaction for replacement? Is the new transaction merely ignored? Still another case is the situation in which the software is run for a long time without interruption. Such a case could easily expose flaws in housekeeping or initialization routines.

Stress tests are an important part of any test program. The types of stress that might be exercised will become apparent as the software develops and the testers understand its construction more clearly. The requirements statement should spell out a valid way of handling these and other situations. The compliance of the software with the requirement is to be challenged.
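
A small sketch of the capacity stress case: fill a hypothetical transaction buffer to its required limit and then present one more, checking that nothing is silently overwritten. The buffer, limit, and required behavior are all assumptions made for the example.

```python
import unittest

MAX_TRANSACTIONS = 100   # hypothetical required capacity

class TransactionBuffer:
    """Stand-in for the component under stress: holds at most MAX_TRANSACTIONS."""
    def __init__(self):
        self._items = []

    def add(self, txn) -> bool:
        # Assumed requirement: reject the extra transaction rather than
        # overwrite or drop an existing one.
        if len(self._items) >= MAX_TRANSACTIONS:
            return False
        self._items.append(txn)
        return True

    def __len__(self):
        return len(self._items)

class CapacityStressTest(unittest.TestCase):
    def test_transaction_n_plus_one(self):
        buf = TransactionBuffer()
        for i in range(MAX_TRANSACTIONS):
            self.assertTrue(buf.add(f"txn-{i}"))
        # The (n+1)th transaction must not overwrite or corrupt earlier ones.
        self.assertFalse(buf.add("txn-overflow"))
        self.assertEqual(len(buf), MAX_TRANSACTIONS)

if __name__ == "__main__":
    unittest.main()
```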

Recovery tests

Most data centers have recovery procedures for the repair of data on a damaged disk or tape, and they also consider the case of operator errors that may invalidate some of the data being processed. Recovery testing is conducted when a hardware fault or operating error damages the software or the data. This type of testing is critical to the confidence of the user when a data or software restoration has been performed. Often, restoration testing can be accomplished by using the regression test software. In other cases, the full acceptance test might be required to restore confidence in the software and its data.

Back-out and restoration tests

Backing out and restoring is the decision to remove a new software system in favor of the older version that it replaced. Needless to say, developers usually are embarrassed by such an action. It is recognition that the new system was insufficiently tested or was so error-ridden that it was worse to use than the old system. In a back-out and restoration situation, the new system is removed from production, any new database conditions are restored to the way they would have been under the old system, and the old system itself is restarted.

In the least critical case, the database used by the new system is the same as that of the old system. More often than not, the new system provides expanded database content as well as improved processing. When the contents of the new database must be condensed back into the form of the old database, care must be taken to restore the data to the form in which the old system would have used it.

The testing required includes at least the acceptance test of the old system, which often is augmented by the running of the most recent set of regression tests used for the old system. Clearly, there must have been some planning for back-out and replacement when the new system was installed. The old system normally would have been archived, but the saving of the acceptance test and the regression tests must also have been part of the archiving process.

It is rare that a newly installed system is so badly flawed that it must be replaced. However, it is the responsibility of the quality practitioner to make management aware of the threat, no matter how remote.

4.2 Test planning and conduct

Testing is like any other project. It must be planned, designed, documented, reviewed, and conducted.

4.2.1 Test plans

Because proper testing is based on the software requirements, test planning starts during the requirements phase and continues throughout the SDLC. As the requirements for the software system are prepared, the original planning for the test program also gets underway. Each requirement eventually will have to be validated during the acceptance testing. The plans for how that requirement will be demonstrated are laid right at the start. In fact, one of the ways the measurable and testable criteria for the requirements are determined is by having to plan for the test of each requirement. The test planning at this point is necessarily high level, but the general thrust of the acceptance demonstration can be laid out along with the approaches to be used for the intermediate testing.

Requirements traceability matrices (RTMs), which track the requirements through design and down to the code that implements them, are used to prepare test matrices. These matrices track the requirements to the tests that demonstrate software compliance with the requirements. Figure 4.2 is an example of what a test traceability matrix might look like. Each requirement, both functional and interface, is traced to the primary (P) test that demonstrates its correct implementation. In an ideal test situation, each requirement will be challenged by one specific test. That is rarely the case, but redundant testing of some requirements and the failure to test others are quickly apparent in the RTM. Figure 4.2 also indicates other tests in which the requirements are involved (I). In this way, there is some indication of the interrelationships between the various requirements. As the software matures and requirements are modified, this matrix can offer clues to unexpected and usually undesirable results if a requirement is changed or eliminated.

Conflicts between requirements can sometimes be indicated in the RTM as the I fields are completed. A common example of requirements conflicts is the situation that calls for both high-speed processing and efficient use of memory, as in the case of real-time, embedded software. The fastest software is written in a highly linear style with little looping or calling of subroutines. Efficient use of memory calls for tight loops, subroutine calls, and other practices that tend to consume more processing time.

Figure 4.2 is an example of an RTM at the system or black box testing level, since the requirements are noted as functions. As the SDLC progresses, so does the planning for the testing, and the RTM becomes more and more detailed until each specific required characteristic of the software has been challenged in at least one test at some level. Not every requirement can or should be tested at every level of the test program. Compliance with some can be tested at the white box level; some cannot be fully challenged until the black box testing is in progress.
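
As a rough illustration of how an RTM can be used mechanically, the sketch below maps hypothetical requirement IDs to the tests that exercise them ("P" for primary, "I" for involved) and flags untested or redundantly tested requirements; the IDs and test names are invented.

```python
# Hypothetical requirement-to-test traceability matrix.
# "P" marks the primary test for a requirement, "I" marks tests it is involved in.
rtm = {
    "REQ-001": {"TEST-UNIT-03": "P", "TEST-INT-01": "I"},
    "REQ-002": {"TEST-INT-02": "P"},
    "REQ-003": {},                                        # not yet covered
    "REQ-004": {"TEST-ACC-01": "P", "TEST-ACC-02": "P"},  # redundantly primary
}

untested = [req for req, tests in rtm.items()
            if "P" not in tests.values()]
redundant = [req for req, tests in rtm.items()
             if list(tests.values()).count("P") > 1]

print("Requirements with no primary test:", untested)
print("Requirements with more than one primary test:", redundant)
```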

The RTM is also important as the requirements evolve throughout the development of the software system. As the requirements that form the basis for testing are changed, added, or eliminated, each change likewise is going to affect the test program. Just as the requirements are the basis for everything that follows in the development of the software, so, too, are they the drivers for the whole test program.

Some items of test planning are necessarily left until later in the SDLC. Such things as the bases for regression testing are determined during the acceptance test period as the final requirements baseline is determined. Likewise, as new requirements are determined, so are the plans for testing those requirements. Even though some parts of the test planning will be done later, the overall test plan is completed during the requirements phase. It is also, therefore, one of the subjects of the system requirements review at the end of the requirements phase. As the approved requirements are released for the design phase activities, the approved test plans are released to the test design personnel for the beginning of the design of test cases and procedures. Figure 4.3 depicts the flow of testing, starting with the test plan and culminating in the test reports.

4.2.2 Test cases

The first step in function testing, and often in input/output testing, is to construct situations that mimic actual use of the software. These situations, or test cases, should represent actual tasks that the software user might perform. Once the test cases have been developed, the software requirements that are involved in each test case are identified. A check is made against the RTM to be sure that each requirement is included in at least one test case. If a test case is too large or contains too many requirements, it should be divided into subtest cases or scenarios. Test cases (and scenarios) should be small enough to be manageable. Limited size makes sure that errors uncovered can be isolated with minimum delay to, and effect on, the balance of the testing.

Consider the case of testing the software in a point-of-sale terminal for a convenience store. The store stocks both grocery and fuel products. The test cases might be as follows.

1. Open the store the very first time. This would test the requirements dealing with the variety of stock items to be sold, their prices, and the taxes to be applied to each item. It also includes requirements covering the setting of the initial inventory levels.

2. Sell products. Sales of various products might be further divided into test scenarios such as:

   - Sell only fuel. This scenario includes those requirements that deal with pump control, fuel levels in the tanks, and the prices and volume of fuel sold. It also tests those requirements that cause the sale to be recorded and the register tape to be printed.

   - Sell only grocery items. Here, the sales events are keyed in on the terminal rather than read from a pump register, so there are requirements being tested that are different from those of the preceding scenario. The sales recording requirements are probably the same.

   - Sell both fuel and grocery items. This scenario, building on the first two, causes the previous requirements to be met in a single sale. There may be additional requirements that prevent the keying of a grocery sale from adversely affecting the operation of the pump and vice versa. Other requirements might deal with the interaction of pump register readings with key-entered sales data. Further, a test of the ability to add pump sale charges to keyed sales charges is encountered.

3. Restock the store. After sufficient items have been sold, it becomes necessary to restock shelves and refill fuel tanks. This test case might also deal with the changing of prices and taxes and the modification of inventory levels. It can be seen as an extension of the requirements tested in test case 1.

4. Close the store for the last time. Even the best businesses eventually close. This test case exercises the requirements involved in determining and reporting the value of the remaining inventory. Some of these same requirements might be used in tallying weekly or other periodic inventory levels for business history and planning tasks.

Should comparison of the test cases and scenarios with the RTM reveal leftover requirements, additional situations must be developed until each requirement is included in at least one test case or scenario. Although this has been a simple situation, the example shows how test cases and scenarios can be developed using the actual anticipated use of the software as a basis.
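
To make the leftover-requirements check concrete, here is a small sketch (requirement IDs and scenario names are hypothetical) that reports any requirement not yet included in a test case or scenario:

```python
# Hypothetical requirements and the scenarios that exercise them.
all_requirements = {"REQ-PUMP", "REQ-KEYED-SALE", "REQ-RECEIPT",
                    "REQ-INVENTORY", "REQ-PRICE-CHANGE", "REQ-CLOSEOUT",
                    "REQ-RESTOCK"}

scenarios = {
    "open_store_first_time": {"REQ-INVENTORY", "REQ-PRICE-CHANGE"},
    "sell_only_fuel":        {"REQ-PUMP", "REQ-RECEIPT"},
    "sell_only_grocery":     {"REQ-KEYED-SALE", "REQ-RECEIPT"},
    "close_store_last_time": {"REQ-CLOSEOUT"},
}

covered = set().union(*scenarios.values())
leftover = all_requirements - covered
if leftover:
    print("Requirements needing additional test cases:", sorted(leftover))
else:
    print("Every requirement appears in at least one scenario.")
```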

4.2.3 Test procedures

As design proceeds, the test plans are expanded into specific test cases, test scenarios, and step-by-step procedures. Test procedures are step-by-step instructions that spell out the specific steps that will be taken in the execution of the test being run. They tell which buttons to push, what data to input, what responses to look for, and what to do if the expected response is not received. The procedures also tell the tester how to process the test outputs to determine whether the test passed or failed. The test procedures are tied to the test cases and scenarios that actually exercise each approved requirement. The software quality practitioner reviews the test cases and scenarios, the test data, and the test procedures to ensure that they all go together, follow the overall test plan, and fully exercise all the requirements for the software system. Figure 4.4 is a sample test procedure form.
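
A sketch of how one such procedure might be captured in structured form, so that each step names the action, the expected response, and what to do on failure; all identifiers and values here are illustrative, not a prescribed format.

```python
# One test procedure, tied to a test case, expressed as explicit steps.
procedure = {
    "procedure_id": "PROC-SALE-01",
    "test_case": "Sell only grocery items",
    "steps": [
        {"step": 1,
         "action": "Key in item code 1042 and press TOTAL",
         "expected": "Display shows $2.50 plus the sales tax line",
         "on_failure": "Record actual display, halt procedure, file defect report"},
        {"step": 2,
         "action": "Press CASH and enter $5.00",
         "expected": "Drawer opens; change due shows $2.50; receipt prints",
         "on_failure": "Record actual behavior and continue with the next step"},
    ],
}

for step in procedure["steps"]:
    print(f"Step {step['step']}: {step['action']} -> expect: {step['expected']}")
```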

4.2.4 Test data input

Input of test data is the key to testing and comes from a variety of sources. Traditionally, test data inputs have been provided by test driver software or tables of test data that are input at the proper time by an executive test control module specially written for the purpose. These methods are acceptable when the intent is to provide a large number of data values to check repetitive calculations or transaction processors. The use of these methods does, however, diminish the interactive capability of the test environment: the sequential data values are going to be presented regardless of the result of the preceding processing.

As the software system being tested becomes more complex, particularly in the case of interactive computing, a more flexible type of test environment is needed. Simulators, which are test software packages that perform in the same manner as some missing piece of hardware or other software, frequently are used. Simulators can be written to represent anything from a simple interfacing software unit to a complete spacecraft or radar installation. As data are received from the simulator and the results returned to it, the simulator is programmed to respond with new input based on the results of the previous calculations of the system under test. Another type of test software is a stimulator, which represents an outside software or hardware unit that presents input data independently of the activities of the system under test. An example might be the input of a warning message that interrupts the processing of the system under test and forces it to initiate emergency measures to deal with the warning.

The final step in the provision of interactive inputs is the use of a keyboard or a terminal operated by a test user. Here the responses to the processing by the system under test are, subject to the constraints of the test procedures, the same as they will be in full operation. Each type of data input fulfills a specific need as called out in the test documentation. The software quality practitioner will review the various forms of test data inputs to be sure that they meet the needs of the test cases and that the proper provisions have been made for the acquisition of the simulators, stimulators, live inputs, and so on.
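
A toy sketch of the simulator idea: a stand-in for a missing interfacing component that chooses its next input based on the previous output of the system under test. The pump controller, the system under test, and their message formats are all hypothetical.

```python
class PumpSimulator:
    """Stand-in for a missing fuel-pump controller (hypothetical interface)."""

    def __init__(self, tank_volume=100.0):
        self.tank_volume = tank_volume

    def next_input(self, last_result):
        # Respond to the system under test based on its previous output,
        # the way a real interfacing unit would.
        if last_result == "AUTHORIZE":
            return {"event": "DISPENSE", "volume": 5.0}
        if last_result == "STOP":
            return {"event": "PUMP_IDLE"}
        return {"event": "NO_OP"}

def system_under_test(event):
    """Trivial stand-in for the software being tested."""
    if event.get("event") == "DISPENSE":
        return "STOP" if event["volume"] >= 5.0 else "AUTHORIZE"
    return "AUTHORIZE"

# Drive a short interactive exchange between simulator and system under test.
sim = PumpSimulator()
result = "AUTHORIZE"
for _ in range(3):
    stimulus = sim.next_input(result)
    result = system_under_test(stimulus)
    print(stimulus, "->", result)
```

A stimulator, by contrast, would inject its events (such as the warning message mentioned above) without regard to the previous results.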

4.2.5 Expected results

Documentation of expected results is necessary so that actual results can be evaluated to demonstrate test success or failure. The bottom line in any test program is the finding of defects and the demonstration that the software under test satisfies its requirements. Unless the expected results of each test are documented, there is no way to tell if the test has done what was intended by the test designer. Each test case is expected to provide the test data to be input for it. In the same way, each test case must provide the correct answer that should result from the input of the data.

Expected results may be of various sorts. The most common, of course, is simply the answer expected when a computation operates on a given set of numbers. Another type of expected result is the lighting or extinguishing of a light on a console. Many combinations of these two results may also occur, such as the appearance of a particular screen display, the starting of a motor, the initiation of an allied software system, or even the abnormal end of the system under test when a particular illegal function has been input, for example, an invalid password into a software security system.

It is the responsibility of the software quality practitioner to review the test documentation to ensure that each test has an associated set of expected results. Also present must be a description of any processing of the actual results so they can be compared with the expected results and a pass/fail determination made for the test.

4.2.6 Test analysis

Test analysis involves more than pass/fail determination. Analyses of the expected versus actual results of each test provide the pass or fail determination for that test. There may be some intermediate processing necessary before the comparison can be made, however. In a case in which previous real sales data are used to check out a new inventory system, some adjustment to the actual results may be necessary to allow for the dating of the input data or the absence of some allied software system that it was not cost effective to simulate. In any case, the pass/fail criteria are applied to the expected and received results, and the success of the test is determined.

Other beneficial analysis of the test data is possible and appropriate. As defects are found during the testing, or as certain tests continue to fail, clues may arise as to larger defects in the system or the test program than are apparent in just a single test case or procedure. As test data are analyzed over time, trends may appear that show certain modules to be defect prone and in need of special attention before the test program continues. Other defects that might surface include inadequate housekeeping of common data areas, inappropriate limits on input or intermediate data values, unstated but implied requirements that need to be added and specifically addressed, design errors, sections of software that are never used or cannot be reached, and erroneous expected results.

Software quality practitioners can play an important role in the review and analysis of test results. It is not as important that software quality practitioners actually perform the analysis as it is that they ensure adequate analysis by persons with the proper technical knowledge. This responsibility of software quality practitioners is discharged through careful review of the test results and conclusions as those results are published.
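
A small sketch of the kind of trend analysis described above: tallying recorded defects by module to spot defect-prone areas. The log entries and the threshold are made up for illustration.

```python
from collections import Counter

# Entries as they might appear in a defect history log (illustrative data).
defect_log = [
    {"id": "D-101", "module": "billing",   "week": 12},
    {"id": "D-102", "module": "billing",   "week": 12},
    {"id": "D-103", "module": "inventory", "week": 13},
    {"id": "D-104", "module": "billing",   "week": 14},
    {"id": "D-105", "module": "reports",   "week": 14},
]

per_module = Counter(entry["module"] for entry in defect_log)

# Flag modules whose defect count stands out; the threshold is arbitrary here.
threshold = 2
defect_prone = [m for m, count in per_module.items() if count > threshold]
print("Defects per module:", dict(per_module))
print("Modules needing special attention before testing continues:", defect_prone)
```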

4.2.7 Test tools

Many automated and manual test tools are available to assist in the various test activities. A major area for the application of tools is test data provision. Commercially available software packages can help in the creation and insertion of test data. Test data generators can, on the basis of parameters provided to them, create tables, strings, or files of fixed data. Those fixed data can, in turn, be input either by the test data generator itself or by any of several test input tools. General-purpose simulators can be programmed to behave like certain types of hardware or software systems or units. Stimulators that provide synchronous or asynchronous interrupts or messages are available. It is more likely, though, that most of these tools will be created in-house so they can be tailored to the test application at hand.

Another area in which tools are available is that of data recording. Large-scale event recorders often are used to record long or complicated interactive test data for future repeats of tests or for detailed test data analysis. In association with the data recorders are general- and specific-purpose data reduction packages. Large volumes of data are often sorted and categorized so that individual analyses can be made of particular areas of interest. Some very powerful analysis packages are commercially available, providing computational and graphic capabilities that can be of great assistance in the analysis of test results and trend determination.

Other valuable tools in the test area are path analyzers. These tools monitor the progress of the test program and track the exercising of the various paths through the software. While it is impossible to execute every path through a software system of more than a few steps, it is possible to exercise every decision point and each segment of code. (A segment in this context means the code between two successive decision points.) A path analyzer will show all software that has been executed at least once, point out any software that has not been exercised, and clearly indicate those code segments that cannot be reached at all (e.g., a subroutine that never gets called or a decision point that, for some reason, cannot take a branch).

Many of these tools are commercially available. Most applications of them, however, are in the form of tools specifically designed and built for a given project or application. Some development organizations will custom-build test completeness packages that software quality practitioners will use prior to acceptance testing or, perhaps, system release. Whatever their source or application, test tools are becoming more and more necessary as software systems grow in size, complexity, and criticality. Software quality practitioners should monitor the application of test tools to be sure that all appropriate use is being made of them and that they are being used correctly.
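
A minimal sketch of the test-data-generator idea: producing a repeatable file of fixed records from a few parameters. The record layout and file name are invented for illustration.

```python
import csv
import random

def generate_transactions(path, count, max_amount=500.00, seed=0):
    """Write `count` fixed test transactions to a CSV file for later input."""
    rng = random.Random(seed)            # seeded so the data set is repeatable
    with open(path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["txn_id", "item", "amount"])
        for i in range(count):
            writer.writerow([f"T{i:05d}",
                             rng.choice(["fuel", "milk", "bread"]),
                             round(rng.uniform(0.01, max_amount), 2)])

generate_transactions("test_transactions.csv", count=1000)
```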

4.2.8 Reviewing the test program

An important part of the software quality practitioner's activity is the review of the test program. As discussed in Section 3.3.3, review of the test documentation is important. In fact, the full test program should be reviewed regularly for status, sufficiency, and success. Such reviews are expected to be an integral part of the major phase-end reviews, as explained in Section 3.1.2. It is reasonable to hold less formal, in-process reviews of the test program as testing progresses and more of the software system is involved. The developing test documentation permits this review of the whole test approach as it is formulated. Without a documented approach to the problems of testing the software, the testing tends to become haphazard and undisciplined.

There is a strong tendency on the part of many project managers to commit to a firm delivery date. If the project gets behind schedule, the slippage is usually made up by shortening the test phase to fit the time remaining. The same happens in the case of budget problems. A well-planned and well-documented test program reduces the temptation to shorten the testing effort to make up for other problems. Having a software quality practitioner review and approve the documentation of the test program adds even more impetus to maintain the integrity of the program.

The documentation of the test program should extend all the way to the unit and module tests. While those tests tend to be more informal than later tests, they, too, should have test cases and specific test data recorded in, at least, the UDF. The results of the unit and module tests also should be recorded. Software quality practitioners will review the results of the unit and module tests to decide, in part, whether the modules are ready for integration. There may even be cases in which the module tests are sufficient to form part of the acceptance test.

4.3 Who does the testing?

Until recently, the common preference for who actually performed the testing favored the independent tester. While this is still valid in some very critical software situations, the definition of independent has been changing for most applications. On the basis of the concept that everyone is responsible for his or her own work and that this responsibility also applies to groups, the task of testing is being returned to the developers. That is not to say that programmers should test all their own work, but rather that the development group is responsible for the quality of the software that it delivers. A programmer should test only that software for which he or she has sole responsibility. Once the work of more than one person is to be tested, an independent tester, that is, someone other than the persons involved, should carry out the testing. Even at this level, though, the testers should come from within the development group responsible for the full set of software. Outside testers are necessary only at the full software system test level, when all the developers have an investment in the software.

Unit, module, and most integration testing are the proper tasks of the development organization. This is consistent with total quality concepts and the idea that persons (or, in this case, organizations) are responsible for the quality of their own work. The very early testing is in the form of debugging, and as the unit tests cover more of the software, they flow into module tests. Module tests, too, are primarily debugging in nature. Even the initial integration tests can be thought of as advanced debugging, although this is more of an organizational decision than an industry-wide convention. The characteristic of debugging that separates it from rigorous testing is that defects are generally fixed on the spot without much formal change control. At whatever time the organization institutes some level of change control, the testing is usually considered out of the debugging process and into rigorous testing.

That is not to say that there is no configuration control up to this point. Configuration control is already in effect on the documentation. Any change that affects the requirements or design must be processed formally to maintain the integrity of the documentation and the system as a whole. Changes that merely fix mistakes in the code can be made with minimum control at this stage, since the only elements involved are individual units or modules or small groups of two or three closely interfacing modules prior to actual integration. There should, however, be at least an audit trail of the changes maintained in the UDF. This trail will be used for error and defect history analysis as the development proceeds. Software quality practitioners should monitor the testing at the unit and module levels to be sure that such an audit trail is provided. Software quality practitioners are also an appropriate resource for the error and defect history analysis. Conclusions reached as a result of the analysis should be fed back, as improvements, into the development process.

As the time for full-scale integration and system testing arrives, a test team that is organizationally independent from the producers should take over the testing. Because the goal of the test program is to find defects, the objectivity of an independent test team greatly enhances the quality of the testing. The independent testers will perform the testing tasks all the way to user or acceptance testing. This team is probably the group that produced the formal test program documents. User or acceptance testing should be performed by the users themselves, preferably in the user's environment, to help ensure that the software meets the user's expectations as well as the officially approved requirements. Table 4.1 suggests appropriate testers for each type of testing. As each organization's test program matures, the identification of the testers for each type of test will be based on the organization's experience and testing approach.

Table 4.1 (refer to the Horch e-book)

Regression tests are conducted by many different persons involved in the SLC. The developers will regressively test changes to modules and subsystems as they make changes in response to trouble reports generated during formal testing or maintenance. The test team also will have occasion to use regression testing as they verify that new modules or subsystems do not adversely affect the system as a whole. Software quality practitioners can even use regression testing techniques as they perform some of their audit and review tasks.

The software quality practitioner's primary role in the testing process, aside from reviewing and approving the test documents, is to monitor the testing as it progresses. The software quality practitioner will audit the tests against their plans and procedures and report the status of the test program to management. There are added benefits if software quality practitioners have been doing more than just cursory reviews of the documentation as it has been produced. The cross-fertilization of the technical knowledge of the system and the test planning for the system can produce better results in both areas.
