Unit-III
Overview, Organizing and Developing the Test Process
Syllabus
The Seven-Step Software Testing Process: Overview of a Software Testing Process - Organizing for Testing - Workbench - Procedure - Developing the Test Plan - Workbench - Procedure
Overview of a Software Testing Process
Advantages of Following a Seven-Step Process
The seven-step process incorporates the best aspects of many different processes:
• Testing is consistent.
• Testing can be taught.
• Test processes can be improved.
• Test processes are manageable.
The Cost of Computer Testing
The cost of removing system defects prior to the system going into production includes:
• Building the defect into the system
• Identifying the defect’s existence
• Correcting the defect
• Testing to determine that the defect has been removed
Defects uncovered after the system goes into operation
generate the following costs:
■Specifying and coding the defect into the system
■Detecting the problem within the application system
■Reporting the problem to the project manager and/or user
■Correcting the problems caused by the defect
■Operating the system until the defect is corrected
■Correcting the defect
■Testing to determine that the defect no longer exists
■Integrating the corrected program(s) into production
Quantifying the Cost of Removing Defects
The causes of the defects built into application systems include:
• Improperly interpreted requirements
• Users specify wrong requirements
• Requirements are incorrectly recorded
• Design specifications are incorrect
• Program specifications are incorrect
• Program coding errors
• Program structural or instruction errors
• Data entry errors
• Testing errors
• Error-correction mistakes
• A corrected condition causes another defect
Reducing the Cost of Testing
• The economics of computer testing clearly demonstrate that the way to reduce the cost of defects is to locate those defects as early in the system development life cycle as possible.
• This involves beginning testing during the requirements phase of the life cycle and continuing testing throughout the life cycle. The objective of testing then becomes detecting errors as early in the life cycle as possible.
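As a minimal illustration of this economic argument, the following Python sketch compares the relative cost of the same 100 defects found early versus late in the life cycle. The cost multipliers are assumptions chosen for illustration only; actual ratios vary by organization, but the steeply rising shape is typical.

```python
# Illustrative only: assumed relative cost to remove one defect,
# keyed by the phase in which it is found.
REMOVAL_COST = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "production": 250,
}

def total_removal_cost(defects_found: dict) -> int:
    """Sum the relative cost of all defects, weighted by the phase found."""
    return sum(REMOVAL_COST[phase] * count
               for phase, count in defects_found.items())

# The same 100 defects, found early versus found late:
early = {"requirements": 60, "design": 30, "coding": 10}
late = {"testing": 40, "production": 60}

print(total_removal_cost(early))  # 310 relative cost units
print(total_removal_cost(late))   # 17000 relative cost units
```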
The Seven-Step Software Testing Process
A brief overview of the seven-step software testing process follows:
1. Organizing for testing
a. Define test scope. Determine which type of testing is to be
performed.
b. Organize the test team. Determine, based on the type of
testing to be performed, who should be assigned to the test team.
c. Assess development plan and status. This is a prerequisite to
building the test plan that will be used to evaluate the software
implementation plan. During this step, testers will challenge the
completeness and correctness of the development plan. Based on
the extensiveness and completeness of the project plan, the testers
will be in a position to estimate the amount of resources they will
need to test the implemented software solution.
2. Developing the test plan
a. Perform risk analysis.
Identify the test risks.
b. Write the test plan.
Forming the plan for testing will follow the same pattern as
any software planning process.
The structure of all plans should be the same, but the content
will vary based on the degree of risk the testers perceive as
associated with the software being developed.
3. Verification testing
a. Test software requirements.
Testers, through verification, must determine that the
requirements are accurate and complete and that they do not
conflict with one another.
b. Test software design.
The testers are concerned that the design will in fact achieve
the objectives of the project as well as being effective and
efficient on the designated hardware.
c. Test software construction.
As the construction becomes more automated, less testing will
be required during this phase. However, if software is
constructed by a manual coding process, it is subject to error
and should be verified.
4. Validation testing
a. Perform validation testing. This involves the testing of code in a dynamic state. The approach, methods, and tools specified in the test plan will be used to validate that the executable code meets the stated software requirements and the structural specifications of the design.
b. Record test results. Document the results achieved through testing.
5. Analyzing and reporting test results
a. Analyze the test results. Examine the results of testing to determine where action is required because of variance between “what is” and “what should be.”
b. Develop test reports. Test reporting is a continuous process. It may be both oral and written. It is important that defects and concerns be reported to the appropriate parties as early as possible, so that they can be corrected at the lowest possible cost.
c. Test software changes. While this is shown as step 6 in the context of performing maintenance after the software is implemented, the concept is also applicable to changes made throughout the implementation process.
6. Acceptance and operational testing
a. Perform acceptance testing. Acceptance testing enables users of the software to evaluate the applicability and usability of the software in performing their work.
b. Test software installation. Once the test team has confirmed that the software is ready for production, the ability to execute that software in a production environment should be tested.
7. Post-implementation analysis
• Test improvements can best be achieved by evaluating the effectiveness of testing at the end of each software test assignment.
• Although this assessment is primarily performed by the testers, it should involve the developers, users of the software, and quality assurance professionals, if that function exists in the IT organization.
Objectives of the Seven-Step Process
The following are the objectives of the seven-step testing process:
1. Organizing for testing
2. Developing the test plan
3. Verification testing
4. Validation testing
5. Analyzing and reporting test results
6. Acceptance and operational testing
7. Post-implementation analysis
Customizing the Seven-Step Process
To customize the process, the following steps must be undertaken:
1. Understand the seven-step process.
2. Customize for “who” tests.
3. Customize for the size and type of system to be tested.
4. Customize for “what” to test.
5. Customize for in-house developed and/or contracted software.
6. Customize for vocabulary.
7. Integrate testing into the development process.
Managing the Seven-Step Process
The test manager should also manage by process, by facts, and by
results.
• Manage by process.
• Manage by facts. The test manager needs to develop metrics in order to monitor quantitatively some key aspects of testing. This is considered the dashboard for test management. The test manager should select between three and eight key metrics to help manage testing (a small sketch of such a dashboard follows this section). These metrics might include:
■ Budget
■ Schedule
■ Requirements tested/not tested
■ Status of testing, including such things as requirements tested that were
implemented incorrectly
■ Requirements tested and not corrected as of a certain time period (for
example, 10 days)
■ Status of defects
• Manage by results. The process of testing is performed in
order to accomplish specific objectives. For many
organizations, the objectives will be the criteria for software
acceptance—for example, the following customer needs:
■ Transactions can be processed by someone with X skills.
■ Ninety-six percent of all input transactions are acceptable
for processing.
■ Changes to a product’s price can be made within 24 hours.
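A minimal “manage by facts” sketch follows; the metric fields mirror the list above, while the thresholds and warning rules are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass

@dataclass
class TestDashboard:
    """A minimal test-management dashboard with a handful of key metrics."""
    budget_spent_pct: float       # budget
    schedule_elapsed_pct: float   # schedule
    requirements_total: int       # requirements tested/not tested
    requirements_tested: int
    defects_open: int             # status of defects
    defects_closed: int

    @property
    def requirements_tested_pct(self) -> float:
        return 100.0 * self.requirements_tested / self.requirements_total

    def flags(self) -> list:
        """Simple warning flags the test manager can act on (assumed rules)."""
        warnings = []
        if self.budget_spent_pct > self.schedule_elapsed_pct + 10:
            warnings.append("spending ahead of schedule")
        if self.requirements_tested_pct < self.schedule_elapsed_pct:
            warnings.append("testing behind schedule")
        return warnings

board = TestDashboard(budget_spent_pct=70, schedule_elapsed_pct=50,
                      requirements_total=120, requirements_tested=48,
                      defects_open=14, defects_closed=61)
print(board.requirements_tested_pct)  # 40.0
print(board.flags())  # ['spending ahead of schedule', 'testing behind schedule']
```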
Using the Tester’s Workbench with the Seven-Step Process
■ Overview. A brief description of the step. This expands on the overview given earlier in this chapter for each step.
■ Objective. A detailed description of the purpose of the step, which you can use to measure your progress at each step.
■ Concerns. Specific challenges that testers will have to
overcome to complete the step effectively.
■ Workbench. A description of the process that the testers
should follow to complete the step.
■ Input. The documents, information, and skills needed to complete
the step.
■ Do procedures. Detailed, task-by-task procedures that testers
must follow to perform the step.
■ Check procedures. A checklist that testers use to verify that they
have performed a step correctly. These procedures will be related
to the test’s objective.
■ Output. The deliverables that the testers must produce at the
conclusion of each step.
■ Guidelines. Suggestions for performing each step more effectively
and for avoiding problems.
Workbench Skills
• If the people involved in the tester’s workbenches do not possess the basic testing skills in the CBOK, one or more of the following recommendations should be pursued to improve testing skills:
■ Attend a basic course on software testing.
■ Take the necessary courses from the Quality Assurance Institute to prepare for the Certified Software Tester Examination.
2. Organizing for Testing
Workbench
• The workbench input is the current
documentation for the software system being
tested. Five tasks are listed, but some of the
tasks may have been completed prior to
starting the first task. The output from this step
is an organized test team, ready to begin
testing.
Input:
• The following two inputs are required to
complete this step:
Project documentation. This includes the
project plan, objectives, scope, and defined
outputs.
Software development process. This includes
the procedures and standards to be followed
during the project’s implementation.
Do Procedures
The following five tasks are recommended for organizing the testing
process:
1. Appoint the test manager. If testing is part of the in-house
development effort, the project leader should determine who is
responsible for testing. If testing is performed by independent
testers, IT management should appoint the test manager.
2. Define the scope of testing. The test manager defines the scope of
testing, although all or part of the scope may be defined by testing
standards.
3. Appoint the test team. The test manager, project manager, or IT
management should appoint the test team.
4. Verify the development documentation. The test manager should
verify that adequate development documentation is available to
perform effective testing.
5. Validate the test estimate and project status process. The test
estimate can be developed by either the test manager or the project
manager.
Task 1: Appoint the Test Manager
• The test manager has the following
responsibilities:
1. Define the scope of testing
2. Appoint the test team
3. Define the testing process and the deliverables
produced
4. Write/oversee the test plan
5. Analyze test results and write the test report(s)
Task 2: Define the Scope of Testing
The scope of testing may already be defined in the test mission. For example, if the mission states that testers are to ensure the system meets the user’s needs, the test manager does not have to repeat that in the test scope. Likewise, if testers are to assist users in developing and implementing an acceptance test plan, that would not have to be defined in the scope of testing for a specific project.
If the test mission is not specific about testing scope, and/or there are specific objectives to be accomplished from testing, the test manager should define that scope. It is important to understand the scope of testing prior to developing the test plan.
Task 3: Appoint the Test Team
Extensive “desk checking” by the individual who developed the work is not
a cost-effective testing method. The disadvantages of a person checking his
or her own work include the following:
■ Misunderstandings will not be detected, because the checker will assume that what he or she was told is correct.
■ Improper use of the development process may not be detected, because the individual may not understand the process.
■ The individual may be “blinded” into accepting erroneous test results because he or she falls into the same trap during testing that led to the introduction of the defect in the first place.
■ IT staff are optimistic in their ability to do defect-free work and thus sometimes underestimate the need for extensive testing.
■ Without a formal division between software development and software testing, an individual may be tempted to improve the system structure and documentation rather than allocate that time and effort to the testing process.
The test team can be organized using one of the following approaches:
• Internal team approach
• External team approach
• Non-IT team approach
• Combination team approach
Task 4: Verify the Development
Documentation
Development documentation serves to:
■ Assist in planning and managing resources
■ Help to plan and implement testing procedures
■ Help to transfer knowledge of software development throughout the life cycle
■ Promote common understanding and expectations about the system within the organization and, if the software is purchased, between the buyer and seller
■ Define what is expected and verify that it is what is delivered
■ Provide managers with technical documents to review at significant development milestones, to determine that requirements have been met and that resources should continue to be expended
Development Phases
1. Initiation: Feasibility studies, cost/benefit analyses, and the
documentation prepared in this phase are determined by the
organization’s procedures and practices.
2. Development: The requirements for the software are determined and
then the software is defined, specified, programmed, and tested. The
following documentation is prepared during the four stages of this
phase:

❖ Definition: During the definition stage, the requirements for the software and documentation are determined. The functional requirements document and the data requirements document should be prepared.
❖ Design: During this stage, the design alternatives, specific requirements, and functions to be performed are analyzed and a design is specified. Documents that may be prepared include the system/subsystem specification, program specification, database specification, and test plan.
❖ Programming: During the programming stage, the software is coded and debugged. Documents that should be prepared during this stage include the user manual, operations manual, program maintenance manual, and test plan.
❖ Testing: During the test stage, the software and related documentation are evaluated and the test analysis report is prepared.

3. Operation: During the operation phase, the software is maintained, evaluated, and changed as additional requirements are identified.
The 14 documents needed for system development, maintenance, and operations are:
1. Project request
2. Feasibility study
3. Cost/benefit analysis
4. Software summary
5. Functional requirements
6. Data requirements
7. System/subsystem specifications
8. Program specification
9. Database specifications
10. User manual
11. Operations manual
12. Program maintenance manual
13. Test plan
14. Test analysis report
Measuring Project Documentation
Needs
Originality required. The uniqueness of the application within the
organization.
Degree of generality. The amount of rigidity associated with the
application and the need to handle a variety of situations during
processing.
Span of operation. The percentage of total corporate activities
affected by the system.
Change in scope and objective. The frequency of expected
change in requirements during the life of the system.
Equipment complexity. The sophistication of the hardware and
communications lines needed to support the application.
Personnel assigned. The number of people involved in developing
and maintaining the application system.
Developmental cost. The total dollars required to develop the
application.
Criticality. The importance of the application system
to the organization.
Average response time to program change. The
average amount of time available to install a change to
the application system.
Average response time to data input. The average
amount of time available to process an application
transaction.
Programming language. The language used to
develop the application.
Concurrent software development. Other
applications and support systems that need to be
developed concurrently to fulfill the total mission.
Determining What Documents Must Be
Produced
Determining the Completeness of
Individual Documents
• The 12 criteria used to evaluate the completeness of a document are as follows:
1. Content
2. Audience
3. Redundancy
4. Flexibility
5. Size
6. Combining and expanding document types
7. Format
8. Content sequence
9. Documenting multiple programs or multiple files
10. Section titles
11. Flowcharts and decision tables
12. Forms
Determining Documentation Timeliness
❖ Use the documentation to change the application.
❖ Compare the code with the documentation.
❖ Confirm documentation timeliness with the documentation preparers, asking:
   • Is this documentation 100 percent representative of the application in operation?
   • Is the documentation changed every time a system change is made?
   • Do the individuals who change the system rely on the documentation as correct?
❖ Confirm the timeliness of documentation with end users.
Task 5: Validate the Test Estimate and Project
Status Reporting Process
There are three general concerns regarding available time and
resources for testing:
Inaccurate estimate: The estimate for resources in time
will not be sufficient to complete the project as specified.
Inadequate development process: The tools and
procedures will not enable developers to complete the
project within the time constraints.
Incorrect status reporting: The project leaders will not
know the correct status of the project during early
developmental stages and thus cannot take the necessary
corrective action in time to meet the scheduled completion
date.
Validating the Test Estimate
• Some reasons for not obtaining a good estimate include:
1. A lack of understanding of the process of software development
and maintenance
2. A lack of understanding of the effects of various technical and
management constraints
3. A view that each project is unique, which inhibits project-to-project
comparisons
4. A lack of historic data against which the model can be checked
5. A lack of historic data for calibration
6. An inadequate definition of the estimate’s objective (whether it is
intended as a project management tool or purely to aid in making a
go-ahead decision) and at what stage the estimate is required so
that inputs and outputs can be chosen appropriately
7. Inadequate specifications of the scope of the estimate (what is
included and what is excluded)
8. An inadequate understanding of the premises on which the
estimate is based
Strategies for Software Cost Estimating
• There are five commonly used methods for estimating the cost
of developing and maintaining a software system:
1. Seat-of-the-pants method
2. Constraint method
3. Percentage-of-hardware method
4. Simulation method
5. Parametric modeling method
Parametric Models
• Parametric models can be divided into three classes:
regression, heuristic, and phenomenological.
1. Regression models: The quantity to be estimated is mathematically related to a set of input parameters. The parameters of the hypothesized relationship are arrived at by statistical analysis and curve fitting on an appropriate historical database (a small sketch follows this list).
2. Heuristic models: In a heuristic model, observation and interpretation of
historical data are combined with supposition and experience.
Relationships between variables are stated without justification. The
advantage of heuristic models is that they need not wait for formal
relationships to be established that describe how the cost-driving
variables are related.
3. Phenomenological models: The phenomenological model is based on the hypothesis that the software development process can be explained in terms of some more widely applicable process or idea.
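As a concrete illustration of the regression-model class, the following Python sketch fits the common power-law form effort = a * size^b to a hypothetical project history by least squares in log-log space. The data points and the chosen form are assumptions for illustration; a real model would be calibrated on the organization’s own history.

```python
import math

# Hypothetical history: (size in KLOC, actual effort in person-months).
history = [(5, 12), (12, 30), (20, 58), (45, 140), (80, 270)]

# Fit effort = a * size**b by least squares in log-log space.
xs = [math.log(size) for size, _ in history]
ys = [math.log(effort) for _, effort in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
b_den = sum((x - mean_x) ** 2 for x in xs)
b = b_num / b_den
a = math.exp(mean_y - b * mean_x)

def estimate_effort(size_kloc: float) -> float:
    """Predicted effort (person-months) for a project of the given size."""
    return a * size_kloc ** b

print(round(estimate_effort(30), 1))  # estimate for a 30-KLOC project
```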
• Most of the estimating models follow a similar
pattern, based on the following six steps.
1. Estimate the software size.
2. Convert the size estimate (function points or lines
of code) to an estimate of the person-hours
needed to complete the test task.
3. Adjust the estimate for special project
characteristics
4. Divide the total estimate into the different project
phases.
5. Estimate the computer time and non-technical
labor costs.
6. Sum the costs.
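The following is a minimal Python sketch of the six-step pattern above, using function points as the size measure. Every rate, adjustment factor, and phase split in it is an illustrative assumption, not a calibrated model.

```python
# A sketch of the six-step estimating pattern; all numeric factors
# below are assumed for illustration only.

def estimate_project(function_points: int,
                     hours_per_fp: float = 8.0,        # step 2: size -> hours
                     complexity_factor: float = 1.2,   # step 3: adjustment
                     labor_rate: float = 60.0,         # cost per person-hour
                     computer_cost: float = 5_000.0):  # step 5: non-labor cost
    # Steps 1-3: size (given) converted and adjusted to person-hours.
    person_hours = function_points * hours_per_fp * complexity_factor
    # Step 4: divide the total across phases (assumed distribution).
    phase_split = {"requirements": 0.15, "design": 0.25,
                   "coding": 0.35, "testing": 0.25}
    phases = {phase: round(person_hours * frac)
              for phase, frac in phase_split.items()}
    # Steps 5-6: add non-labor costs and sum.
    total_cost = person_hours * labor_rate + computer_cost
    return phases, total_cost

phases, cost = estimate_project(function_points=200)
print(phases)  # person-hours per phase
print(cost)    # total estimated cost: 120200.0
```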
Testing the Validity of the Software Cost
Estimate
1. Validate the reasonableness of the
estimating model.
2. Validate that the model includes all the
needed factors.
3. Verify the correctness of the
cost-estimating model estimate.
Validate the Reasonableness of the Estimating
Model
Validate That the Model Includes All the
Needed Factors
• Factors that influence the software system include:
1. Size of the software
2. Percentage of the design and/or code that is new
3. Complexity of the software
4. Difficulty of implementing the software requirements, for example:
   Operating systems
   Self-contained real-time projects
   Standalone, non-real-time applications
   Modifications to an existing software system
5. Quality
6. Languages to be used
7. Security classification
8. Volatility of the requirements, including:
   Amount of change expected in the requirements
   Amount of detail omitted from the requirements specification
   Concurrent development of associated hardware, causing changes in the software specification
   Unspecified target hardware
Organization-dependent factors include the following:
• Project schedule
• Personnel (technical competence, non-technical staff)
• Development environment
• Development machine
• Availability of associated software and hardware
• Software tools and techniques used during system design and development
• Resources not directly attributable to technical aspects of the project
• Labor rates
• Inflation
Verify the Correctness of the Cost-Estimating Model Estimate
1. Recalculate the estimate:
   Validate that the input was entered correctly.
   Validate that the input was reasonable.
   Validate that the mathematical computation was performed correctly.
2. Compare the produced estimate to similar projects.
3. Apply the prudent-person test.
4. Use redundancy in software cost estimating, drawing on:
   Organization-developed estimating models
   Estimating models included in system development methodologies
   Software packages for developing software estimates
   Function points in estimating software costs
Calculating the Project Status Using
a Point System
• Overview of the Point Accumulation Tracking
System
• Typical Methods of Measuring Performance
• Using the Point System
• Extensions
• Rolling Baseline
• Reports
3. Developing the Test Plan
• The concerns testers face in ensuring that the test plan will be
complete include the following:
❖ Not enough training: The majority of IT personnel have not
been formally trained in testing, and only about half of full-time
independent testing personnel have been trained in testing
techniques. This causes a great deal of misunderstanding and
misapplication of testing techniques.
❖ Us-versus-them mentality: This common problem arises when
developers and testers are on opposite sides of the testing issue.
Often, the political infighting takes up energy, sidetracks the
project, and negatively impacts relationships.
❖ Lack of testing tools: IT management may consider testing
tools to be a luxury. Manual testing can be an overwhelming
task. Although more than just tools are needed, trying to test
effectively without tools is like trying to dig a trench with a
spoon.
❖ Lack of management understanding/support of testing:
If support for testing does not come from the top, staff will
not take the job seriously and testers’ morale will suffer.
Management support goes beyond financial provisions;
management must also make the tough calls to deliver the
software on time with defects or take a little longer and do
the job right.
❖ Lack of customer and user involvement: Users and
customers may be shut out of the testing process, or
perhaps they don’t want to be involved. Users and
customers play one of the most critical roles in testing:
making sure the software works from a business
perspective.
❖ Not enough time for testing: This is a common complaint. The challenge is to prioritize the plan to test the right things in the given time.
❖ Over-reliance on independent testers: Sometimes called the
“throw it over the wall” syndrome. Developers know that
independent testers will check their work, so they focus on
coding and let the testers do the testing. Unfortunately, this
results in higher defect levels and longer testing times.
❖ Rapid change: In some technologies, especially rapid
application development (RAD), the software is created and/or
modified faster than the testers can test it. This highlights the
need for automation, but also for version and release
management
❖ Testers are in a lose-lose situation: On the one hand, if the
testers report too many defects, they are blamed for delaying the
project. Conversely, if the testers do not find the critical defects,
they are blamed for poor quality.
❖ Having to say no: The single toughest dilemma for testers is to
have to say, “No, the software is not ready for production.”
Nobody on the project likes to hear that, and frequently, testers
succumb to the pressures of schedule and cost.
Do Procedures
• The following six tasks should be completed
during the execution of this step:
1. Profile the software project.
2. Understand the software project’s risks.
3. Select a testing technique.
4. Plan unit testing and analysis.
5. Build the test plan.
6. Inspect the test plan
Task 1: Profile the Software Project
This task can be divided into the following two
subtasks:
1. Conduct a walkthrough of the customer/user
areas.
2. Develop a profile of the software project.
Conducting a Walkthrough of the Customer/User
Area
Developing a Profile of the Software Project
The following profile information is helpful in preparing for test planning:
1. Project objectives
2. Development process
3. Customers/users
4. Project deliverables
5. Cost/schedule
6. Project constraints
7. Development staff competency
8. Legal/industry issues
9. Implementation technology
10. Databases built/used
11. Interfaces to other systems
12. Project statistics
Task 2: Understand the Project Risks
1. Reliability:
The level of accuracy and completeness expected in the
operational environment is established.
Data integrity controls are implemented in accordance with
the design.
Manual, regression, and functional tests are performed to
ensure the data integrity controls work.
The completeness of the system installation is verified.
The accuracy requirements are maintained as the applications
are updated
2. Authorization:
The rules governing the authorization of transactions are defined.
The application is designed to identify and enforce the authorization
rules.
The application implements the authorization rules.
Unauthorized data changes are prohibited during the installation
process.
The method and rules for authorization are preserved during
maintenance.
3. File integrity:
Requirements for file integrity are defined.
The design provides for the controls to ensure the integrity of
the file.
The specified file integrity controls are implemented.
The file integrity functions are tested to ensure they perform
properly.
The integrity of the files is preserved during the maintenance
phase.
4. Audit trail:
The requirements to reconstruct processing are defined.
The audit trail requirements are incorporated into the system.
The audit trail functions are tested to ensure the appropriate data is
saved.
The audit trail of installation events is recorded.
The audit trail requirements are updated during systems maintenance.
5. Continuity of processing:
The impact of each system failure is defined.
The contingency plan and procedures have been written.
Recovery testing verifies that the contingency plan functions
properly.
The integrity of the previous systems is ensured until the integrity of
the new system is verified.
The contingency plan is updated and tested as system requirements
change.
6. Service level:
The desired service level for all aspects of the system is
defined.
The method to achieve the predefined service levels is
incorporated into the system design.
The programs and manual systems are designed to achieve the
specified service level.
Stress testing is conducted to ensure that the system can
achieve the desired service level when both normal and above
normal volumes of data are processed.
A fail-safe plan is used during installation to ensure service is
not disrupted.
The predefined service level is preserved as the system is
maintained.
7. Access control:
Access to the system is defined.
The procedures to enforce the access rules are designed.
The defined security procedures are implemented.
Compliance tests are utilized to ensure that the security procedures function in a
production environment.
Access to the computer resources is controlled during installation.
The procedures controlling access are preserved as the system is updated.
8. Methodology:
The system requirements are defined and documented in compliance with the
development methodology.
The system design is executed in accordance with the design methodology.
The programs are constructed and documented in compliance with the
programming methodology.
Testing is performed in compliance with the test methodology.
The integration of the application system in a production environment complies
with the installation methodology.
System maintenance is performed in compliance with the maintenance
methodology.
9. Correctness:
• The user has fully defined the functional specifications.
• The developed design conforms to the user requirements.
• The developed program conforms to the system design specifications.
• Functional testing ensures that the requirements are properly implemented.
• The proper programs and data are placed into production.
• The user-defined requirement changes are properly implemented in the operational system.
10. Ease of use:
• The usability specifications for the application system are defined.
• The system design attempts to optimize the usability of the implemented
requirements.
• The program optimizes ease of use by conforming to the design.
• The relationship between the manual and automated system is tested to
ensure the application is easy to use.
• The usability instructions are properly prepared and disseminated to the
appropriate individuals.
• As the system is maintained, its ease of use is preserved.
The test team should investigate the system characteristics to evaluate the potential magnitude of the risk, as follows:
1. Define what meeting project objectives means.
2. Understand the core business areas and processes.
3. Assess the severity of potential failures.
4. Identify the system components.
5. Identify, prioritize, and estimate the test resources required.
6. Develop validation strategies and testing plans for all converted or replaced systems and their components.
7. Identify and acquire automated test tools and write test scripts.
8. Define requirements for the test facility.
9. Address implementation schedule issues.
10. Address interface and data exchange issues.
11. Evaluate contingency plans.
12. Identify vulnerable parts of the system and processes operating outside the information resource management area.
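To make step 5 above (prioritizing test resources) concrete, here is a minimal risk-scoring sketch that ranks the test factors discussed earlier by an assumed likelihood-times-impact exposure. The factor scores shown are illustrative; a real assessment would assign them during risk analysis.

```python
# Rank test factors by exposure = likelihood * impact (1 = low, 5 = high).
# The scores below are illustrative assumptions only.
risks = {
    "reliability":    {"likelihood": 4, "impact": 5},
    "authorization":  {"likelihood": 2, "impact": 5},
    "file integrity": {"likelihood": 3, "impact": 4},
    "audit trail":    {"likelihood": 2, "impact": 3},
    "service level":  {"likelihood": 4, "impact": 3},
    "ease of use":    {"likelihood": 3, "impact": 2},
}

ranked = sorted(risks.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)

# Highest-exposure factors get test resources first.
for factor, score in ranked:
    print(f"{factor:15} exposure = {score['likelihood'] * score['impact']}")
```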
Task 3: Select a Testing Technique
Structural System Testing Techniques
• Stress testing
• Execution testing
• Recovery testing
• Operations testing
• Compliance testing
• Security testing
• Stress Testing
Stress testing is designed to determine whether the system can function when subjected to larger volumes than would normally be expected. Areas stressed include input transactions, internal tables, disk space, output, communications, and computer capacity. If the application functions adequately under stress testing, testers can assume that it will function properly with normal volumes of work.
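A minimal sketch of the idea, assuming a hypothetical process_transaction entry point: submit well above normal transaction volume through a thread pool and measure the failure rate. In a real stress test, the volumes, the areas stressed, and the pass criteria would come from the test plan.

```python
import concurrent.futures

def process_transaction(txn_id: int) -> bool:
    """Stand-in for the system under test; replace with the real entry point."""
    return txn_id >= 0  # trivially succeeds in this sketch

def stress_test(volume: int, workers: int = 50) -> float:
    """Submit `volume` transactions concurrently; return the failure rate."""
    failures = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for ok in pool.map(process_transaction, range(volume)):
            if not ok:
                failures += 1
    return failures / volume

# Drive the system well above the expected peak volume:
print(stress_test(volume=10_000))  # 0.0 in this sketch
```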
• Execution Testing
Specific objectives of execution testing include:
• Determining the performance of the system structure
• Verifying the optimum use of hardware and software
• Determining response time to online requests
• Determining transaction processing turnaround time
• Recovery Testing
Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up to the point of failure.
Specific objectives of recovery testing include the following:
■ Adequate backup data is preserved.
■ Backup data is stored in a secure location.
■ Recovery procedures are documented.
■ Recovery personnel have been assigned and trained.
■ Recovery tools have been developed and are available.
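The reprocessing idea can be illustrated with a minimal checkpoint-and-journal sketch; all names and values here are hypothetical.

```python
# Revert to a state of known integrity (the checkpoint), then replay
# journaled transactions up to the point of failure.

checkpoint_balance = 1_000      # last state of known integrity
journal = [+250, -100, +75]     # transactions logged since the checkpoint

def recover(checkpoint: int, journaled_txns: list) -> int:
    """Rebuild current state by replaying the journal onto the checkpoint."""
    state = checkpoint
    for txn in journaled_txns:
        state += txn
    return state

assert recover(checkpoint_balance, journal) == 1_225
print("recovered state matches pre-failure state")
```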
• Operations Testing
After an application has been tested, it is integrated
into the operating environment.
At this point, the application will be executed using
the normal operations staff, procedures, and
documentation.
Operations testing is designed to verify prior to
production that the operating procedures and staff can
properly execute the application.
Specific objectives of operations testing include:
■ Determining the completeness of computer operator
documentation
■ Ensuring that the necessary support mechanisms,
such as job control language, have been prepared and
function properly
■ Evaluating the completeness of operator training
• Compliance Testing
Specific objectives of compliance testing include the following:
■ Determining that systems development and
maintenance methodologies are followed
■ Ensuring compliance to departmental standards,
procedures, and guidelines
■Evaluating the completeness and reasonableness of
system documentation
Security Testing
The level of security required depends on the risks associated with compromise or
loss of information. Security testing is designed to evaluate the adequacy of
protective procedures and countermeasures.
Specific objectives of security testing include the following:
■ Determining that adequate attention has been devoted to identifying security risks
■Determining that a realistic definition and enforcement of access to the system has
been implemented
■Determining that sufficient expertise exists to perform adequate security testing
■Conducting reasonable tests to ensure that the implemented security measures
function properly
Functional System Testing Techniques
• Requirements testing
• Regression testing
• Error-handling testing
• Manual-support testing
• Intersystem testing
• Control testing
• Parallel testing
Task 4: Plan Unit Testing and Analysis
• Functional Testing and Analysis
Functional Analysis
Functional analysis seeks to verify, without execution, that the
code faithfully implements the specification. Various
approaches are possible. In the proof-of-correctness approach,
a formal proof is constructed to verify that a program correctly
implements its intended function. In the safety-analysis
approach, potentially dangerous behavior is identified and
steps are taken to ensure such behavior is never manifested.
• Functional Testing
The goal is to test for each software feature of the
specified behavior, including the input domains, the
output domains, categories of inputs that should
receive equivalent processing, and the processing
functions themselves.
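As an illustration of “categories of inputs that should receive equivalent processing,” the following sketch tests a hypothetical discount rule with one representative input per equivalence class, plus the class boundaries. The rule itself is assumed for the example.

```python
def discount_rate(age: int) -> float:
    """Hypothetical specified behavior: child, adult, and senior rates."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:
        return 0.50
    if age < 65:
        return 0.00
    return 0.30

# (input, expected) pairs: one case per equivalence class plus boundaries.
cases = [(0, 0.50), (17, 0.50), (18, 0.00),
         (64, 0.00), (65, 0.30), (90, 0.30)]
for age, expected in cases:
    assert discount_rate(age) == expected, f"failed for age={age}"
print("all equivalence-class cases passed")
```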
Structural Testing and Analysis

• Structural Analysis
In structural analysis, programs are analyzed without
being executed. The techniques resemble those used
in compiler construction. The goal here is to identify
fault-prone code, to discover anomalous
circumstances, and to generate test data to cover
specific characteristics of the program’s structure.
• Structural Testing
Structural testing is a dynamic technique in which test
data selection and evaluation are driven by the goal of
covering various characteristics of the code during
testing. Assessing such coverage involves the
instrumentation of the code to keep track of which
characteristics of the program text are actually
exercised during testing. (A small illustration of statement versus branch coverage follows the list below.)
• Statement testing
• Branch Testing
• Conditional Testing
• Expression Testing
• Path Testing
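The following small sketch illustrates the difference between statement testing and branch testing on a toy function; it is an illustration of the coverage idea, not a coverage tool.

```python
def classify(n: int) -> str:
    label = "small"
    if n > 100:        # branch A
        label = "large"
    if n % 2 == 0:     # branch B
        label += "-even"
    return label

# Statement coverage: this single case executes every statement...
assert classify(200) == "large-even"

# ...but branch coverage also requires the false outcome of each branch:
assert classify(7) == "small"         # A false, B false
assert classify(101) == "large"       # A true,  B false
assert classify(8) == "small-even"    # A false, B true
print("statement and branch coverage both achieved")
```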
Error-Oriented Testing and Analysis
• Error-based testing seeks to demonstrate that certain errors
have not been committed in the programming process.
Error-based testing can be driven by histories of programmer
errors, measures of software complexity, knowledge of
error-prone syntactic constructs, or even error guessing.
• Fault Estimation
• Domain Testing
• Perturbation testing.
Task 5: Build the Test Plan
The development of an effective test plan involves the following four steps:
1. Set the test objectives.
2. Develop a test matrix.
3. Define test administration.
4. Write the test plan.
Set the Test Objectives
• Set objectives to minimize the project risks.
• Brainstorm to identify project objectives.
• Relate objectives to the testing policy, if established.
Developing a Test Matrix
Defining Test Administration
• The administrative component of the test plan identifies the
schedule, milestones, and resources needed to execute the test
plan as illustrated in the test matrix. This cannot be undertaken
until the test matrix has been completed. Prior to developing
the test plan, the test team has to be organized. This initial test
team is responsible for developing the test plan and then
defining the administrative resources needed to complete the
plan. Thus, part of the plan will be executed as the plan is
being developed; that part is the creation of the test plan,
which itself consumes resources.
Writing the Test Plan
• The test plan can be as formal or informal a document as the organization’s culture dictates. When the test team has completed Work Papers 8-1 through 8-10, they have completed the test plan. The test plan can either be the ten work papers themselves or the information on those work papers transcribed into a more formal test plan document. Generally, if the test team is small, the work papers are more than adequate; as the test team grows, it is better to formalize the test plan.
Task 6: Inspect the Test Plan
• Inspection Concerns
• Inspections may be perceived to delay the start of testing.
• There is resistance to accepting the inspection role.
• Space may be difficult to obtain for conducting inspections.
• Change implementers may resent having their work inspected prior to testing.
• Inspection results may affect individual performance appraisals.
• Products/Deliverables to Inspect
• Each software project team determines the
products to be inspected, unless specific
inspections are mandated by the project plan.
Consider inspecting the following products:
• Project requirements specifications
• Software rework/maintenance documents
• Updated technical documentation
• Changed source code
• Test plans
• User documentation (including online help)
• Formal Inspection Roles
The selection of the inspectors is critical to the
effectiveness of the process. It is important to
include appropriate personnel from all
impacted functional areas and to carefully
assign the predominant roles and
responsibilities (project, operations, external
testing, etc.). There should never be fewer than three inspection participants, nor more than five.
• Moderator
The moderator coordinates the inspection process and oversees any necessary follow-up tasks.
• Reader
• The reader is responsible for setting the pace of the
inspection.
• Specifically, the reader:
• Is not also the moderator or author
• Has a thorough familiarity with the material to be
inspected
• Presents the product objectively
• Paraphrases or reads the product material line by
line or paragraph by paragraph, pacing for clarity
and comprehension
• Recorder
• The recorder is responsible for listing defects
and summarizing the inspection results. He or
she must have ample time to note each defect
because this is the only information that the
author will have to find and correct the defect.
• Author
• The author is the originator of the product being inspected.
• Specifically, the author:
• Initiates the inspection process by informing the moderator
that the product is ready for inspection
• May also act as an inspector during the inspection meeting
• Assists the moderator in selecting the inspection team
• Meets all entry criteria outlined in the appropriate inspection
package cover sheet
• Provides an overview of the material prior to the inspection for
clarification, if requested
• Clarifies inspection material during the process, if requested
• Corrects the defects and presents finished rework to the
moderator for sign-off
• Forwards all materials required for the inspection to the
moderator as indicated in the entry criteria
• Inspectors
• The inspectors should be trained staff who can effectively
contribute to meeting objectives of the inspection. The
moderator, reader, and recorder may also be inspectors.

Specifically, the inspectors:
• Must prepare for the inspection by carefully reviewing
and understanding the material
• Maintain objectivity toward the product
• Record all preparation time
• Present potential defects and problems encountered
before and during the inspection meeting
Formal Inspection Defect Classification
• Each defect should be classified as follows:
• By origin. Indicates the development phase in
which the defect was generated (requirements,
design, program, etc.).
• By type. Indicates the cause of the defect. For
example, code defects could be errors in
procedural logic, or code that does not satisfy
requirements or deviates from standards.
• By class. Defects should be classified as missing,
wrong, or extra, as described previously.
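A minimal sketch of how a recorder might capture this three-way classification; the enumeration values mirror the list above, while the field names and the sample entry are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    """Development phase in which the defect was generated."""
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    PROGRAM = "program"

class DefectClass(Enum):
    """Missing, wrong, or extra, as described above."""
    MISSING = "missing"
    WRONG = "wrong"
    EXTRA = "extra"

@dataclass
class InspectionDefect:
    description: str
    origin: Origin
    defect_type: str          # the cause, e.g. "procedural logic error"
    defect_class: DefectClass

log = [
    InspectionDefect("loop never terminates on empty input",
                     Origin.PROGRAM, "procedural logic error",
                     DefectClass.WRONG),
]
print(log[0])
```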
• Inspection Procedures
The formal inspection process is segmented into the
following five subtasks, each of which is distinctive
and essential to the successful outcome of the overall
process:
1. Planning and organizing
2. Overview session (optional)
3. Individual preparation
4. Inspection meeting
5. Rework and follow-up
