OOSE Unit-5

GITAM School of Technology

19EAI334: OOSE Based Application Development
Unit 5 Syllabus
Implementation: Coding & Testing

• Mapping Models to Code
• Mapping Concepts: Model Transformation, Refactoring, Forward Engineering, Reverse Engineering, Transformation Principles.
• Mapping Activities : Optimizing the Object Design Model, Mapping Associations to
Collections, Mapping Contracts to Exceptions, Mapping Object Models to a Persistent
Storage Schema.
• Testing: An Overview of Testing. Testing Concepts: Faults, Erroneous States, and
Failures, Test Cases, Test Stubs and Drivers, Corrections. Testing Activities: Component
Inspection, Usability Testing, Unit Testing, Integration Testing, System Testing.
• Managing Testing: Planning Testing activities, Documenting Testing, Assigning
Responsibilities, Regression Testing, Automating Testing.
• Documenting Architecture: Architectural views: logical, deployment, security, data,
behavioural.
• If the design pattern selection and the specification of class interfaces were done
carefully, most design issues should now be resolved. We could implement a system
that realizes the use cases specified during requirements elicitation and system
design.
• However, as developers start putting together the individual subsystems developed in
this way, they are confronted with many integration problems. Different developers
have probably handled contract violations differently. Undocumented parameters
may have been added to the API to address a requirement change. Additional
attributes have possibly been added to the object model, but are not handled by the
persistent data management system, possibly because of a miscommunication.

• As the delivery pressure increases, addressing these problems results in additional improvised code changes and workarounds that eventually lead to the degradation of the system. The resulting code bears little resemblance to the original design and becomes difficult to understand.
• Developers try to improve modularity and performance
• Developers need to transform associations into references, because programming
languages do not support associations
• If the programming language does not support contracts, the developer needs to
write code for detecting and handling contract violations
• Developers need to revise the interface specification whenever the client comes up
with new requirements.

In each of these transformations, small errors usually creep in, resulting in bugs and
test failures. A disciplined set of transformation techniques can reduce the number of such errors.
An Overview of Mapping
• Mappings are transformations that aim at improving one aspect of the model
while preserving functionality.
• These transformations occur during numerous object design and implementation
activities.
• Activities:
• optimizing the class model
• mapping associations to collections
• mapping operation contracts to exceptions
• mapping the class model to a storage schema.
Mapping Concepts
We distinguish four types of transformations:
• Model transformations operate on object models.
An example is the conversion of a simple attribute (e.g., an address represented as a string) to a
class (e.g., a class with street address, zip code, city, state, and country attributes).
• Refactorings are transformations that operate on source code. They are similar to object model
transformations in that they improve a single aspect of the system without changing its
functionality.
• Forward engineering produces a source code template that corresponds to an object model.
• Reverse engineering produces a model that corresponds to source code. This transformation is
used when the design of the system has been lost and must be recovered from the source code.
Model Transformation
• A model transformation is applied to an
object model and results in another object
model.
• The purpose of object model transformation
is to simplify or optimize the original model,
bringing it into closer compliance with all
requirements in the specification.
• A transformation may add, remove, or
rename classes, operations, associations, or
attributes.
• A transformation can also add information to
the model or remove information from it.
Refactoring
• A refactoring is a transformation of the source code that improves its readability or
modifiability without changing the behavior of the system.
• Refactoring aims at improving the design of a working system by focusing on a
specific field or method of a class.
• To ensure that the refactoring does not change the behavior of the system, the
refactoring is done in small incremental steps that are interleaved with tests.
• For example, the object model transformation of Figure 10-2 corresponds to a
sequence of three refactorings.
• The first one, Pull Up Field, moves the email field from the subclasses to the superclass
User.
• The second one, Pull Up Constructor Body, moves the initialization code from the
subclasses to the superclass.
• The third and final one, Pull Up Method, moves the methods manipulating the email
field from the subclasses to the superclass. A minimal before/after sketch of the first step follows.
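A before/after sketch of Pull Up Field in Java. The LeagueOwner and Advertiser subclass names follow the ARENA example; the V1/V2 suffixes only distinguish the two versions within one file:

// Before Pull Up Field: every subclass duplicates the email field.
class UserV1 { }
class LeagueOwnerV1 extends UserV1 { String email; }
class AdvertiserV1 extends UserV1 { String email; }

// After Pull Up Field: the field is declared once, in the superclass.
class UserV2 { String email; }
class LeagueOwnerV2 extends UserV2 { }
class AdvertiserV2 extends UserV2 { }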
Forward Engineering
Forward engineering is applied to a
set of model elements and results in
a set of corresponding source code
statements, such as a class
declaration, a Java expression, or a
database schema.
The purpose of forward engineering
is to maintain a strong
correspondence between the object
design model and the code, and to
reduce the number of errors
introduced during implementation,
thereby decreasing implementation
effort.
Reverse Engineering
• Reverse engineering is applied to a set of source code elements and results in a set
of model elements.
• The purpose of this type of transformation is to recreate the model for an existing
system, either because the model was lost or never created, or because it became
out of sync with the source code.
• Reverse engineering is essentially an inverse transformation of forward
engineering. Reverse engineering creates a UML class for each class declaration
statement, adds an attribute for each field, and adds an operation for each method.
Transformation Principles
• A transformation aims at improving the design of the system with respect to some
criterion.
• To avoid introducing new errors, all transformations should follow these
principles:
• Each transformation must address a single criterion.
• Each transformation must be local.
• Each transformation must be applied in isolation to other changes.
• Each transformation must be followed by a validation step.
Mapping Activities
• Optimizing the Object Design Model
• Mapping Associations to Collections
• Mapping Contracts to Exceptions
• Mapping Object Models to a Persistent Storage Schema
Optimizing the Object Design Model

• We describe four simple but common optimizations:
• adding associations to optimize access paths,
• collapsing objects into attributes,
• delaying expensive computations, and
• caching the results of expensive computations.

When applying optimizations, developers must strike a balance between efficiency and clarity. Optimizations increase the efficiency of the system but also the complexity of the models, making it more difficult to understand the system.
Optimizing access paths

• Repeated association traversals. To identify inefficient access paths, you should identify operations that are invoked often and examine, with the help of a sequence diagram, the subset of these operations that requires multiple association traversals.
• “Many” associations. For associations with “many” multiplicity, you
should try to decrease the search time by reducing the “many” to “one.”
• Misplaced attributes. Another source of inefficient system
performance is excessive modeling.
• After folding several attributes, some classes may not be needed
anymore and can simply be removed from the model.
Collapsing objects: Turning objects into attributes

• After the object model is restructured and optimized a couple of times, some of its classes may have few attributes or behaviors left.
• Such classes, when associated only with one other class, can be collapsed into an attribute, thus reducing the overall complexity of the model.
Delaying expensive computations
Some objects are expensive to create. However, their creation can often be delayed until their actual content is needed, as in the sketch below.
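A minimal sketch of delaying an expensive creation with lazy initialization; the Image example and all names are illustrative:

class Image {
    private byte[] data;              // expensive to load
    private final String filename;

    Image(String filename) {
        this.filename = filename;     // cheap: no data loaded yet
    }

    byte[] getData() {
        if (data == null) {
            data = loadFromDisk();    // delayed until actually needed
        }
        return data;
    }

    private byte[] loadFromDisk() {
        // Placeholder for the expensive operation.
        return new byte[0];
    }
}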
Caching the result of expensive computations

• Some methods are called many times, but their results are based on values that do
not change or change only infrequently.
• Reducing the number of computations required by these methods can substantially improve overall response time.
• In such cases, the result of the computation should be cached as a private attribute, as in the sketch below.
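A minimal sketch of this caching pattern; the Polygon/getArea example and all names are illustrative, not from the text. The cached value is invalidated whenever the underlying data changes:

class Polygon {
    private java.util.List<double[]> vertices = new java.util.ArrayList<>();
    private Double cachedArea;             // null until first computed

    void addVertex(double x, double y) {
        vertices.add(new double[] { x, y });
        cachedArea = null;                 // invalidate cache on change
    }

    double getArea() {
        if (cachedArea == null) {
            cachedArea = computeArea();    // expensive; computed at most
        }                                  // once per modification
        return cachedArea;
    }

    private double computeArea() {
        // Shoelace formula over the vertex list.
        double sum = 0;
        int n = vertices.size();
        for (int i = 0; i < n; i++) {
            double[] a = vertices.get(i), b = vertices.get((i + 1) % n);
            sum += a[0] * b[1] - b[0] * a[1];
        }
        return Math.abs(sum) / 2;
    }
}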
Mapping Associations to Collections

• Associations are UML concepts that denote collections of bidirectional links between two or more objects.
• During object design, we realize associations in terms of references, taking into account the multiplicity of the associations and their direction:
 Unidirectional one-to-one associations.
 Bidirectional one-to-one associations.
 One-to-many associations.
 Many-to-many associations.
 Qualified associations.
 Association classes.
Unidirectional one-to-one associations.

• The simplest association is a unidirectional one-to-one association.
• For example, in ARENA, an Advertiser has a one-to-one association with an Account object that tracks all the charges accrued from displaying AdvertisementBanners. This association is unidirectional, as the Advertiser calls the operations of the Account object, but the Account never invokes operations of the Advertiser. In this case, we map this association to code using a reference from the Advertiser to the Account. That is, we add a field to Advertiser named account of type Account, as in the sketch below.
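A sketch of this mapping in Java; method bodies are illustrative:

class Account {
    void charge(double amount) { /* ... */ }
}

class Advertiser {
    private Account account;           // realizes the 1-to-1 association

    Advertiser() {
        this.account = new Account();  // created together with its owner
    }

    Account getAccount() {
        return account;
    }
}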
Bidirectional one-to-one associations.
• The direction of an association often
changes during the development of the
system. Unidirectional associations are
simple to realize.
• Bidirectional associations are more
complex and introduce mutual
dependencies among classes.
• Assume that we modify the Account
class so that the display name of the
Account is computed from the name of
the Advertiser. In this case, an Account
needs to access its corresponding
Advertiser object. Consequently, the
association between these two objects
must be bidirectional, as in the sketch below.
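A sketch of the bidirectional variant; the Advertiser constructor sets both references at once, so the two objects always stay consistent (the getDisplayName body is illustrative):

class Account {
    private Advertiser owner;              // back-reference to the Advertiser

    Account(Advertiser owner) {
        this.owner = owner;
    }

    String getDisplayName() {
        return owner.getName() + "'s account"; // needs access to Advertiser
    }
}

class Advertiser {
    private Account account;
    private String name;

    Advertiser(String name) {
        this.name = name;
        this.account = new Account(this);  // both references set together
    }

    String getName() {
        return name;
    }
}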
One-to-many associations

• One-to-many associations cannot be realized using a single reference or a pair of references. Instead, we realize the “many” part using a collection of references.
• For example, assume that an Advertiser can have several Accounts to track the expenses accrued by AdvertisementBanners for different products. In this case, the Advertiser object has a one-to-many association with the Account class, as sketched below.
Many-to-many associations

• In this case, both end classes have fields that are collections of references and operations to keep these collections consistent.
• For example, the Tournament class of ARENA has an ordered many-to-many association with the Player class, as sketched below.
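A sketch of the many-to-many mapping; each mutator updates both collections, so the two sides cannot get out of sync, and an ordered List is used because the association is ordered. The contains checks also stop the mutual calls from recursing forever:

import java.util.ArrayList;
import java.util.List;

class Tournament {
    private List<Player> players = new ArrayList<>();

    void addPlayer(Player p) {
        if (!players.contains(p)) {
            players.add(p);
            p.addTournament(this);     // keep the other side in sync
        }
    }

    List<Player> getPlayers() {
        return players;
    }
}

class Player {
    private List<Tournament> tournaments = new ArrayList<>();

    void addTournament(Tournament t) {
        if (!tournaments.contains(t)) {
            tournaments.add(t);
            t.addPlayer(this);
        }
    }
}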
Qualified associations.
• Qualified associations are used to reduce the multiplicity of one “many” side in a
one-to-many or a many-to-many association, as in the sketch below.
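A sketch of realizing a qualified association with a Map; the League/nickName example is illustrative. The qualifier becomes the map key, reducing the “many” side to at most one object per key:

import java.util.HashMap;
import java.util.Map;

class Player { /* ... */ }

class League {
    private Map<String, Player> players = new HashMap<>();

    void addPlayer(String nickName, Player p) {
        players.put(nickName, p);
    }

    Player getPlayer(String nickName) {
        return players.get(nickName);  // at most one player per nickname
    }
}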
Association classes.
• In UML, we use an association class to hold the attributes and operations of an
association. For example, we can represent the Statistics for a Player within a
Tournament as an association class, which holds statistics counters for each
Player/Tournament combination. A sketch of this mapping follows.
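A sketch of mapping the Statistics association class to an ordinary class that holds references to both ends of the association; the gamesPlayed counter and the getStatistics lookup are illustrative:

import java.util.HashMap;
import java.util.Map;

class Player { /* ... */ }

class Statistics {
    private final Tournament tournament;   // one reference per association end
    private final Player player;
    private int gamesPlayed;               // example statistics counter

    Statistics(Tournament t, Player p) {
        this.tournament = t;
        this.player = p;
    }

    void recordGame() {
        gamesPlayed++;
    }
}

class Tournament {
    // One Statistics object per participating Player.
    private Map<Player, Statistics> statistics = new HashMap<>();

    Statistics getStatistics(Player p) {
        return statistics.computeIfAbsent(p, q -> new Statistics(this, q));
    }
}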
Mapping Contracts to Exceptions
• A simple mapping would be to treat each operation in the contract individually
and to add code within the method body to check the preconditions,
postconditions, and invariants relevant to the operation:
• Checking preconditions. Preconditions should be checked at the beginning of the
method, before any processing is done.
• Checking postconditions. Postconditions should be checked at the end of the
method, after all the work has been accomplished and the state changes are
finalized.
• Checking invariants. When treating each operation contract individually,
invariants are checked at the same time as postconditions.
• Dealing with inheritance. The checking code for preconditions and
postconditions should be encapsulated into separate methods that can be called
from subclasses. A sketch of precondition and postcondition checks follows.
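A sketch of these checks for an acceptPlayer operation; the operation, the exception name, and the contract itself are illustrative. The precondition is tested before any processing and signaled with an exception; the postcondition is verified after the state change:

class KnownPlayerException extends Exception {
    KnownPlayerException(String msg) { super(msg); }
}

class Player { /* ... */ }

class Tournament {
    private java.util.List<Player> players = new java.util.ArrayList<>();

    void acceptPlayer(Player p) throws KnownPlayerException {
        // Precondition: the player is not already registered.
        if (players.contains(p)) {
            throw new KnownPlayerException("player already accepted");
        }
        int sizeBefore = players.size();
        players.add(p);
        // Postcondition: exactly one player was added
        // (checked with assert; enable with the -ea JVM flag).
        assert players.size() == sizeBefore + 1;
    }
}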
Mapping Object Models to a Persistent Storage Schema
• Persistent objects are usually treated like all other objects. However, object-oriented
programming languages do not usually provide an efficient way to store persistent
objects. In this case, we need to map persistent objects to a data structure that can
be stored by the persistent data management system decided during system design,
in most cases, either a database or a set of files.
• For object-oriented databases, no transformations need be done, since there is a
one-to-one mapping between classes in the object model and classes in the object-
oriented database. However, for relational databases and flat files, we need to map
the object model to a storage schema and provide an infrastructure for converting
from and to persistent storage.
• Mapping an object model to a relational database using Java and database schemas.
• A schema is a description of the data, that is, a meta-model for data. In UML, class
diagrams are used to describe the set of valid instances that can be created by the
source code.
• Relational databases store both the schema and
the data.
• A table is structured in columns, each of which
represents an attribute.
• A primary key of a table is a set of attributes
whose values uniquely identify the data records in
a table.
• Sets of attributes that could be used as a primary key are called candidate keys. The candidate key that is actually used in the application to identify data records is the primary key.
• A foreign key is an attribute (or a set of
attributes) that references the primary key of
another table.
Mapping classes and attributes
• When mapping the persistent objects to relational schema, we focus first on
the classes and their attributes
• When mapping attributes, we need to select a data type for the database
column. For primitive types, the correspondence between the programming
language type and the database type is usually trivial (e.g., the Java Date type
maps to the datetime type in SQL).
• Next, we focus on the primary key. There are two options when selecting a
primary key for the table.
• The first option is to identify a set of class attributes that uniquely
identifies the object.
• The second option is to add a unique identifier attribute that we generate.
• For example, in Figure 10-16, we use the login name of
the user as a primary key. Although this approach is
intuitive, it has several drawbacks. If the value of the
login attribute changes, we need to update all tables in
which the user login name occurs as a foreign key. Also,
selecting attributes from the application domain can
make it difficult to change the database schema when the
application domain changes. For example, in the future,
we could use a single table to store users from different
Arenas. As login names are unique only within a single
Arena, we would need to add the name of the Arena in
the primary key.
• The second option is to use an arbitrary unique
identifier (id) attribute as a primary key. We generate the
id attribute for each object and can guarantee that it is
unique and will not change. Some database management
systems provide features for automatically generating
ids.
• This results in a more robust schema, and primary and foreign keys that consist of a single column. A DDL sketch of this option follows.
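A sketch of the resulting schema as DDL issued through JDBC; the AUTO_INCREMENT syntax is MySQL-flavored, and the connection URL, table, and column names are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUserTable {
    public static void main(String[] args) throws Exception {
        try (Connection c =
                 DriverManager.getConnection("jdbc:mysql://localhost/arena");
             Statement s = c.createStatement()) {
            s.executeUpdate(
                "CREATE TABLE user ("
                + " id BIGINT NOT NULL AUTO_INCREMENT," // generated id
                + " firstName VARCHAR(255),"
                + " login VARCHAR(255),"                // no longer the key
                + " email VARCHAR(255),"
                + " PRIMARY KEY (id))");
        }
    }
}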
Mapping associations
• The mapping of associations to a database schema depends on the
multiplicity of the association. One-to-one and one-to-many
associations are implemented as a so-called buried association.
• Buried associations. Associations with multiplicity one can be
implemented using a foreign key. For one-to-many associations, we
add a foreign key to the table representing the class on the “many”
end.
Separate table
• Many-to-many associations are implemented using a separate two-
column table with foreign keys for both classes of the association
• Each row in the association table corresponds to a link between two instances, as in the sketches below.
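Sketches of both schema patterns as DDL strings, assuming the generated id primary keys from the previous sketch; all names are illustrative:

public class AssociationSchemas {
    // Buried association: Advertiser (1) -- (*) Account. The foreign key
    // is added to the table on the "many" end.
    static final String ACCOUNT_TABLE =
        "CREATE TABLE account ("
        + " id BIGINT NOT NULL PRIMARY KEY,"
        + " advertiser BIGINT,"          // foreign key, buried association
        + " FOREIGN KEY (advertiser) REFERENCES advertiser(id))";

    // Separate table: Tournament (*) -- (*) Player. Each row is one link.
    static final String TOURNAMENT_PLAYER_TABLE =
        "CREATE TABLE tournament_player ("
        + " tournament BIGINT NOT NULL,"
        + " player BIGINT NOT NULL,"
        + " PRIMARY KEY (tournament, player),"
        + " FOREIGN KEY (tournament) REFERENCES tournament(id),"
        + " FOREIGN KEY (player) REFERENCES player(id))";
}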
Mapping inheritance relationships
• Relational databases do not directly support
inheritance, but there are two main options for
mapping an inheritance relationship to a
database schema.

• Vertical mapping: Given an inheritance relationship, we map the superclass and subclasses to individual tables.
• Horizontal mapping: Another way to realize inheritance is to push the attributes of the superclass down into the subclasses, effectively removing the need for a superclass table. DDL sketches of both options follow.
TESTING
An Overview of Testing
• Testing is the process of finding differences between the expected behavior
specified by system models and the observed behavior of the implemented system.
• Testing is the systematic attempt to find faults in a planned way in the implemented software.
• Reliability is a measure of success with which the observed behavior of a system
conforms to the specification of its behavior.
• Software reliability is the probability that a software system will not cause
system failure for a specified time under specified conditions.
• Failure is any deviation of the observed behavior from the specified behavior.
• An erroneous state (also called an error) means the system is in a state such that
further processing by the system will lead to a failure, which then causes the
system to deviate from its intended behavior.
• A fault, also called “defect” or “bug,” is a design or coding mistake that may cause abnormal component behavior.
There are many techniques for increasing the reliability of a software system:
• Fault avoidance techniques try to detect faults statically, that is, without relying on the execution of any of the system models.
• Fault detection techniques, such as debugging and testing, are uncontrolled and controlled experiments, respectively, used during the development process to identify erroneous states and find the underlying faults before releasing the system.
• Fault tolerance: techniques assume that a system can be released with faults and that
system failures can be dealt with by recovering from them at runtime.
• A review is the manual inspection of parts or all aspects of the system without actually
executing the system. There are two types of reviews:
• In a code walkthrough, the developer informally presents the API (Application Programmer Interface), the code, and associated documentation of the component to the review team.
• An inspection is similar to a walkthrough, but the presentation of the component is formal.
• Debugging assumes that faults can be found by starting from an unplanned failure.
• Testing is a fault detection technique that tries to create failures or erroneous states in a
planned way. This allows the developer to detect failures in the system before it is released to
the customer.
Figure 11-1 depicts an overview of testing activities:
• Test planning allocates resources and schedules the testing. This activity should occur early
in the development phase so that sufficient time and skill are dedicated to testing. For example,
developers can design test cases as soon as the models they validate become stable.
• Usability testing tries to find faults in the user interface design of the system. Often, systems
fail to accomplish their intended purpose simply because their users are confused by the user
interface and unwittingly introduce erroneous data.
• Unit testing tries to find faults in participating objects and/or subsystems with respect to the
use cases from the use case model.
• Integration testing is the activity of finding faults by testing individual components in
combination. Structural testing is the culmination of integration testing involving all
components of the system. Integration tests and structural tests exploit knowledge from the
SDD (System Design Document) using an integration strategy described in the Test Plan (TP).
• System testing tests all the components
together, seen as a single system to identify
faults with respect to the scenarios from the
problem statement and the requirements and
design goals identified in the analysis and
system design, respectively:
• Functional testing tests the requirements
from the RAD and the user manual.
• Performance testing checks the
nonfunctional requirements and additional
design goals from the SDD. Functional and
performance testing are done by developers.
• Acceptance testing and installation testing
check the system against the project agreement and are done by the client, if necessary, with help from the developers.
Testing Concepts
• A test component is a part of the system that can be isolated for testing. A component can be
an object, a group of objects, or one or more subsystems.
• A fault, also called bug or defect, is a design or coding mistake that may cause abnormal
component behavior.
• An erroneous state is a manifestation of a fault during the execution of the system. An
erroneous state is caused by one or more faults and can lead to a failure.
• A failure is a deviation between the specification and the actual behavior. A failure is
triggered by one or more erroneous states. Not all erroneous states trigger a failure.
• A test case is a set of inputs and expected results that exercises a test component with the
purpose of causing failures and detecting faults.
• A test stub is a partial implementation of components on which the tested component
depends.
• A test driver is a partial implementation of a component that depends on the test component.
Test stubs and drivers enable components to be isolated from the rest of the system for testing.
• A correction is a change to a component. The purpose of a correction is to repair a fault. Note
that a correction can introduce new faults.
Testing Activities
The technical activities of testing include:
• Component inspection, which finds faults in an individual component through the manual inspection of its source code.
• Usability testing, which finds differences between what the system does and the users’ expectation of what it should do.
• Unit testing, which finds faults by isolating an individual component using test stubs and drivers and by exercising the component using test cases.
• Integration testing, which finds faults by integrating several components together.
• System testing, which focuses on the complete system, its functional and nonfunctional requirements, and its target environment.
Component Inspection
• Inspections find faults in a component by reviewing its source code in a formal
meeting.
• Fagan’s inspection method consists of five steps:
• Overview. The author of the component briefly presents the purpose and
scope of the component and the goals of the inspection.
• Preparation. The reviewers become familiar with the implementation of the
component.
• Inspection meeting. A reader paraphrases the source code of the component,
and the inspection team raises issues with the component. A moderator keeps
the meeting on track.
• Rework. The author revises the component.
• Follow-up. The moderator checks the quality of the rework and may
determine whether the component needs to be reinspected.
Usability Testing
• Usability testing tests the user’s understanding of the system.
• There are three types of usability tests
• Scenario test. During this test, one or more users are presented with a visionary
scenario of the system.
• Prototype test. During this type of test, the end users are presented with a piece of
software that implements key aspects of the system.
• A vertical prototype completely implements a use case through the system.
• A horizontal prototype implements a single layer in the system.
• Product test. This test is similar to the prototype test except that a functional
version of the system is used in place of the prototype.
Unit Testing
• Unit testing focuses on the building blocks of the software system, that is, objects
and subsystems.
• Motivations behind focusing on these building blocks.
• First, unit testing reduces the complexity of overall test activities, allowing us
to focus on smaller units of the system.
• Second, unit testing makes it easier to pinpoint and correct faults, given that
few components are involved in the test.
• Third, unit testing allows parallelism in the testing activities; that is, each
component can be tested independently of the others.
Unit Testing
• Many unit testing techniques have been devised. The most important ones are equivalence testing, boundary testing, path testing, and state-based testing.
Equivalence testing
• This blackbox testing technique minimizes the number of test cases. The possible inputs
are partitioned into equivalence classes, and a test case is selected for each class.
• Equivalence testing consists of two steps:
• identification of the equivalence classes and
• selection of the test inputs.
The following criteria are used in determining the equivalence classes.
• Coverage. Every possible input belongs to one of the equivalence classes.
• Disjointedness. No input belongs to more than one equivalence class.
• Representation. If the execution demonstrates an erroneous state when a particular member of an equivalence class is used as input, then the same erroneous state can be detected by using any other member of the class as input. A JUnit-style sketch of these two steps follows.
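A sketch of equivalence testing for a getNumDaysInMonth(month, year) operation; the operation, its implementation, and the chosen representatives are illustrative. Each test exercises one representative of one equivalence class:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MonthEquivalenceTest {
    // Minimal implementation under test (illustrative).
    static int getNumDaysInMonth(int month, int year) {
        switch (month) {
            case 4: case 6: case 9: case 11: return 30;
            case 2: return isLeapYear(year) ? 29 : 28;
            default: return 31;
        }
    }
    static boolean isLeapYear(int y) {
        return y % 4 == 0 && (y % 100 != 0 || y % 400 == 0);
    }

    // One representative input per equivalence class.
    @Test public void monthsWith31Days() { assertEquals(31, getNumDaysInMonth(7, 1901)); }
    @Test public void monthsWith30Days() { assertEquals(30, getNumDaysInMonth(6, 1901)); }
    @Test public void february()         { assertEquals(28, getNumDaysInMonth(2, 1901)); }
}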
Boundary testing
• This special case of equivalence testing focuses on the conditions at the boundary of the
equivalence classes. Rather than selecting any element in the equivalence class,
boundary testing requires that the elements be selected from the “edges” of the
equivalence class.
• The assumption behind boundary testing is that developers often overlook special cases
at the boundary of the equivalence classes, as in the sketch below.
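Continuing the sketch above with boundary inputs: the leap-year rules sit exactly at the edges of the February equivalence class, where faults are most likely to hide.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MonthBoundaryTest {
    @Test public void leapFebruary() {
        // Year divisible by 400: leap year.
        assertEquals(29, MonthEquivalenceTest.getNumDaysInMonth(2, 2000));
    }
    @Test public void nonLeapCentury() {
        // Year divisible by 100 but not by 400: not a leap year.
        assertEquals(28, MonthEquivalenceTest.getNumDaysInMonth(2, 1900));
    }
}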
• A disadvantage of equivalence and boundary testing is that these techniques do not
explore combinations of test input data. In many cases, a program fails because a
combination of certain values causes the erroneous state.
• Cause-effect testing addresses this problem by establishing logical relationships between
input and outputs or inputs and transformations. The inputs are called causes, the outputs
or transformations are effects. The technique is based on the premise that the
input/output behavior can be transformed into a Boolean function.
Path testing
• This whitebox testing technique identifies faults in the implementation of the
component. The assumption behind path testing is that, by exercising all possible paths
through the code at least once, most faults will trigger failures.
• The identification of paths requires knowledge of the source code and data structures.
The starting point for path testing is the flow graph.
• A flow graph consists of nodes representing executable blocks and edges representing
flow of control. A flow graph is constructed from the code of a component by mapping
decision statements (e.g., if statements, while loops) to nodes. Statements between each
decision (e.g., then block, else block) are mapped to other nodes. Associations between
each node represent the precedence relationships.
• Complete path testing consists of designing test cases such that each edge in the activity
diagram is traversed at least once. This is done by examining the condition associated
with each branch point and selecting an input for the true branch and another input for
the false branch, as in the sketch below.
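A small sketch; the method below has one decision node and therefore two paths through its flow graph, so one input per branch traverses every edge at least once (all names are illustrative):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PathTest {
    static int maxOf(int a, int b) {
        if (a > b) {        // decision node in the flow graph
            return a;       // path 1: true branch
        } else {
            return b;       // path 2: false branch
        }
    }

    @Test public void trueBranch()  { assertEquals(5, maxOf(5, 3)); }
    @Test public void falseBranch() { assertEquals(7, maxOf(4, 7)); }
}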
State-based testing
• This testing technique was recently developed for object-oriented systems [Turner &
Robson, 1993]. Most testing techniques focus on selecting a number of test inputs for a
given state of the system, exercising a component or a system, and comparing the
observed outputs with an oracle.
• State-based testing, however, compares the resulting state of the system with the
expected state.
• In the context of a class, state-based testing consists of deriving test cases from the UML
state machine diagram for the class. For each state, a representative set of stimuli is
derived for each transition (similar to equivalence testing). The attributes of the class are
then instrumented and tested after each stimulus has been applied to ensure that the class
has reached the specified state.
• Currently, state-based testing presents several difficulties. Because the state of a class is
encapsulated, test cases must include sequences for putting classes in the desired state
before given transitions can be tested.
Integration Testing
• Integration testing detects faults that have not been detected during unit testing by
focusing on small groups of components.
• Two or more components are integrated and tested, and when no new faults are
revealed, additional components are added to the group.
• If two components are tested together, we call this a double test. Testing three
components together is a triple test, and a test with four components is called a
quadruple test.
• There are two types of Integration testing
• horizontal integration testing strategies, in which components are integrated
according to layers.
• vertical integration testing strategies, in which components are integrated
according to functions.
• Horizontal integration testing strategies
• Several approaches have been devised to implement a horizontal integration testing
strategy: big bang testing, bottom-up testing, top-down testing, and sandwich
testing.
• Each of these strategies was originally devised by assuming that the system
decomposition is hierarchical and that each of the components belong to hierarchical
layers ordered with respect to the “Call” association.
• The big bang testing strategy assumes that all components are first tested
individually and then tested together as a single system.
• The bottom-up testing strategy first tests each component of the bottom layer
individually, and then integrates them with components of the next layer up.
• The top-down testing strategy unit tests the components of the top layer first, and
then integrates the components of the next layer down.
• The sandwich testing strategy combines the top-down and bottom-up strategies,
attempting to make use of the best of both.
• Vertical integration testing strategies
• Vertical integration testing strategies focus on early integration. For a given use case,
the needed parts of each component, such as the user interface, business logic,
middleware, and storage, are identified and developed in parallel and integration
tested.
• A system build with a vertical integration strategy produces release candidates.
• The drawback of vertical integration testing is that the system design is evolved
incrementally, often resulting in reopening major system design decisions.
System Testing
• System testing ensures that the complete system complies with the functional and
nonfunctional requirements.
• During system testing, several activities are performed:
• Functional testing. Test of functional requirements (from RAD)
• Performance testing. Test of nonfunctional requirements (from SDD)
• Pilot testing. Tests of common functionality among a selected group of end
users in the target environment
• Acceptance testing. Usability, functional, and performance tests performed by
the customer in the development environment against acceptance criteria
(from Project Agreement)
• Installation testing. Usability, functional, and performance tests performed by
the customer in the target environment. If the system is installed at only a small selected set of customers, it is called a beta test.
Functional testing
• Functional testing, also called requirements testing, finds differences between the
functional requirements and the system.
• Functional testing is a blackbox technique: test cases are derived from the use case
model. In systems with complex functional requirements, it is usually not possible
to test all use cases for all valid and invalid inputs.
• The goal of the tester is to select those tests that are relevant to the user and have a
high probability of uncovering a failure.

Performance testing
• Performance testing finds differences between the design goals selected during
system design and the system. Because the design goals are derived from the
nonfunctional requirements, the test cases can be derived from the SDD or from
the RAD.
• The following tests are performed during performance testing:
• Stress testing checks if the system can respond to many simultaneous requests. For
example, if an information system for car dealers is required to interface with 6000
dealers, the stress test evaluates how the system performs with more than 6000
simultaneous users.
• Volume testing attempts to find faults associated with large amounts of data, such as
static limits imposed by the data structure, or high-complexity algorithms, or high disk
fragmentation.
• Security testing attempts to find security faults in the system. There are few systematic
methods for finding security faults. Usually this test is accomplished by “tiger teams”
who attempt to break into the system, using their experience and knowledge of typical
security flaws.
• Timing testing attempts to find behaviors that violate timing constraints described by
the nonfunctional requirements.
• Recovery testing evaluates the ability of the system to recover from erroneous states,
such as the unavailability of resources, a hardware failure, or a network failure.
• After all the functional and performance tests have been performed, and no failures have
been detected during these tests, the system is said to be validated.
Pilot testing
• During the pilot test, also called the field test, the system is installed and used by a
selected set of users. Users exercise the system as if it had been permanently
installed. No explicit guidelines or test scenarios are given to the users.
• Pilot tests are useful when a system is built without a specific set of requirements
or without a specific customer in mind. In this case, a group of people is invited to
use the system for a limited time and to give their feedback to the developers.
• An alpha test is a pilot test with users exercising the system in the development
environment.
• In a beta test, the pilot test is performed by a limited number of end users in the
target environment
Acceptance testing
• There are three ways the client evaluates a system during acceptance testing.
• In a benchmark test, the client prepares a set of test cases that represent typical
conditions under which the system should operate. Benchmark tests can be performed
with actual users or by a special test team exercising the system functions, but it is
important that the testers be familiar with the functional and nonfunctional
requirements so they can evaluate the system.
• In competitor testing, the new system is tested against an existing system or
competitor product.
• In shadow testing, a form of comparison testing, the new and the legacy systems are
run in parallel, and their outputs are compared.
• After acceptance testing, the client reports to the project manager which
requirements are not satisfied. Acceptance testing also gives the opportunity for a
dialog between the developers and client about conditions that have changed and
which requirements must be added, modified, or deleted because of the changes.
Installation testing
• After the system is accepted, it is installed in the target environment.
• A good system testing plan allows the easy reconfiguration of the system from the
development environment to the target environment. The desired outcome of the
installation test is that the installed system correctly addresses all requirements.
• In most cases, the installation test repeats the test cases executed during functional
and performance testing in the target environment. Some requirements cannot be
executed in the development environment because they require target-specific
resources. To test these requirements, additional test cases have to be designed and
performed as part of the installation test.
• Once the customer is satisfied with the results of the installation test, system
testing is complete, and the system is formally delivered and ready for operation.
Managing Testing
Many testing activities occur near the end of the project, when resources are running
low and delivery pressure increases. Often, trade-offs must be made between the faults to be
repaired before delivery and those that can be repaired in a subsequent revision of the
system. In the end, however, developers should detect and repair a sufficient number
of faults such that the system meets functional and nonfunctional requirements to an
extent acceptable to the client.

Planning Testing
• Developers can reduce the cost of testing and the elapsed time necessary for its
completion through careful planning.
• Two key elements are to start the selection of test cases early and to parallelize tests.
• Developers responsible for testing can design test cases as soon as the models they
validate become stable.
• The second key element in shortening testing time is to parallelize testing activities.
Documenting Testing
• Testing activities are documented in four types of documents,
• The Test Plan focuses on the managerial aspects of testing. It documents the
scope, approach, resources, and schedule of testing activities. The
requirements and the components to be tested are identified in this document.
• Each test is documented by a Test Case Specification. This document
contains the inputs, drivers, stubs, and expected outputs of the tests, as well as
the tasks to be performed.
• Each execution of each test is documented by a Test Incident Report. The
actual results of the tests and differences from the expected output are
recorded.
• The Test Report Summary document lists all the failures discovered during
the tests that need to be investigated. From the Test Report Summary, the
developers analyze and prioritize each failure and plan for changes in the
system and in the models. These changes in turn can trigger new test cases and
new test executions.
The Test Plan (TP) and the Test Case Specifications (TCS) are written early in the process, as soon as the
test planning and each test case are completed. These documents are under configuration management
and updated as the system models change.

The Test Incident Report lists the actual test results and the failures that were experienced. The
description of the results must include which features were demonstrated and whether the features have
been met. If a failure has been experienced, the test incident report should contain sufficient information
to allow the failure to be reproduced.
Failures from all Test Incident Reports are collected and listed in the Test Report Summary and then
further analyzed and prioritized by the developers.
Assigning Responsibilities
• Testing requires developers to find faults in components of the system. This is best
done when the testing is performed by a developer who was not involved in the
development of the component under test, one who is less reticent to break the
component being tested and who is more likely to find ambiguities in the
component specification.
• For stringent quality requirements, a separate team dedicated to quality control is
solely responsible for testing. The testing team is provided with the system
models, the source code, and the system for developing and executing test cases.
Test Incident Reports and Test Report Summaries are then sent back to the
subsystem teams for analysis and possible revision of the system.
• The revised system is then retested by the testing team, not only to check if the
original failures have been addressed, but also to ensure that no new faults have
been inserted in the system.
Regression Testing
• Changes to one component can exercise different assumptions about the unchanged components, leading to erroneous states. Integration tests that are rerun on the system to expose such failures are called regression tests.
• The most robust and straightforward technique for regression testing is to accumulate
all integration tests and rerun them whenever new components are integrated into the
system. This requires developers to keep all tests up-to-date, to evolve them as the
subsystem interfaces change, and to add new integration tests as new services or new
subsystems are added.
• As regression testing can become time consuming, different techniques have been
developed for selecting specific regression tests.
• Such techniques include:
 Retest dependent components.
 Retest risky use cases.
 Retest frequent use cases.
Automating Testing

 Manual testing requires a tester to feed predefined inputs into the system using the user interface, a command line console, or a debugger.
 The tester then compares the outputs generated by the system with the oracle, that is, the expected outputs.
 Repeatability of test execution can be achieved with automation: automated tests can be rerun identically and unattended, for example after every change during regression testing. A minimal JUnit-based sketch follows.
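A minimal sketch of automated test execution using JUnit's programmatic runner, reusing the test classes from the unit-testing sketches above; in practice a build tool typically plays this role:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {
    public static void main(String[] args) {
        // Rerun the whole suite unattended; every run feeds the same
        // inputs and compares the outputs against the same oracle.
        Result result = JUnitCore.runClasses(MonthEquivalenceTest.class,
                                             MonthBoundaryTest.class,
                                             PathTest.class);
        for (Failure f : result.getFailures()) {
            System.out.println(f);
        }
        System.out.println(result.wasSuccessful()
                ? "all tests passed" : "failures detected");
    }
}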
Documenting Architecture: Architectural views
• Logical view: A high-level representation of a system's functionality and how its components
interact. It's typically represented using UML diagrams such as class diagrams, sequence
diagrams, and activity diagrams.
• Deployment view: Shows the distribution of processing across a set of nodes in the system,
including the physical distribution of processes and threads. This view focuses on aspects of the
system that are important after the system has been tested and is ready to go into live operation.
• Security view: Describes the structures that protect the data, workloads, containers, virtual machines, and APIs, for example within a cloud environment.
• Data architecture view: Addresses the concerns of database designers and database
administrators, and system engineers responsible for developing and integrating the various
database components of the system. Modern data architecture views data as a shared asset and
does not allow departmental data silos.
• Behavioral architecture view: An approach in architectural design and spatial planning that
takes into consideration how humans interact with their physical environment from a behavioral
perspective. A behavioral architecture model is an arrangement of functions and their sub-
functions as well as interfaces (inputs and outputs).
