OOSE Unit-5
In each of these transformations, small errors usually creep in, resulting in bugs and
test failures. A disciplined set of techniques and transformations can reduce the number
of such errors.
An Overview of Mapping
• Mappings are transformations that aim at improving one aspect of the model
while preserving functionality.
• These transformations occur during numerous object design and implementation
activities.
• Activities:
• optimizing the class model
• mapping associations to collections
• mapping operation contracts to exceptions
• mapping the class model to a storage schema.
Mapping Concepts
We distinguish four types of transformations:
• Model transformations operate on object models.
An example is the conversion of a simple attribute (e.g., an address represented as a string) to a
class (e.g., a class with street address, zip code, city, state, and country attributes).
• Refactorings are transformations that operate on source code. They are similar to object model
transformations in that they improve a single aspect of the system without changing its
functionality.
• Forward engineering produces a source code template that corresponds to an object model.
• Reverse engineering produces a model that corresponds to source code. This transformation is
used when the design of the system has been lost and must be recovered from the source code.
Model Transformation
• A model transformation is applied to an
object model and results in another object
model.
• The purpose of object model transformation
is to simplify or optimize the original model,
bringing it into closer compliance with all
requirements in the specification.
• A transformation may add, remove, or
rename classes, operations, associations, or
attributes.
• A transformation can also add information to
the model or remove information from it.
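For illustration, a minimal Java sketch of the simple-attribute-to-class example mentioned earlier (an address string promoted to an Address class); the class and field names are assumptions for illustration only:

// Before the transformation: the address is a simple string attribute.
class CustomerBefore {
    private String address;
}

// After the transformation: the attribute has been promoted to a class of
// its own, so street address, zip code, city, state, and country can be
// modeled separately.
class Address {
    private String streetAddress;
    private String zipCode;
    private String city;
    private String state;
    private String country;
}

class CustomerAfter {
    private Address address;
}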
Refactoring
• A refactoring is a transformation of the source code that improves its readability or
modifiability without changing the behavior of the system.
• Refactoring aims at improving the design of a working system by focusing on a
specific field or method of a class.
• To ensure that the refactoring does not change the behavior of the system, the
refactoring is done in small incremental steps that are interleaved with tests.
• For example, the object model transformation of Figure 10-2 corresponds to a
sequence of three refactorings.
• The first one, Pull Up Field, moves the email field from the subclasses to the superclass
User.
• The second one, Pull Up Constructor Body, moves the initialization code from the
subclasses to the superclass.
• The third and final one, Pull Up Method, moves the methods manipulating the email
field from the subclasses to the superclass.
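A minimal Java sketch of these three refactorings; the subclass names are illustrative and not taken from Figure 10-2:

// Before the refactorings: each subclass duplicates the email field, its
// initialization, and its accessor.
class PlayerBefore {
    private String email;

    PlayerBefore(String email) {
        this.email = email;
    }

    String getEmail() {
        return email;
    }
}

// After Pull Up Field, Pull Up Constructor Body, and Pull Up Method: the
// field, the initialization code, and the method live in the superclass User.
abstract class User {
    private String email;

    protected User(String email) {
        this.email = email;
    }

    String getEmail() {
        return email;
    }
}

class Player extends User {
    Player(String email) {
        super(email);
    }
}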
Forward Engineering
• Forward engineering is applied to a set of model elements and results in a set of
corresponding source code statements, such as a class declaration, a Java expression,
or a database schema.
• The purpose of forward engineering is to maintain a strong correspondence between the
object design model and the code, and to reduce the number of errors introduced during
implementation, thereby decreasing implementation effort.
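For example, a design class with an email attribute and a notifyByEmail operation could be forward engineered into a Java skeleton such as the one below; the class and member names are assumptions chosen for illustration:

// A class Member with an attribute email:String and an operation
// notifyByEmail(msg:String) in the object design model, forward engineered
// into a Java class skeleton.
public class Member {
    private String email;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public void notifyByEmail(String msg) {
        // method body to be filled in by the developer
    }
}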
Reverse Engineering
• Reverse engineering is applied to a set of source code elements and results in a set
of model elements.
• The purpose of this type of transformation is to recreate the model for an existing
system, either because the model was lost or never created, or because it became
out of sync with the source code.
• Reverse engineering is essentially an inverse transformation of forward
engineering. Reverse engineering creates a UML class for each class declaration
statement, adds an attribute for each field, and adds an operation for each method.
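The toy sketch below illustrates this field-to-attribute and method-to-operation mapping; it is only an illustration under simplifying assumptions, since real reverse engineering tools parse the source code itself rather than use reflection on compiled classes:

import java.lang.reflect.Field;
import java.lang.reflect.Method;

// For a class whose name is passed on the command line, report one
// "attribute" per declared field and one "operation" per declared method.
public class ReverseEngineerSketch {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c = Class.forName(args[0]);
        System.out.println("class " + c.getSimpleName());
        for (Field f : c.getDeclaredFields()) {
            System.out.println("  attribute: " + f.getName() + " : " + f.getType().getSimpleName());
        }
        for (Method m : c.getDeclaredMethods()) {
            System.out.println("  operation: " + m.getName() + "()");
        }
    }
}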
Transformation Principles
• A transformation aims at improving the design of the system with respect to some
criterion.
• To avoid introducing new errors, all transformations should follow these
principles:
• Each transformation must address a single criterion.
• Each transformation must be local.
• Each transformation must be applied in isolation from other changes.
• Each transformation must be followed by a validation step.
Mapping Activities
• Optimizing the Object Design Model
• Mapping Associations to Collections
• Mapping Contracts to Exceptions
• Mapping Object Models to a Persistent Storage Schema
Optimizing the Object Design Model
• Some methods are called many times, but their results are based on values that do
not change or change only infrequently.
• Reducing the number of computations required by these methods can substantially
improve overall response time.
• In such cases, the result of the computation should be cached as a private attribute.
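A minimal Java sketch of this caching idiom; the Statistics class and its average computation are assumptions chosen for illustration:

import java.util.ArrayList;
import java.util.List;

// The result of an expensive, rarely changing computation is cached in a
// private attribute and recomputed only when the underlying data changes.
public class Statistics {
    private final List<Double> values = new ArrayList<>();
    private Double cachedAverage;            // private attribute holding the cached result

    public void add(double value) {
        values.add(value);
        cachedAverage = null;                // invalidate the cache when the data changes
    }

    public double getAverage() {
        if (cachedAverage == null) {         // recompute only when necessary
            double sum = 0.0;
            for (double v : values) {
                sum += v;
            }
            cachedAverage = sum / values.size();
        }
        return cachedAverage;                // subsequent calls reuse the cached value
    }
}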
Mapping Associations to Collections
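As a minimal sketch, a one-to-many association between two classes is commonly realized in Java as a collection-valued field on the "one" side and a plain reference on the "many" side; the class names below are assumptions chosen for illustration:

import java.util.ArrayList;
import java.util.List;

// One-to-many association between League and Tournament mapped to a
// collection-valued field.
class League {
    private final List<Tournament> tournaments = new ArrayList<>();

    public void addTournament(Tournament t) {
        tournaments.add(t);
        t.setLeague(this);                   // keep both ends of the association consistent
    }

    public List<Tournament> getTournaments() {
        return tournaments;
    }
}

class Tournament {
    private League league;                   // the "one" end seen from the "many" side

    public void setLeague(League league) {
        this.league = league;
    }

    public League getLeague() {
        return league;
    }
}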
Performance testing
• Performance testing finds differences between the system and the design goals selected
during system design. Because the design goals are derived from the
nonfunctional requirements, the test cases can be derived from the SDD or from
the RAD.
• The following tests are performed during performance testing:
• Stress testing checks if the system can respond to many simultaneous requests. For
example, if an information system for car dealers is required to interface with 6000
dealers, the stress test evaluates how the system performs with more than 6000
simultaneous users (a minimal stress-test sketch appears after this list).
• Volume testing attempts to find faults associated with large amounts of data, such as
static limits imposed by the data structure, or high-complexity algorithms, or high disk
fragmentation.
• Security testing attempts to find security faults in the system. There are few systematic
methods for finding security faults. Usually this test is accomplished by “tiger teams”
who attempt to break into the system, using their experience and knowledge of typical
security flaws.
• Timing testing attempts to find behaviors that violate timing constraints described by
the nonfunctional requirements.
• Recovery testing evaluates the ability of the system to recover from erroneous states,
such as the unavailability of resources, a hardware failure, or a network failure.
• After all the functional and performance tests have been performed, and no failures have
been detected during these tests, the system is said to be validated.
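As referenced above, a minimal sketch of a stress-test driver for the car-dealer example; sendRequest() is a hypothetical placeholder standing in for whatever client call the system under test actually exposes:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Fire many simulated dealer requests concurrently and wait for them to
// complete within a time budget.
public class StressTestSketch {

    static void sendRequest(int dealerId) {
        // placeholder for a real request to the system under test
    }

    public static void main(String[] args) throws InterruptedException {
        final int simultaneousUsers = 6000;
        ExecutorService pool = Executors.newFixedThreadPool(200);
        CountDownLatch done = new CountDownLatch(simultaneousUsers);

        for (int i = 0; i < simultaneousUsers; i++) {
            final int dealerId = i;
            pool.submit(() -> {
                sendRequest(dealerId);
                done.countDown();
            });
        }

        boolean finished = done.await(5, TimeUnit.MINUTES);  // did the system keep up?
        pool.shutdown();
        System.out.println(finished ? "all requests completed" : "requests timed out");
    }
}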
Pilot testing
• During the pilot test, also called the field test, the system is installed and used by a
selected set of users. Users exercise the system as if it had been permanently
installed. No explicit guidelines or test scenarios are given to the users.
• Pilot tests are useful when a system is built without a specific set of requirements
or without a specific customer in mind. In this case, a group of people is invited to
use the system for a limited time and to give their feedback to the developers.
• An alpha test is a pilot test with users exercising the system in the development
environment.
• In a beta test, the pilot test is performed by a limited number of end users in the
target environment.
Acceptance testing
• There are three ways the client evaluates a system during acceptance testing.
• In a benchmark test, the client prepares a set of test cases that represent typical
conditions under which the system should operate. Benchmark tests can be performed
with actual users or by a special test team exercising the system functions, but it is
important that the testers be familiar with the functional and nonfunctional
requirements so they can evaluate the system.
• In competitor testing, the new system is tested against an existing system or
competitor product.
• In shadow testing, a form of comparison testing, the new and the legacy systems are
run in parallel, and their outputs are compared.
• After acceptance testing, the client reports to the project manager which
requirements are not satisfied. Acceptance testing also gives the opportunity for a
dialog between the developers and client about conditions that have changed and
which requirements must be added, modified, or deleted because of the changes.
Installation testing
• After the system is accepted, it is installed in the target environment.
• A good system testing plan allows the easy reconfiguration of the system from the
development environment to the target environment. The desired outcome of the
installation test is that the installed system correctly addresses all requirements.
• In most cases, the installation test repeats the test cases executed during function
and performance testing in the target environment. Some requirements cannot be
executed in the development environment because they require target-specific
resources. To test these requirements, additional test cases have to be designed and
performed as part of the installation test.
• Once the customer is satisfied with the results of the installation test, system
testing is complete, and the system is formally delivered and ready for operation.
Managing Testing
Many testing activities occur near the end of the project, when resources are running
low and delivery pressure increases. Often, trade-offs must be made between the faults to be
repaired before delivery and those that can be deferred to a subsequent revision of the
system. In the end, however, developers should detect and repair a sufficient number
of faults such that the system meets functional and nonfunctional requirements to an
extent acceptable to the client.
Planning Testing
• Developers can reduce the cost of testing and the elapsed time necessary for its
completion through careful planning.
• Two key elements are to start the selection of test cases early and to parallelize tests.
• Developers responsible for testing can design test cases as soon as the models they
validate become stable.
• The second key element in shortening testing time is to parallelize testing activities.
Documenting Testing
• Testing activities are documented in four types of documents:
• The Test Plan focuses on the managerial aspects of testing. It documents the
scope, approach, resources, and schedule of testing activities. The
requirements and the components to be tested are identified in this document.
• Each test is documented by a Test Case Specification. This document
contains the inputs, drivers, stubs, and expected outputs of the tests, as well as
the tasks to be performed.
• Each execution of each test is documented by a Test Incident Report. The
actual results of the tests and differences from the expected output are
recorded.
• The Test Report Summary document lists all the failures discovered during
the tests that need to be investigated. From the Test Report Summary, the
developers analyze and prioritize each failure and plan for changes in the
system and in the models. These changes in turn can trigger new test cases and
new test executions.
The Test Plan (TP) and the Test Case Specifications (TCS) are written early in the process, as soon as the
test planning and each test case are completed. These documents are under configuration management
and updated as the system models change.
The Test Incident Report lists the actual test results and the failures that were experienced. The
description of the results must include which features were demonstrated and whether the features have
been met. If a failure has been experienced, the test incident report should contain sufficient information
to allow the failure to be reproduced.
Failures from all Test Incident Reports are collected and listed in the Test Report Summary and then
further analyzed and prioritized by the developers.
Assigning Responsibilities
• Testing requires developers to find faults in components of the system. This is best
done when the testing is performed by a developer who was not involved in the
development of the component under test, one who is less reluctant to break the
component being tested and who is more likely to find ambiguities in the
component specification.
• For stringent quality requirements, a separate team dedicated to quality control is
solely responsible for testing. The testing team is provided with the system
models, the source code, and the system for developing and executing test cases.
Test Incident Reports and Test Report Summaries are then sent back to the
subsystem teams for analysis and possible revision of the system.
• The revised system is then retested by the testing team, not only to check if the
original failures have been addressed, but also to ensure that no new faults have
been inserted in the system.
Regression Testing
• Changes to components can exercise different assumptions about the unchanged components,
leading to erroneous states. Integration tests that are rerun on the system to detect
such failures are called regression tests.
• The most robust and straightforward technique for regression testing is to accumulate
all integration tests and rerun them whenever new components are integrated into the
system. This requires developers to keep all tests up-to-date, to evolve them as the
subsystem interfaces change, and to add new integration tests as new services or new
subsystems are added. A minimal sketch of such an accumulated suite appears after the
list of techniques below.
• As regression testing can become time consuming, different techniques have been
developed for selecting specific regression tests.
• Such techniques include:
• Retest dependent components.
• Retest risky use cases.
• Retest frequent use cases.
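A minimal JUnit 4 sketch of the accumulated regression suite mentioned above; the nested test classes are hypothetical placeholders for real integration tests:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Every integration test class is added to the suite, and the whole suite
// is rerun after each integration.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    RegressionSuite.OrderSubsystemIT.class,
    RegressionSuite.BillingSubsystemIT.class
})
public class RegressionSuite {

    public static class OrderSubsystemIT {
        @Test
        public void orderCanBePlaced() {
            assertTrue(true);                // placeholder for a real integration test
        }
    }

    public static class BillingSubsystemIT {
        @Test
        public void invoiceIsGenerated() {
            assertTrue(true);                // placeholder for a real integration test
        }
    }
}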
Automating Testing
Manual testing requires a tester to feed predefined inputs into the system using
the user interface, a command line console, or a debugger.
The tester then compares the outputs generated by the system with the expected
outputs (the test oracle).
The repeatability of test execution is achieved through automation: the main benefit of
automating test execution is that tests can be rerun identically at any time.
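A minimal JUnit sketch of such an automated, repeatable test; the PriceCalculator class and its discount rule are hypothetical and defined inline only to keep the example self-contained:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The predefined input and the expected output are captured in code, so the
// check can be rerun identically on every build instead of by hand.
public class PriceCalculatorTest {

    static class PriceCalculator {
        double priceFor(int quantity, double unitPrice) {
            double total = quantity * unitPrice;
            return quantity >= 100 ? total * 0.9 : total;   // hypothetical 10% volume discount
        }
    }

    @Test
    public void discountIsAppliedToLargeOrders() {
        PriceCalculator calculator = new PriceCalculator();
        double actual = calculator.priceFor(100, 2.50);
        assertEquals(225.0, actual, 0.001);   // the expected value acts as the test oracle
    }
}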
Documenting Architecture: Architectural views
• Logical view: A high-level representation of a system's functionality and how its components
interact. It's typically represented using UML diagrams such as class diagrams, sequence
diagrams, and activity diagrams.
• Deployment view: Shows the distribution of processing across a set of nodes in the system,
including the physical distribution of processes and threads. This view focuses on aspects of the
system that are important after the system has been tested and is ready to go into live operation.
• Cloud security architecture: Describes the structures that protect the data, workloads,
containers, virtual machines and APIs within the cloud environment.
• Data architecture view: Addresses the concerns of database designers and database
administrators, and system engineers responsible for developing and integrating the various
database components of the system. Modern data architecture views data as a shared asset and
does not allow departmental data silos.
• Behavioral architecture view: Describes the dynamic behavior of the system. A behavioral
architecture model is an arrangement of functions and their sub-functions as well as their
interfaces (inputs and outputs).