
SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – IV - OBJECT ORIENTED ANALYSIS AND SYSTEM ENGINEERING - SCSA1401


UNIT 4
OBJECT ORIENTED DESIGN
Object Oriented Design Process - Object Oriented Design Axioms - Corollaries -
Designing Classes: Object Constraint Language - Process of Designing Class -
Class Visibility - Refining Attributes - Access Layer: Object Store and Persistence -
Database Management System - Logical and Physical Database Organization and
Access Control - Distributed Databases and Client Server Computing - Object
Oriented Database Management System – Object Relational Systems – Designing
Access Layer Classes - View Layer: Designing View Layer Classes - Macro Level
Process - Micro Level Process – Purpose of View Layer Interface - Prototyping the
user interface.

The Object Oriented Design Process and Design Axioms

Designing systems using self-contained objects and object classes


 To explain how a software design may be represented as a set of interacting
objects that manage their own state and operations

 To describe the activities in the object-oriented design process

 To introduce various models that describe an object-oriented design

 To show how the UML may be used to represent these models

Characteristics of OOD
 Objects are abstractions of real-world or system entities and manage themselves

 Objects are independent and encapsulate state and representation information.

 System functionality is expressed in terms of object services

 Shared data areas are eliminated. Objects communicate by message passing

 Objects may be distributed and may execute sequentially or in parallel
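These characteristics can be illustrated with a minimal sketch (Python is used purely for illustration; the class and attribute names are invented):

```python
class Account:
    """An abstraction of a real-world entity that manages its own state."""

    def __init__(self, balance=0.0):
        self._balance = balance  # state and representation are encapsulated

    def deposit(self, amount):
        # a "message" another object can send; no shared data area is involved
        self._balance += amount

    def balance(self):
        # system functionality expressed as an object service
        return self._balance

# Objects communicate by message passing, not by sharing data areas:
acct = Account()
acct.deposit(100.0)
print(acct.balance())
```

The caller never touches `_balance` directly; it only sends messages (`deposit`, `balance`) to the object's interface.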

THE OBJECT ORIENTED DESIGN PROCESS

During the design phase, the classes identified in object-oriented analysis must be
revisited with a shift in focus to their implementation. New classes, attributes, and
methods must be added for implementation purposes, including the user interface.
The object-oriented design process consists of the following activities (Fig 4.1):

1. Apply design axioms to design classes, their attributes, methods, associations,
structures, and protocols.
 Refine and complete the static UML class diagram by adding details to it.
This step consists of the following activities:
 Refine attributes.
 Design methods and protocols by utilizing a UML activity
diagram to represent each method's algorithm.
 Refine associations between classes (if required).
 Refine the class hierarchy and design with inheritance (if required).
 Iterate and refine again.
2. Design the access layer.
 Create mirror classes: for every business class identified and created,
create one access class.
 Identify access layer class relationships.
 Simplify classes and their relationships. The main goal is to eliminate
redundant classes and structures.
 Redundant classes: do not keep two classes that perform
similar translate-request and translate-results activities; simply select one and
eliminate the other.
 Method classes: revisit the classes that consist of only
one or two methods to see if they can be eliminated or combined with existing
classes.
 Iterate and refine again.
3. Design the view layer classes.
 Design the macro-level user interface, identifying view layer objects.
 Design the micro-level user interface, which includes these activities:
 Design the view layer objects by applying the design
axioms and corollaries.
 Build a prototype of the view layer interface.
 Test usability and user satisfaction.
 Iterate and refine.
4. Iterate and refine the whole design. Reapply the design axioms and, if needed,
repeat the preceding steps.

Fig 4.1 The object-oriented design process in the unified approach.
OBJECT-ORIENTED DESIGN AXIOMS

 The main focus of the analysis phase of software development is "what needs to be
done?"
 Objects discovered during analysis serve as the framework for design.
 A class's attributes, methods, and associations identified during analysis must be
designed for implementation as a data type expressed in the implementation
language.
 During the design phase, we elevate the model into logical entities, some of which
might relate more to the computer domain (such as user interface, or the access
layer) than the real world or the physical domain (such as people or employees).
Start thinking how to actually implement the problem in a program.
 The goal  to design the classes that we need to implement the system.
 Design is about producing a solution that meets the requirements that have been
specified during analysis.
 Analysis Versus Design.

 An axiom is a fundamental truth that always is observed to be valid and for
which there is no counterexample or exception.
 A theorem is a proposition that may not be self-evident but can be proven from
accepted axioms; therefore, it is equivalent to a law or principle.
 A theorem is valid if its referent axioms and deductive steps are valid.
 A corollary is a proposition that follows from an axiom or another proposition
that has been proven.
 Suh's design axioms, applied to OOD:
 Axiom 1: The independence axiom. Maintain the independence of
components.
 Axiom 2: The information axiom. Minimize the information content of
the design.
 Axiom 1 states that, during the design process, as we go from requirements and
use cases to a system component, each component must satisfy that requirement
without affecting other requirements.
 Axiom 2 is concerned with simplicity. It relies on a general rule known as Occam's
razor.
 Occam’s razor rule of simplicity in OO terms :
The best designs usually involve the least complex code but not necessarily
the fewest number of classes or methods. Minimizing complexity should be the
goal, because that produces the most easily maintained and enhanced application.
In an object-oriented system, the best way to minimize complexity is to use
inheritance and the system’s built-in classes and to add as little as possible to what
already is there.
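In that spirit, a hypothetical sketch of "add as little as possible to what already is there": rather than writing a new collection class from scratch, inherit from one the language already provides (here, Python's built-in `Counter` serves as a ready-made bag of elements; the `Bag` name and method are invented):

```python
from collections import Counter

# Instead of designing a brand-new Bag class with its own storage and
# counting logic, inherit from the built-in Counter and add only the
# one method we actually need -- minimal new code, minimal complexity.
class Bag(Counter):
    def occurrences_of(self, element):
        return self[element]

bag = Bag()
bag.update(["red", "blue", "red"])
print(bag.occurrences_of("red"))  # the built-in class does the counting
```

All of `Counter`'s behavior (updating, iteration, missing elements counting as zero) is reused for free; only one small method is new.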

COROLLARIES

From the two design axioms, many corollaries may be derived as a direct
consequence of the axioms. These corollaries may be more useful in making specific
design decisions, since they can be applied to actual situations more easily than the
original axioms. They even may be called design rules, and all are derived from the two
basic axioms (see Figure 4.2 ):

Fig 4.2 The origin of corollaries. Corollaries 1, 2, and 3 are from both axioms,
whereas Corollary 4 is from axiom 1 and corollaries 5 and 6 are from axiom 2.
 Corollary 1. Uncoupled design with less information content. Highly cohesive
objects can improve coupling because only a minimal amount of essential
information need be passed between objects.
 Corollary 2. Single purpose. Each class must have a single, clearly defined
purpose. When you document, you should be able to easily describe the purpose
of a class in a few sentences.
 Corollary 3. Large number of simple classes. Keeping the classes simple allows
reusability.
 Corollary 4. Strong mapping. There must be a strong association between the
physical system (analysis objects) and the logical design (design objects).
 Corollary 5. Standardization. Promote standardization by designing
interchangeable components and reusing existing classes or components.
 Corollary 6. Design with inheritance. Common behavior (methods) must be
moved to super classes. The super class-subclass structure must make logical
sense.

Corollary 1. Uncoupled Design with Less Information Content

 Highly cohesive objects can improve coupling because only a minimal amount of
essential information need be passed between objects.
 The main goal is to maximize cohesiveness among objects and software
components in order to improve coupling, because only a minimal amount of
essential information need be passed between components.

Coupling

 Coupling is a measure of the strength of association established by a


connection from one object or software component to another.
 Coupling is a binary relationship: A is coupled with B. Coupling is
important when evaluating a design because it helps us focus on an
important issue in design. For example, a change to one component of a system
should have a minimal impact on other components.
 Strong coupling among objects complicates a system, since the class is harder
to understand or highly interrelated with other classes. The degree of coupling is
a function of
1. How complicated the connection is.
2. Whether the connection refers to the object itself or something inside it.
3. What is being sent or received.

 The degree, or strength, of coupling between two components is measured by the


amount and complexity of information transmitted between them.
 Coupling increases ( becomes stronger) with increasing complexity or
obscurity of the interface.
 Coupling decreases ( becomes lower) when the connection is to the component
interface rather than to an internal component.
 Coupling also is lower for data connections than for control connections.
 Object-oriented design has two types of coupling: interaction coupling and
inheritance coupling.

Interaction coupling: the amount and complexity of messages between components.
 It is desirable to have little interaction.
 Coupling also applies to the complexity of the message.
 The general guideline is to keep messages as simple and
infrequent as possible.
 Minimize the number of messages sent and received by an object.
 For the types of coupling among objects or components, refer to Table 9-1.
In general, if a message connection involves more than three parameters (e.g., in
Method(X, Y, Z), the X, Y, and Z are parameters), examine it to see if it can be
simplified. It has been documented that objects connected by many very complex
messages are tightly coupled, meaning any change to one invariably leads to a ripple
effect of changes in others (see Figure 4.3).

Fig 4.3 E is a tightly coupled object.

In addition to minimizing the complexity of message connections, also reduce the
number of messages sent and received by an object. Table 9-1 contains different types of
interaction couplings.
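The guideline about message connections with more than three parameters can be sketched as follows (the function and class names are hypothetical):

```python
from dataclasses import dataclass
import datetime

# Tightly coupled message: the caller must know, order, and supply many
# separate values -- Method(X, Y, Z, ...) with too many parameters.
def schedule_payment(account_no, amount, currency, due_day, due_month, due_year):
    return f"{account_no}: {amount} {currency} on {due_year}-{due_month:02d}-{due_day:02d}"

# Simplified interface: group the related values into one object, so the
# message carries a single, coherent parameter.
@dataclass
class PaymentOrder:
    account_no: str
    amount: float
    currency: str
    due: datetime.date

def schedule(order: PaymentOrder):
    return f"{order.account_no}: {order.amount} {order.currency} on {order.due.isoformat()}"
```

Both produce the same result, but a change to how due dates are represented now ripples only through `PaymentOrder`, not through every caller's argument list.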
 Inheritance coupling: coupling between superclasses and subclasses.
 A subclass is coupled to its superclass in terms of attributes and methods.
 High inheritance coupling is desirable.
 Each specialization class should not inherit lots of unrelated and unneeded
methods and attributes.

Cohesion
 Cohesion deals with interaction within a single object or software component.
 Cohesion reflects the "single-purposeness" of an object (see Corollaries 2
and 3).
 Highly cohesive components can lower coupling because only a minimum of
essential information need be passed between components.
 Cohesion also helps in designing classes that have very specific goals and
clearly defined purposes.
 Method cohesion: a method should carry only one function. A method that
carries multiple functions is undesirable.
 Class cohesion means that all of a class's methods and attributes must be highly
cohesive, meaning they are used by internal methods or derived classes' methods.
 Inheritance cohesion is concerned with the following questions: 1. How
interrelated are the classes? 2. Does specialization really portray specialization, or
is it just something arbitrary? See Corollary 6, which also addresses these
questions.
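Method and class cohesion can be sketched like this (a hypothetical example; the class names are invented):

```python
class ReportPrinter:
    # Low method cohesion: one method formats, totals, AND assembles output.
    def do_everything(self, items):
        total = sum(items)
        lines = [f"item: {i}" for i in items]
        return "\n".join(lines + [f"total: {total}"])

class CohesiveReport:
    """Each method carries only one function (method cohesion); every
    attribute is used by the class's own methods (class cohesion)."""

    def __init__(self, items):
        self._items = list(items)

    def total(self):
        # one service: compute the total
        return sum(self._items)

    def formatted_lines(self):
        # one service: format the items
        return [f"item: {i}" for i in self._items]
```

The cohesive version lets a caller reuse `total()` without dragging in formatting, and vice versa.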
Corollary 2. Single Purpose
Each class must have a purpose. Every class should be clearly defined and
necessary in the context of achieving the system's goals.
When you document a class, you should be able to easily explain its purpose in a
sentence or two.
If you cannot, then rethink the class and try to subdivide it into more independent
pieces. In summary, keep it simple; to be more precise, each method must provide only
one service.
Each method should be of moderate size, no more than a page; half a page is
better.

Corollary 3. Large Number of Simpler Classes, Reusability

 Keeping the classes simple allows reusability


 A class that easily can be understood and reused (or inherited) contributes to the
overall system
 Complex & poorly designed class usually cannot be reused
 Guideline: the smaller your classes, the better your chances of reusing
them in other projects. Large and complex classes are too specialized to be reused.
 The emphasis OOD places on encapsulation, modularization, and polymorphism
suggests reuse rather than building anew.
 The primary benefit of software reusability is higher productivity.
Coad and Yourdon describe four reasons why people are not utilizing this
concept:
1. Software engineering textbooks teach new practitioners to build systems from "first
principles"; reusability is not promoted or even discussed.
2. The "not invented here" syndrome and the intellectual challenge of solving an
interesting software problem in one's own unique way militate against reusing someone
else's software component.
3. Unsuccessful experiences with software reusability in the past have convinced many
practitioners and development managers that the concept is not practical.
4. Most organizations provide no reward for reusability; sometimes productivity is
measured in terms of new lines of code written plus a discounted credit (e.g., 50 percent
less credit) for reused lines of code.

Corollary 4. Strong Mapping

 Object-oriented analysis and object-oriented design are based on the same model.
 As the model progresses from analysis to implementation, more detail is added, but
it remains essentially the same. For example, during analysis we might identify a
class Employee.
 During the design phase, we need to design this class: design its methods, its
associations with other objects, and its view and access classes.
 A strong mapping links classes identified during analysis and classes designed
during the design phase (e.g., view and access classes).

Corollary 5. Standardization

To reuse classes, you must have a good understanding of the classes in the object-
oriented programming environment you are using. Most object-oriented systems, such as
Smalltalk, Java, C++, or PowerBuilder, come with several built-in class libraries.
Similarly, object-oriented systems are like organic systems, meaning that they grow
as you create new applications.
The knowledge of existing classes will help you determine what new classes
are needed to accomplish the tasks and where you might inherit useful behavior
rather than reinvent the wheel. However, class libraries are not always well
documented or, worse yet, they are documented but not up to date.
The concept of design patterns might provide a way to capture the design
knowledge, document it, and store it in a repository that can be shared and reused
in different applications.

Corollary 6. Designing with Inheritance

When you implement a class, you have to determine its ancestor, what attributes it
will have, and what messages it will understand. Then, you have to construct its methods
and protocols. Ideally, you will choose inheritance to minimize the amount of program
instructions. Satisfying these constraints sometimes means that a class inherits from a
superclass that may not be obvious at first glance.
For example, say, you are developing an application for the government that
manages the licensing procedure for a variety of regulated entities. To simplify the
example, focus on just two types of entities: motor vehicles and restaurants. Therefore,
identifying classes is straightforward. All goes well as you begin to model these two
portions of class hierarchy. Assuming that the system has no existing classes similar to a
restaurant or a motor vehicle, you develop two classes, MotorVehicle and Restaurant.
Subclasses of the MotorVehicle class are PrivateVehicle and
CommercialVehicle. These are further subdivided into whatever level of specificity
seems appropriate (see Figure 4.4).

Fig 4.4 The initial single inheritance design.


Subclasses of Restaurant are designed to reflect their own licensing procedures.
This is a simple, easy-to-understand design.
In any case, the design is approved, implementation is accomplished, and the
system goes into production.

Then a new kind of entity must be licensed: food trucks, which are at once motor
vehicles and food-serving establishments. You know you need to redesign the
application, but redesign how? The answer depends greatly on the inheritance
mechanisms supported by the system's target language. If the language supports single
inheritance exclusively, the choices are somewhat limited. You can choose to define a
formal superclass of both MotorVehicle and Restaurant, License, and move common
methods and attributes from both classes into this License class (see Figure 4.5).

Fig 4.5 The single inheritance design modified to allow licensing food trucks.
Achieving Multiple Inheritance in a Single Inheritance System

Single inheritance means that each class has only a single superclass. This
technique is used in Smalltalk and several other object-oriented systems. One result of
using a single inheritance hierarchy is the absence of ambiguity as to how an object
will respond to a given method; you simply trace up the class tree beginning with
the object's class, looking for a method of the same name.
However, languages like LISP or C++ have a multiple inheritance scheme,
whereby objects can inherit behavior from unrelated areas of the class tree. This could be
desirable when you want a new class to behave similarly to more than one existing class.
However, multiple inheritance brings with it some complications, such as how to
determine which behavior to get from which class, particularly when several ancestors
define the same method. It also is more difficult to understand programs written in a
multiple inheritance system.
One way of achieving the benefits of multiple inheritance in a language with
single inheritance is to inherit from the most appropriate class and add an object of
another class as an attribute or aggregation. Therefore, as class designer, you have two
ways to borrow existing functionality in a class. One is to inherit it, and the other is to use
the instance of the class (object) as an attribute. This approach is described in the next
section.
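The approach above can be sketched as follows, using the chapter's licensing example (the class bodies are invented for illustration):

```python
class MotorVehicle:
    def road_tax(self):
        return 100.0

class Restaurant:
    def health_inspection_due(self):
        return True

# A food truck needs behavior from both classes. In a single inheritance
# system, inherit from the most appropriate class (MotorVehicle) and add
# an object of the other class as an attribute (aggregation).
class FoodTruck(MotorVehicle):
    def __init__(self):
        self.restaurant = Restaurant()  # borrowed expertise, not inheritance

    def health_inspection_due(self):
        # delegate the restaurant-like behavior to the aggregated object
        return self.restaurant.health_inspection_due()
```

`FoodTruck` is-a `MotorVehicle` but merely uses a `Restaurant`, which matches the "borrow some expertise" case discussed below.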
Avoiding Inheriting Inappropriate Behaviors

Beginners in an object-oriented system frequently err by designing subclasses that
inherit from inappropriate superclasses. Before a class inherits, ask the following
questions:
 Is the subclass fundamentally similar to its superclass (high inheritance
coupling)?
 Is it an entirely new thing that simply wants to borrow some expertise
from its superclass (low inheritance coupling)?
Often you will find that the latter is true, and if so, you should add an attribute that
incorporates the proposed superclass's behavior rather than an inheritance from the
superclass.

DESIGN PATTERNS

 Design patterns provide a scheme for refining the subsystems or components of a
software system, or the relationships among them.
 They are devices that allow systems to share knowledge about their design, by
describing commonly recurring structures of communicating components that
solve a general design problem within a particular context.
 The main idea is to provide documentation to help categorize and communicate
about solutions to recurring problems.
 A pattern has a name, to facilitate discussion of it and the information it
represents.

For an example, refer to the book, page 212.


Designing Classes
Object-oriented design requires taking the objects identified during object-
oriented analysis and designing classes to represent them. As a class designer, you have
to know the specifics of the class you are designing and be aware of how that class
interacts with other classes. Once you have identified your classes and their interactions,
you are ready to design classes.
Underlying the functionality of any application is the quality of its design.
Advantages of OOD:
 Easier maintenance: objects may be understood as stand-alone entities.
 Objects are appropriate reusable components.
 For some systems, there may be an obvious mapping from real-world entities to
system objects.

Object-oriented development:
Object-oriented analysis, design, and programming are related but distinct.
OOA is concerned with developing an object model of the application domain. OOD is
concerned with developing an object-oriented system model to implement the
requirements. OOP is concerned with realising an OOD using an OO programming
language such as Java or C++.

Objects and object classes:

Objects are entities in a software system which represent instances of real-world
and system entities. Object classes are templates for objects; they may be used to create
objects. Object classes may inherit attributes and services from other object classes.

Objects:
An object is an entity which has a state and a defined set of operations
which operate on that state. The state is represented as a set of object attributes. The
operations associated with the object provide services to other objects (clients), which
request these services when some computation is required. Objects are created according
to some object class definition. An object class definition serves as a template for objects.
It includes declarations of all the attributes and services which should be associated with
an object of that class.
UML OBJECT CONSTRAINT LANGUAGE

The UML is a graphical language with a set of rules and semantics. The rules and
semantics of the UML are expressed in English, in a form known as the Object
Constraint Language (OCL). OCL is a specification language that uses simple logic
for specifying the properties of a system.
Many UML modeling constructs require expression: for example, there are expressions
for types, Boolean values, and numbers.
Expressions are stated as strings in OCL. The syntax for some common
navigational expressions is shown here. These forms can be chained together. The
leftmost element must be an expression for an object or a set of objects. The expressions
are meant to work on sets of values when applicable.

 Item.selector: The selector is the name of an attribute in the item. The
result is the value of the attribute. For example, John.age (age is an attribute of
the object John, and John.age represents the value of the attribute).
 Item.selector[qualifier-value]: The selector indicates a qualified
association that qualifies the item. The result is the related object selected
by the qualifier. For example, John.phone[3], assuming John has several phones.
 Set->select(boolean-expression): The Boolean expression is written in
terms of objects within the set. The result is the subset of objects in the set
for which the Boolean expression is true.
For example, company.employee->select(salary > 50000) represents employees
with salaries over $50,000.
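The three navigational forms might be mirrored in ordinary code roughly like this (a sketch; the Person class and its data are invented for illustration):

```python
class Person:
    def __init__(self, name, age, salary, phones):
        self.name, self.age, self.salary = name, age, salary
        self.phones = phones  # qualified association: phone selected by a key

john = Person("John", 30, 60000, {1: "555-0101", 3: "555-0303"})
employees = [john, Person("Ann", 40, 45000, {})]

# item.selector              -> John.age
print(john.age)
# item.selector[qualifier]   -> John.phone[3]
print(john.phones[3])
# set->select(boolean-expr)  -> company.employee->select(salary > 50000)
well_paid = [e for e in employees if e.salary > 50000]
print([e.name for e in well_paid])
```

The list comprehension plays the role of OCL's `select`: it yields the subset of the set for which the Boolean expression is true.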

DESIGNING CLASSES: THE PROCESS

In this section we concentrate on step 1 of the design process described in Chapter 9,
which consists of the following activities:

1. Apply design axioms to design classes, their attributes, methods, associations,
structures, and protocols.
 Refine and complete the static UML class diagram by adding details to it.
This step consists of the following activities:
 Refine attributes.
 Design methods and protocols by utilizing a UML activity
diagram to represent each method's algorithm.
 Refine associations between classes (if required).
 Refine the class hierarchy and design with inheritance (if required).
 Iterate and refine again.

O-O design is an iterative process. After all, design is as much about discovery as
construction.
CLASS VISIBILITY: DESIGNING WELL-DEFINED PUBLIC, PRIVATE,
AND PROTECTED PROTOCOLS

In designing methods or attributes for classes, you are confronted with two
problems. One is the protocol, or interface to the class operations and its visibility; and
the other is how it is implemented.
Often the two have very little to do with each other. For example, you might have
a class Bag for collecting various objects that counts multiple occurrences of its elements.
One implementation decision might be that the Bag class uses another class, say,
Dictionary (assuming that we have a class Dictionary), to actually hold its elements. Bags
and dictionaries have very little in common, so this may seem curious to the outside
world. Implementation, by definition, is hidden and off limits to other objects.
The class's protocol, or the messages that a class understands, on the other hand,
can be hidden from other objects (private protocol) or made available to other objects
(public protocol).
Public protocols define the functionality and external messages of an object;
private protocols define the implementation of an object.

A class also might have a set of methods that it uses only internally, messages to
itself. This, the private protocol (visibility) of the class, includes messages that normally
should not be sent from other objects; it is accessible only to operations of that class. In
private protocol, only the class itself can use the method.
The public protocol (visibility) defines the stated behavior of the class as a citizen
in a population and is important information for users as well as future descendants, so it
is accessible to all classes.
If the methods or attributes can be used by the class itself or its subclasses, a
protected protocol can be used.
In a protected protocol (visibility), subclasses can use the method in addition
to the class itself.
Lack of a well-designed protocol can manifest itself as encapsulation leakage. The
problem of encapsulation leakage occurs when details about a class's internal
implementation are disclosed through the interface. As more internal details become
visible, the flexibility to make changes in the future decreases. If an implementation is
completely open, almost no flexibility is retained for future changes, so what is exposed
through the interface must be carefully controlled. However, do not make such a decision
lightly, because it could impact the flexibility and therefore the quality of the design.

PRIVATE AND PROTECTED PROTOCOL LAYERS: INTERNAL

Items in these layers define the implementation of the object. Apply the design
axioms and corollaries, especially Corollary 1 (uncoupled design with less information
content, see Chapter 9) to decide what should be private: what attributes (instance
variables)? What methods? Remember, highly cohesive objects can improve coupling
because only a minimal amount of essential information need be passed between objects.
PUBLIC PROTOCOL LAYER: EXTERNAL

Items in this layer define the functionality of the object. Here are some things to
keep in mind when designing class protocols:
 Good design allows for polymorphism.
 Not all protocol should be public; again, apply the design axioms and corollaries.
The following key questions must be answered:


 What are the class interfaces and protocols?
 What public (external) protocol will be used or what external messages
must the system understand?
 What private or protected (internal) protocol will be used or what internal
messages or messages from a subclass must the system understand?
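The three protocol layers can be sketched as follows. Note that Python enforces visibility only by convention (a single leading underscore for protected) and by name mangling (a double leading underscore for private), unlike the strict access control of C++ or Java; the class itself is invented:

```python
class Timer:
    def start(self):
        # public protocol: an external message any object may send
        self._reset()
        return "started"

    def _reset(self):
        # protected protocol (by convention): for this class and its subclasses
        self.__ticks = 0  # double underscore: name-mangled, effectively private

    def ticks(self):
        # public accessor to the private state
        return self.__ticks

class CountdownTimer(Timer):
    def restart(self):
        # a subclass may use the protected method in addition to the class itself
        self._reset()
        return "restarted"
```

Outside callers use only `start`, `restart`, and `ticks`; the private `__ticks` attribute is hidden behind the interface, avoiding encapsulation leakage.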

DESIGNING CLASSES: REFINING ATTRIBUTES

Attributes identified in object-oriented analysis must be refined with an eye on


implementation during this phase. In the analysis phase, the name of the attribute was
sufficient. However, in the design phase, detailed information must be added to the model
(especially, that defining the class attributes and operations).
The main goal of this activity is to refine existing attributes (identified in analysis)
or to add attributes that are needed to carry the system into implementation.

Attribute Types

The three basic types of attributes are


1. Single-value attributes.
2. Multiplicity or multivalue attributes.
3. Reference to another object, or instance connection.

Attributes represent the state of an object. When the state of the object changes,
these changes are reflected in the value of attributes. The single-value attribute is the
most common attribute type. It has only one value or state. For example, attributes such
as name, address, or salary are of the single-value type.

The multiplicity or multivalue attribute is the opposite of the single-value attribute
since, as its name implies, it can have a collection of many values at any point in time.
For example, if we want to keep track of the names of people who have called a customer
support line for help, we must use a multivalue attribute.

Instance connection attributes are required to provide the mapping needed by an
object to fulfill its responsibilities; in other words, instance connections model
associations. For example, a person might have one or more bank accounts. A person has
zero-to-many instance connections to Account(s). Similarly, an Account can be assigned
to one or more persons (i.e., a joint account). Therefore, an Account also has zero-to-many
instance connections to Person(s).
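The three attribute types can be sketched in one pair of classes (the Person/Account names echo the example above; the details are invented):

```python
class Account:
    def __init__(self, number):
        self.number = number   # single-value attribute
        self.holders = []      # instance connections back to Person(s)

class Person:
    def __init__(self, name):
        self.name = name           # 1. single-value attribute: one state at a time
        self.phone_numbers = []    # 2. multivalue attribute: a collection of values
        self.accounts = []         # 3. instance connection: references to Account objects

    def open_account(self, account):
        # a zero-to-many association, maintained in both directions
        self.accounts.append(account)
        account.holders.append(self)
```

The `accounts`/`holders` pair is the instance connection: each object holds the references it needs to fulfill its responsibilities.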
UML Attribute Presentation

OCL can be used during the design phase to define the class attributes. The
following is the attribute presentation suggested by UML:

visibility name : type-expression = initial-value

where visibility is one of the following:

+ public visibility (accessibility to all classes).
# protected visibility (accessibility to subclasses and operations of the class).
- private visibility (accessibility only to operations of the class).

Type-expression is a language-dependent specification of the implementation type
of an attribute.
Initial-value is a language-dependent expression for the initial value of a newly
created object. The initial value is optional. For example, + size: length = 100

The UML style guidelines recommend beginning attribute names with a


lowercase letter.
In the absence of a multiplicity indicator (array), an attribute holds exactly one
value. Multiplicity may be indicated by placing a multiplicity indicator in brackets after
attribute name; for example,
names[lO]: String
points[2.. *]: Point

The multiplicity of 0..1 provides the possibility of null values: the absence of a
value, as opposed to a particular value from the range. For example, the following
declaration permits a distinction between the null value and an empty string:
name[0..1]: String
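As a rough mapping of this attribute notation into code (a sketch; Python enforces no visibility, so the +, #, and - markers are shown only through naming conventions):

```python
class Shape:
    # UML: + size: int = 100   (public visibility, implementation type int, initial value 100)
    # UML: # names[10]: str    (protected, multiplicity indicator: a collection of values)
    # UML: - owner[0..1]: str  (private, 0..1 permits a null value, i.e. None)
    def __init__(self):
        self.size = 100
        self._names = []          # convention: one leading underscore ~ protected (#)
        self.__owner = None       # convention: two leading underscores ~ private (-)

    def owner_is_set(self):
        """None (the null value) is distinct from a particular value such as ''."""
        return self.__owner is not None

s = Shape()
```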

REFINING ATTRIBUTES FOR THE VIANET BANK OBJECTS

In this section, we go through the ViaNet bank ATM system classes and refine the
attributes identified during object-oriented analysis.

Refining Attributes for the BankClient Class

During object-oriented analysis, we identified the following attributes:


firstName
lastName
pinNumber
cardNumber

At this stage, we need to add more information to these attributes, such as


visibility and implementation type. Furthermore, additional attributes can be identified
during this phase to enable implementation of the class:
#firstName: String
#lastName: String
#pinNumber: String
#cardNumber: String
#account: Account (instance connection)

In Chapter 8 we identified an association between the BankClient and the


Account classes. (see Figure 3.27 ). To design this association, we need to add an account
attribute of type Account, since the BankClient needs to know about his or her account
and this attribute can provide such information for the BankClient class. This is an
example of instance connection, where it represents the association between the
BankClient and the Account objects. All the attributes have been given protected
visibility.

Refining Attributes for the Account Class


Here is the refined list of attributes for the Account class:
#number: String
#balance: float
#transaction: Transaction (This attribute is needed for implementing the association
between the Account and Transaction classes.)
#bankClient: BankClient (This attribute is needed for implementing the association
between the Account and BankClient classes.)
At this point we must make the Account class very general, so that it can be
reused by the checking and savings accounts.
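The refined attributes can be written down directly as a class sketch (assumptions: Python is used for illustration, a leading underscore stands in for the # protected visibility, and the constructors and link helper are invented):

```python
class Account:
    def __init__(self, number, balance=0.0):
        self._number = number            # #number: String
        self._balance = balance          # #balance: float
        self._transaction = None         # #transaction: Transaction (association)
        self._bankClient = None          # #bankClient: BankClient (association)

class BankClient:
    def __init__(self, firstName, lastName, pinNumber, cardNumber):
        self._firstName = firstName      # #firstName: String
        self._lastName = lastName        # #lastName: String
        self._pinNumber = pinNumber      # #pinNumber: String
        self._cardNumber = cardNumber    # #cardNumber: String
        self._account = None             # #account: Account (instance connection)

def link(client, account):
    """Realize the BankClient <-> Account association in both directions."""
    client._account = account
    account._bankClient = client

client = BankClient("Pat", "Lee", "1234", "5550001")
acct = Account("CH-1", 50.0)
link(client, acct)
```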

Refining Attributes for the Transaction Class


The attributes for the Transaction class are these:
#transID: String
#transDate: Date
#transTime: Time
#transType: String
#amount: float
#postBalance: float

Refining Attributes for the ATMMachine Class


The ATMMachine class could have the following attributes:
#address: String
#state: String
Refining Attributes for the CheckingAccount Class
Add the savings attribute to the class. The purpose of this attribute is to
implement the association between the CheckingAccount and SavingsAccount classes.
Refining Attributes for the SavingsAccount Class
Add the checking attribute to the class. The purpose of this attribute is to
implement the association between the SavingsAccount and CheckingAccount classes.
Figure 10-1 shows a more complete UML class diagram for the bank system. At this
stage, we also need to add a very short description of each attribute or certain attribute
constraints. For example, for Class ATMMachine:
#address: String (The address for this ATM machine.)
#state: String (The state of operation for this ATM machine, such as running,
off, idle, out of money, security alarm.)

Fig 4.6 A more complete UML class diagram for the ViaNet bank system.

DESIGNING METHODS AND PROTOCOLS

The main goal of this activity is to specify the algorithm for methods identified so
far. Once you have designed your methods in some formal structure such as UML
activity diagrams with an OCL description, they can be converted to programming
language manually or in automated fashion.

A class can provide several types of methods:


 Constructor. Method that creates instances (objects) of the class.
 Destructor. The method that destroys instances.
 Conversion method. The method that converts a value from one unit of measure to
another.
 Copy method. The method that copies the contents of one instance to another
instance.
 Attribute set. The method that sets the values of one or more attributes.
 Attribute get. The method that returns the values of one or more attributes.
 I/O methods. The methods that provide or receive data to or from a device.
 Domain specific. The method specific to the application.
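Several of these method types can be seen together in one small class (a sketch; the Temperature example and its names are invented, and Python reclaims instances automatically, so no explicit destructor is shown):

```python
class Temperature:
    def __init__(self, celsius=0.0):          # constructor: creates instances
        self._celsius = celsius

    def get_celsius(self):                    # attribute get: returns an attribute value
        return self._celsius

    def set_celsius(self, value):             # attribute set: sets an attribute value
        self._celsius = value

    def to_fahrenheit(self):                  # conversion method: one unit of measure to another
        return self._celsius * 9.0 / 5.0 + 32.0

    def copy_from(self, other):               # copy method: copies one instance's contents
        self._celsius = other._celsius

t1 = Temperature(100.0)
t2 = Temperature()
t2.copy_from(t1)
```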

Corollary 1 states that, in designing methods and protocols, you must minimize the
complexity of message connections and keep as low as possible the number of messages
sent and received by an object. Your goal should be to maximize cohesiveness among
objects and software components in order to improve coupling, because only a minimal amount of
essential information should be passed between components. Abstraction leads to
simplicity and straightforwardness and, at the same time, increases class versatility. The
requirement of simplification, while retaining functionality, seems to lead to increased
utility. Here are five rules:
1. If it looks messy, then it's probably a bad design.
2. If it is too complex, then it's probably a bad design.
3. If it is too big, then it's probably a bad design.
4. If people don't like it, then it's probably a bad design.
5. If it doesn't work, then it's probably a bad design.

DESIGN ISSUES: AVOIDING DESIGN PITFALLS

As described it is important to apply design axioms to avoid common design


problems and pitfalls. For example, we learned that it is much better to have a large set
of simple classes than a few large, complex classes.
A common occurrence is that, in your first attempt, your class might be too big
and therefore more complex than it needs to be. Take the time to apply the design
axioms and corollaries, then critique what you have proposed. You may find you
can gather common pieces of expertise from several classes, which in itself becomes
another "peer" class that the others consult; or you might be able to create a
superclass for several classes that gathers in a single place very similar code. Your
goal should be maximum reuse of what you have to avoid creating new classes as much
as possible.
Take the time to think in this way-good news, this gets easier over time. Lost
object focus is another problem with class definitions.
A meaningful class definition starts out simple and clean but, as time goes on
and changes are made, becomes larger and larger, with the class identity becoming
harder to state concisely (Corollary 2). This happens when you keep making
incremental changes to an existing class. If the class does not quite handle a situation,
someone adds a tweak to its description. When the next problem comes up, another tweak
is added. Or, when a new feature is requested, another tweak is added, and so on.
Apply the design axioms and corollaries, such as Corollary 2 (which states
that each class must have a single, clearly defined purpose). When you document,
you easily should be able to describe the purpose of a class in a few sentences.

Some possible actions to solve this problem are these:


*Keep a careful eye on the class design and make sure that an object's role remains well
defined. If an object loses focus, you need to modify the design. Apply Corollary 2 (single
purpose).
*Move some functions into new classes that the object would use. Apply Corollary 1
(uncoupled design with less information content).
*Break up the class into two or three classes. Apply Corollary 3 (large number of simple classes).
*Rethink the class definition based on experience gained.
UML Operation Presentation

The following operation presentation has been suggested by the UML. The
operation syntax is this:

visibility name (parameter-list): return-type-expression

Where visibility is one of the following:


+ public visibility (accessibility to all classes).
# protected visibility (accessibility to subclasses and operations of the class).
- private visibility (accessibility only to operations of the class).

Here, the name is the name of the operation.
The parameter list is a list of parameters, separated by commas, each specified by
name: type-expression = default-value.
The return-type-expression is a language-dependent specification of the
implementation of the value returned by the method. If the return type is omitted, the
operation does not return a value.
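As a rough example, the UML operation + deposit(amount: float = 0.0): float could map to code like this (the signature is invented for illustration; Python is used, so visibility is by convention only):

```python
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance

    # UML: + deposit(amount: float = 0.0): float
    #   visibility '+' -> public; one parameter with a default value; returns float
    def deposit(self, amount=0.0):
        self._balance += amount
        return self._balance

    # UML: # audit()  (return type omitted -> the operation does not return a value)
    def _audit(self):
        pass

a = Account()
new_balance = a.deposit(25.0)
```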

DESIGNING METHODS FOR THE VIANET BANK OBJECTS.

Fig 4.7 An activity diagram for the BankClient class verifyPassword method, using OCL
to describe the diagram. The syntax for describing a class's method is
ClassName::methodName.
BankClient Class VerifyPassword Method

The following describes the verifyPassword service in greater detail. A client PIN
code is sent from the ATMMachine object and used as an argument in the
verifyPassword method. The verifyPassword method retrieves the client record and
checks the entered PIN number against the client's PIN number. If they match, it allows the user to
proceed. Otherwise, a message sent to the ATMMachine displays "Incorrect PIN, please
try again" (see Figure 4.7).
The verifyPassword method first creates a bank client object and
attempts to retrieve the client data based on the supplied card and PIN numbers. At this
stage, we realize that we need to have another method, retrieveClient. The
retrieveClient method takes two arguments, the card number and a PIN number, and
returns the client object or "nil" if the password is not valid. We postpone the design of
the retrieveClient method to the next chapter.
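The verifyPassword design can be sketched as follows (a hedged sketch: the client store is faked with an in-memory dictionary because the real retrieveClient design is postponed to the access layer):

```python
# Stand-in client store; in the real system retrieveClient queries the access layer.
CLIENTS = {("5550001", "1234"): {"name": "Pat Lee"}}

def retrieveClient(cardNumber, pinNumber):
    """Returns the client record, or None ('nil') if the password is not valid."""
    return CLIENTS.get((cardNumber, pinNumber))

def verifyPassword(cardNumber, pinNumber):
    client = retrieveClient(cardNumber, pinNumber)
    if client is None:
        # message displayed by the ATMMachine object
        return "Incorrect PIN, please try again"
    return client                       # match: the user may proceed

ok = verifyPassword("5550001", "1234")
bad = verifyPassword("5550001", "9999")
```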

Account Class Deposit Method


The following describes the deposit service in greater detail. An amount to be
deposited is sent to an account object and used as an argument to the deposit service. The
account adjusts its balance to its current balance plus the deposit amount. The account
object records the deposit by creating a transaction object containing the date and time,
posted balance, and transaction type and amount (see Figure 4.8).

Once again we have discovered another method, updateClient. This method, as
the name suggests, updates client data. We postpone the design of the updateClient
method to the access layer (designing the access layer classes).
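The deposit design above can be sketched as follows (a simplified sketch: the Transaction record is a plain dictionary, the constructor is invented, and updateClient is left as a stub because its design is postponed to the access layer):

```python
from datetime import datetime

class Account:
    def __init__(self, balance=0.0):
        self._balance = balance
        self._transactions = []

    def createTransaction(self, transType, amount, postBalance):
        # record the date and time, posted balance, and transaction type and amount
        self._transactions.append({
            "when": datetime.now(), "type": transType,
            "amount": amount, "postBalance": postBalance})

    def deposit(self, amount):
        self._balance += amount                  # adjust balance by the deposit amount
        self.createTransaction("deposit", amount, self._balance)
        self.updateClient()                      # design postponed to the access layer
        return self._balance

    def updateClient(self):
        pass                                     # stub: access layer concern

acct = Account(100.0)
acct.deposit(40.0)
```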

Fig 4.8 An activity diagram for the Account class deposit method.

Account class withdraw method


This is the generic withdrawal method that simply withdraws funds if they are
available. It is designed to be inherited by the CheckingAccount and SavingsAccount
classes to implement automatic funds transfer.
The following describes the withdraw method. An amount to be withdrawn is sent
to an account object and used as the argument to the withdraw service. The account
checks its balance for sufficient funds. If enough funds are available, the account makes
the withdrawal and updates its balance; otherwise, it returns an error, saying "insufficient
funds." If successful, the account records the withdrawal by creating a transaction object
containing date and time, posted balance, and transaction type and amount (see
Figure4.9).

Fig 4.9 An activity diagram for the account class withdraw method.

Account class CreateTransaction Method


The CreateTransaction method generates a record of each transaction performed
against it. The description is as follows. Each time a successful transaction is performed
against an account, the account object creates a transaction object to record it. Arguments
into this service include transaction type (withdrawal or deposit), the transaction amount,
and the balance after the transaction. The account creates a new transaction object and
sets its attributes to the desired information. Add this description to the
createTransaction's description field (see Figure 4.10).

Fig 4.10 An activity diagram for the Account class createTransaction method.
Checking Account Class withdraw method
This is the checking account-specific version of the withdrawal service. It takes
into consideration the possibility of withdrawing excess funds from a companion savings
account. The description is as follows. An amount to be withdrawn is sent to a checking
account and used as the argument to the withdrawal service. If the account has
insufficient funds to cover the amount but has a companion savings account, it tries to
withdraw the excess from there. If the companion account has insufficient funds, this
method returns the appropriate error message. If the companion account has enough
funds, the excess is withdrawn from there, and the checking account balance goes to zero
(0). If successful, the account records the withdrawal by creating a transaction object
containing the date and time, posted balance, and transaction type and amount.
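The generic withdraw method and its checking-specific override can be sketched together (a hedged sketch; the companion account is assumed to be held in a savings attribute, as added during attribute refinement, and transaction recording is omitted for brevity):

```python
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance

    def withdraw(self, amount):
        """Generic withdrawal: succeeds only if funds are available."""
        if amount > self._balance:
            return "insufficient funds"
        self._balance -= amount
        return self._balance

class SavingsAccount(Account):
    pass

class CheckingAccount(Account):
    def __init__(self, balance=0.0, savings=None):
        super().__init__(balance)
        self._savings = savings            # companion SavingsAccount, if any

    def withdraw(self, amount):
        if amount <= self._balance:
            return super().withdraw(amount)
        excess = amount - self._balance    # try to take the excess from savings
        if self._savings is None or self._savings.withdraw(excess) == "insufficient funds":
            return "insufficient funds"
        self._balance = 0.0                # excess withdrawn from savings; checking goes to 0
        return self._balance

savings = SavingsAccount(50.0)
checking = CheckingAccount(30.0, savings)
result = checking.withdraw(60.0)           # 30 from checking, 30 from the companion savings
```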

PACKAGES AND MANAGING CLASSES


A package groups and manages the modeling elements, such as classes, their
associations, and their structures. Packages themselves may be nested within other
packages.
A package may contain both other packages and ordinary model elements. The
entire system description can be thought of as a single high-level subsystem package with
everything else in it. All kinds of UML model elements and diagrams can be organized
into packages. For example, some packages may contain groups of classes and their
relationships, subsystems, or models. A package provides a hierarchy of different system
components and can reference other packages.
For example, the bank system can be viewed as a package that contains other
packages, such as Account package, Client package, and so on. Classes can be packaged
based on the services they provide or grouped into the business classes, access classes,
and view classes (see Figure 4.11). Furthermore, since packages own model elements and
model fragments, they can be used by CASE tools as the basic storage and access control.
In Chapter 5, we learned that a package is shown as a large rectangle with a small
rectangular tab. If the contents of the package are shown, then the name of the package
may be placed within the tab. A keyword string may be placed above the package name.
The keywords subsystem and model indicate that the package is a meta-model subsystem
or model. The visibility of a package element outside the package may be indicated by
preceding the name of the element by a visibility symbol (+ for public, - for private, # for
protected). If the element is in an inner package, its visibility as exported by the outer
package is obtained by combining the visibility of an element within the package with the
visibility of the package itself: The most restrictive visibility prevails.
Fig 4.11 More complete UML class diagram for the ViaNet bank ATM system.
Note that the method parameter list is not shown.

Relationships may be drawn between package symbols to show relationships


between at least some of the elements in the packages. In particular, dependency between
packages implies one or more dependencies among the elements.
ACCESS LAYER : OBJECT STORAGE & OBJECT INTEROPERABILITY

A DataBase Management System (DBMS) is a set of programs that enables the


creation and maintenance of a collection of related data. A DBMS and associated
programs access, manipulate, protect and manage the data.

The fundamental purpose of a DBMS is to provide a reliable, persistent data


storage facility and mechanisms for efficient, convenient data access and retrieval.

Persistence refers to the ability of some objects to outlive the programs that
created them.

Object lifetimes can be short for local objects (called transient objects) or long
for objects stored indefinitely in a database (called persistent objects).
Most object-oriented languages do not support serialization or object persistence,
which is the process of writing or reading an object to and from a persistent storage
medium, such as a disk file.
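In a language that does support serialization, persistence beyond the creating process can be sketched like this (Python's pickle module is used purely for illustration; the Client class and file path are invented):

```python
import os
import pickle
import tempfile

class Client:
    def __init__(self, name, card):
        self.name, self.card = name, card

path = os.path.join(tempfile.mkdtemp(), "client.bin")

# Serialize: write the object to a persistent storage medium (a disk file).
with open(path, "wb") as f:
    pickle.dump(Client("Pat Lee", "5550001"), f)

# Deserialize: a later run (or another program) can reconstruct the persistent object,
# so the object outlives the code that created it.
with open(path, "rb") as f:
    restored = pickle.load(f)
```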

OBJECT STORE AND PERSISTENCE: AN OVERVIEW

A program will create a large amount of data throughout its execution. Each item
of data will have a different lifetime.
Atkinson et al. describe six broad categories for the lifetime of data:
1. Transient results to the evaluation of expressions.
2. Variables involved in procedure activation (parameters and variables with a
localized scope).
3. Global variables and variables that are dynamically allocated.
4. Data that exist between the executions of a program.
5. Data that exist between the versions of a program.
6. Data that outlive a program.

 The first three categories are transient data, data that cease to exist
beyond the lifetime of the creating process.
 The other three are nontransient, or persistent, data.
 Programming languages provide excellent, integrated support for the first
three categories of transient data.
 The other three categories can be supported by a DBMS, or a file system.

The same issues also apply to objects; after all, objects have a lifetime, too. They
are created explicitly and can exist for a period of time (during the application session).
However, an object can persist beyond application session boundaries, during which the
object is stored in a file or a database. A file or a database can provide a longer life for
objects-longer than the duration of the process in which they were created. From a
language perspective, this characteristic is called persistence. Essential elements in
providing a persistent store are :
 Identification of persistent objects or reachability (object ID).
 Properties of objects and their interconnections. The store must be able to
coherently manage nonpointer and pointer data (i.e., interobject
references).
 Scale of the object store. The object store should provide a conceptually
infinite store.
 Stability. The system should be able to recover from unexpected failures
and return the system to a recent self-consistent state. This is similar to
the reliability requirements of a DBMS, object-oriented or not.

DATABASEMANAGEMENT SYSTEMS

Databases usually are large bodies of data seen as critical resources to a company.
A DBMS is a set of programs that enable the creation and maintenance of a collection of
related data.

DBMSs have a number of properties that distinguish them from the file-based
data management approach.

In traditional file processing, each application defines and implements the files it
requires. Using a database approach, a single repository of data is maintained, which can
be defined once and subsequently accessed by various users (see Figure ).

Fig.4.11: Database system vs file system.


A fundamental characteristic of the database approach is that the DBMS
contains not only the data but a complete definition of the data formats it manages.
This description is known as the schema, or meta-data, and contains a complete
definition of the data formats, such as the data structures, types, and constraints.

In traditional file processing applications, such meta-data usually are encapsulated


in the application programs themselves. In DBMS, the format of the metadata is
independent of any particular application data structure; therefore, it will provide
a generic storage management mechanism. Another advantage of the database
approach is program-data independence. By moving the meta-data into an external
DBMS, a layer of insulation is created between the applications and the stored data
structures. This allows any number of applications to access the data in a simplified and
uniform manner.

Database Views

*. The DBMS provides the database users with a conceptual representation that
is independent of the low-level details (physical view) of how the data are stored.
*. The database can provide an abstract data model that uses logical concepts
such as field, records, and tables and their interrelationships. Such a model is understood
more easily by the user than the low-level storage concepts.
*. This abstract data model also can facilitate multiple views of the same
underlying data.
*. Many applications will use the same shared information but each will be
interested in only a subset of the data.
*. The DBMS can provide multiple virtual views of the data that are tailored to
individual applications. This allows the convenience of a private data representation with
the advantage of globally managed information.

Database Models

A database model is a collection of logical constructs used to represent the data


structure and data relationships within the database.
Database models may be grouped into two categories: conceptual models and
implementation models.
The conceptual model focuses on the logical nature of that data presentation.
Therefore, the conceptual model is concerned with what is represented in the database.
The implementation model is concerned with how it is represented.

Hierarchical Model: The hierarchical model represents data as a single rooted tree.
Each node in the tree represents a data object and the connections represent a parent-
child relationship.
For example, a node might be a record containing information about Motor vehicle and
its child nodes could contain a record about Bus parts (see Figure 4.12).
Fig. 4.12 A hierarchical Model

Network Model: In a network database model, a record can have more than one
parent. For example, in Figure 4.13, an Order contains data from the Soup and Customer
nodes.

Fig 4.13: An order contains data from both customer and soup

Relational Model: The basic construct of this database model is the relation, which can be thought of as a
table. The columns of each table are attributes that define the data or value domain for
entries in that column. The rows of each table are tuples representing individual data
objects being stored. A relational table should have only one primary key.
A primary key is a combination of one or more attributes whose value
unambiguously locates each row in the table.
In Figure 4.14, Soup-ID, Cust-ID, and Order-ID are primary keys in the Soup, Customer, and
Order tables.
A foreign key is a primary key of one table that is embedded in another table to
link the tables. In Figure 4.14, Soup-ID and Cust-ID are foreign keys in the Order table.
Fig 4.14 : The fig depicts primary and foreign key in a relation database.
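The primary and foreign keys of the Soup, Customer, and Order tables can be sketched in SQL (run here through Python's built-in sqlite3 module; the column names follow the figure and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Soup     (Soup_ID  TEXT PRIMARY KEY, SoupName TEXT);
    CREATE TABLE Customer (Cust_ID  TEXT PRIMARY KEY, Name     TEXT);
    CREATE TABLE "Order"  (Order_ID TEXT PRIMARY KEY,
                           Soup_ID  TEXT REFERENCES Soup(Soup_ID),      -- foreign key
                           Cust_ID  TEXT REFERENCES Customer(Cust_ID)); -- foreign key
""")
conn.execute("INSERT INTO Soup VALUES (?, ?)", ("S1", "Tomato"))
conn.execute("INSERT INTO Customer VALUES (?, ?)", ("C1", "Pat Lee"))
conn.execute('INSERT INTO "Order" VALUES (?, ?, ?)', ("O1", "S1", "C1"))

# The foreign keys link the tables: join an Order back to its Soup and Customer.
row = conn.execute("""
    SELECT c.Name, s.SoupName FROM "Order" o
    JOIN Customer c ON o.Cust_ID = c.Cust_ID
    JOIN Soup s ON o.Soup_ID = s.Soup_ID""").fetchone()
```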

Database Interface

The interface on a database must include a data definition language (DDL), a


query, and data manipulation language (DML).
These languages must be designed to fully reflect the flexibility and constraints
inherent in the data model.
Database systems have adopted two approaches for interfaces with the system.
1. Structured query language (SQL) - This approach is a very popular way of
defining and designing a database and its schema, especially with the popularity
of languages such as SQL, which has become an industry standard for defining
databases. The problem with this approach is that application programmers
have to learn and use two different languages.
2. To extend the host programming language with database related constructs.
This is the major approach, since application programmers need to learn only a
new construct of the same language rather than a completely new language. Many of
the currently operational databases and object-oriented database systems have
adopted this approach; a good example is GemStone from Servio Logic, which has
extended the Smalltalk object-oriented programming.

Database Schema and Data Definition Language :


To represent information in a database, a mechanism must exist to describe or
specify to the database the entities of interest.
A data definition language (DDL) is the language used to describe the
structure of and relationships between objects stored in a database. This structure of
information is termed the database schema.
In traditional databases, the schema of a database is the collection of record
types and set types or the collection of relationships, templates, and table records
used to store information about entities of interest to the application.

For example, to create logical structure or schema, the following SQL command
can be used:
CREATE SCHEMA AUTHORIZATION (creator)
CREATE DATABASE (database name)

For example,
CREATE TABLE INVENTORY (Inventory_Number CHAR(10) NOT NULL,
DESCRIPTION CHAR(25) NOT NULL, PRICE DECIMAL(9, 2));

where the boldface words are SQL keywords.

Data Manipulation Language and Query Capabilities :

A data manipulation language (DML) is the language that allows users to


access and manipulate (such as create, save, or destroy) a data organization.
The structured query language (SQL) is the standard DML for relational DBMSs. SQL
is widely used for its query capabilities. The query usually specifies
*. The domain of the discourse over which to ask the query.
*. The elements of general interest.
*. The conditions or constraints that apply.
*. The ordering, sorting, or grouping of elements and the constraints that, apply to
the ordering or grouping.

Traditionally, DMLs are either procedural or nonprocedural. A procedural


DML requires users to specify what data are desired and how to get the data. A
nonprocedural DML, like most databases' fourth-generation programming
languages (4GLs), requires users to specify what data are needed but not how to get
the data.
Object-oriented query and data manipulation languages, such as Object SQL,
provide object management capabilities to the data manipulation language.

In a relational DBMS, the DML is independent of the host programming


language. A host language such as C or COBOL would be used to write the body of the
application. Typically, SQL statements then are embedded in C or COBOL applications
to manipulate data. Once SQL is used to request and retrieve database data, the results of
the SQL retrieval must be transformed into the data structures of the programming
language. A disadvantage of this approach is that programmers code in two languages,
SQL and the host language. Another is that the structural transformation is required in
both database access directions, to and from the database.
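The same embedding pattern, and the structural transformation of query results into host-language data structures, can be sketched in a modern host language (Python with its built-in sqlite3 module; the emp table and Employee structure are invented for illustration):

```python
import sqlite3
from collections import namedtuple

Employee = namedtuple("Employee", "name salary")   # host-language data structure

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, salary REAL)")        # embedded DDL
conn.execute("INSERT INTO emp VALUES (?, ?)", ("Pat", 50000.0))  # host data -> database

# Embedded SQL retrieval, then transformation of result rows back into
# host-language structures (the second direction of the transformation):
rows = conn.execute("SELECT name, salary FROM emp WHERE salary > ?", (10000.0,))
employees = [Employee(*row) for row in rows]
```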
LOGICAL AND PHYSICAL DATABASE ORGANIZATION AND ACCESS
CONTROL
Logical database organization refers to the conceptual view of database structure
and the relationships within the database. For example, object-oriented systems represent
databases composed of objects, and many allow multiple databases to share information
by defining the same object.
Physical database organization refers to how the logical components of the
database are represented in a physical form by operating system constructs (i.e., objects
may be represented as files).

Shareability and Transactions

Data and objects in the database often need to be accessed and shared by different
applications. With multiple applications having access to the object concurrently, it is
likely that conflicts over object access will arise. The database then must detect and
mediate these conflicts and promote the greatest amount of sharing possible without
sacrificing the integrity of data. This mediation process is managed through concurrency
control policies, implemented, in part, by transactions.

A transaction is a unit of change in which many individual modifications are


aggregated into a single modification that occurs in its entirety or not at all. Thus, either
all changes to objects within a given transaction are applied to the database or none of the
changes. A transaction is said to commit if all changes can be made successfully to the
database and to abort if canceled because all changes to the database cannot be made
successfully. This ability of transactions ensures atomicity of change that maintains the
database in a consistent state.
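The commit-or-abort behavior can be demonstrated concretely (a sketch using Python's built-in sqlite3; the transfer function and simulated failure are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (id TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO acct VALUES ('A', 100.0), ('B', 0.0)")
conn.commit()

def transfer(conn, amount, fail=False):
    """All-or-nothing unit of change: commit on success, abort on any error."""
    try:
        conn.execute("UPDATE acct SET balance = balance - ? WHERE id = 'A'", (amount,))
        if fail:
            raise RuntimeError("simulated failure mid-transaction")
        conn.execute("UPDATE acct SET balance = balance + ? WHERE id = 'B'", (amount,))
        conn.commit()        # commit: all changes are applied to the database
    except RuntimeError:
        conn.rollback()      # abort: none of the changes are applied

transfer(conn, 30.0, fail=True)    # aborted: balances unchanged, state stays consistent
transfer(conn, 30.0)               # committed: both updates applied as one unit
balances = dict(conn.execute("SELECT id, balance FROM acct"))
```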

Concurrency Policy

*. When several users (or applications) attempt to read and write the same object
simultaneously, they create contention for the object.

*. The concurrency control mechanism is established to mediate such conflicts by


making policies that dictate how they will be handled.

*. A basic goal of the transaction is to provide each user with a consistent view
of the database. This means that transactions must occur in serial order.

*. The most conservative way to enforce serialization is to allow a user to lock


all objects or records when they are accessed and to release the locks only
after a transaction commits. This approach, traditionally known as a
conservative or pessimistic policy, provides exclusive access to the object,
despite what is done to it.

*. Under an optimistic policy, two conflicting transactions are compared in their


entirety and then their serial ordering is determined. As long as the database is
able to serialize them so that all the objects viewed by each transaction are from a
consistent state of the database, both can continue even though they have read and
write locks on a shared object.

*. Thus, a process can be allowed to obtain a read lock on an object already write
locked if its entire transaction can be serialized as if it occurred either entirely
before or entirely after the conflicting transaction. The reverse also is true:

*. A process may be allowed to obtain a write lock on an object that has a read
lock if its entire transaction can be serialized as if it occurred after the conflicting
transaction. In such cases, the optimistic policy allows more processes to operate
concurrently than the conservative policy.
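A conservative (pessimistic) policy can be mimicked in miniature with an exclusive lock held for an entire unit of change (a deliberately simplified in-process sketch using Python threads; a real DBMS lock manager is far more sophisticated):

```python
import threading

balance = 0
balance_lock = threading.Lock()     # exclusive access to the shared object

def transaction(deposits):
    """One unit of change: the lock is held until the whole unit completes."""
    global balance
    with balance_lock:              # conservative policy: lock, work, then release
        for amount in deposits:
            balance += amount       # no other transaction can interleave here

threads = [threading.Thread(target=transaction, args=([1] * 1000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the exclusive lock, a serial order is enforced and no updates are lost.
```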

DISTRIBUTED DATABASES AND CLIENT-SERVER COMPUTING

Many modern databases are distributed databases, which imply that portions of
the database reside on different nodes (computers) and disk drives in the network.
Usually, each portion of the database is managed by a server, a process responsible for
controlling access and retrieval of data from the database portion.
The server dispenses information to client applications and makes queries or data
requests to these client applications or other servers.
Clients generally reside on nodes in the network other than those on which the
servers execute. However, both can reside on the same node, too.

What Is Client-Server Computing?

*. Client-Server computing is the logical extension of modular programming.


*. The fundamental assumption of modular programming is that separation of a
large piece of software into its constituent parts (“modules”) creates the possibility for
easier development and better maintainability.
*.Client-server computing extends this theory a step further by recognizing that
all those modules need not be executed within the same memory space or even on the
same machine. With this architecture, the calling module becomes the “client” (that
which requests a service) and the called module becomes the “server” (that which
provides the service).
*. Another important component of client-server computing is connectivity,
which allows applications to communicate transparently with other programs or
processes, regardless of their locations. The key element of connectivity is the Network
Operating System (NOS), also known as middleware. The NOS provides services such
as routing, distribution, messages, filing and printing, and network management.
*. The client is a process (program) that sends a message to a server process
(program) requesting that the server perform a task (service).
*. Client programs usually manage the user interface portion of the application,
validate data entered by the user, dispatch requests to server programs, and sometimes
execute business logic.
*. The business layer contains all the objects that represent the business (real
objects), such as Order, Customer, Lineitem, Inventory.
*. The client-based process is the front-end of the application, which the user sees
and interacts with.
*. The client process contains solution-specific logic and provides the interface
between the user and the rest of the application system. It also manages the local
resources with which the user interacts, such as the monitor, keyboard, workstation, CPU,
and peripherals.
*. A key component of a client workstation is the graphical user interface (GUI),
which normally is a part of the operating system (i.e., the windows manager). It is
responsible for detecting user actions, managing the windows on the display, and
displaying the data in the windows.
*. A server process (program) fulfills the client request by performing the task
requested.
*. Server programs generally receive requests from client programs, execute
database retrieval and updates, manage data integrity, and dispatch responses to client
requests.
*. Sometimes, server programs execute common or complex business logic. The
server-based process "may" run on another machine on the network. This server could be
the host operating system or network file server; the client then is provided with both file
system services and application services.
*. In some cases, another desktop machine provides the application services. The
server process acts as a software engine that manages shared resources such as databases,
printers, communication links, or high-powered processors. The server process performs
the back-end tasks that are common to similar applications.
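The client and server processes described above can be sketched as a minimal socket exchange. This is an illustrative toy, not part of the text: the service performed (upper-casing a string), the port handling, and the function names are all invented for the example.

```python
import socket
import threading

def run_server(host="127.0.0.1"):
    """Server process: waits for a client request and fulfills it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))                      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()    # receive the client's message
            reply = "RESULT:" + request.upper()   # perform the (toy) service
            conn.sendall(reply.encode())          # dispatch the response
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

def client_request(port, message):
    """Client process: sends a message requesting that the server perform a task."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message.encode())
        return sock.recv(1024).decode()

print(client_request(run_server(), "order-status"))   # prints RESULT:ORDER-STATUS
```

The same request/reply shape holds whether both processes share one machine, as here, or sit on opposite ends of a network.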

*. The server can take different forms. The simplest form of server is a file
server.
*. With a file server, the client passes requests for files or file records over a
network to the file server. This form of data service requires large bandwidth (the range
of data that can be sent over a given medium simultaneously) and can considerably slow
down a network with many users.
*. Traditional LAN computing allows users to share resources, such as data files
and peripheral devices.
*. More advanced forms of servers are database servers, transaction servers,
application servers, and more recently object servers.
*. With database servers, clients pass SQL requests as messages to the server and
the results of the query are returned over the network. Both the code that processes the SQL
request and the data reside on the server, allowing it to use its own processing power to
find the requested data. This is in contrast to the file server, which requires passing all the
records back to the client and then letting the client find its own data.
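The contrast between a file server (every record crosses the network and the client filters) and a database server (the SQL request is the message; only results return) can be sketched with sqlite3 standing in for the server. The table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50.0), (2, 500.0), (3, 1200.0)])

# File-server style: every record crosses the "network"; the client filters.
all_rows = conn.execute("SELECT id, amount FROM orders").fetchall()
large_client_side = [row for row in all_rows if row[1] > 100]

# Database-server style: the SQL request is the message; only results return.
large_server_side = conn.execute(
    "SELECT id, amount FROM orders WHERE amount > 100").fetchall()

print(large_server_side)   # prints [(2, 500.0), (3, 1200.0)]
```

Both approaches yield the same answer; the difference is how many rows travel between client and server.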
*. With transaction servers, clients invoke remote procedures that reside on
servers, which also contain an SQL database engine. The server has procedural
statements to execute a group of SQL statements (transactions), which either all succeed
or fail as a unit.
*. Applications based on transaction servers, handled by on-line transaction
processing (OLTP), tend to be mission-critical applications that require a 1-3
second response time and tight control over the security and the integrity of the database.
The communication overhead is low, since a single request-and-reply exchange carries the
whole transaction (as opposed to the multiple SQL statement exchanges of database servers).
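The all-or-nothing behavior of a transaction can be sketched with sqlite3's commit/rollback. The `transfer` function and account table are hypothetical, invented purely to show a group of SQL statements succeeding or failing as a unit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 20.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """The group of statements either all succeed or all fail as a unit."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()                       # the whole unit succeeds...
    except Exception:
        conn.rollback()                     # ...or the whole unit is undone
        raise

transfer(conn, "alice", "bob", 30.0)        # succeeds: both updates commit
```

A failed transfer (e.g., overdrawing an account) rolls back the first UPDATE as well, leaving the database in its prior consistent state.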
*. Application servers are not necessarily database centered but are used to serve
user needs, such as downloading capabilities from Dow Jones or regulating an electronic
mail process. Basing resources on a server allows users to share data, while security and
management services, also based on the server, ensure data integrity and security.
*. The logical extension of this is to have clients and servers running on the
appropriate hardware and software platforms for their functions. For example, database
management system servers should run on platforms with special capabilities for
managing files.
*. In a two-tier architecture, a client talks directly to a server, with no
intervening server. This type of architecture typically is used in small environments with
fewer than 50 users. A common error in client-server development is to prepare a prototype
of an application in a small two-tier environment and then scale up by simply adding more
users to the server. This approach usually will result in an ineffective system, as the
server becomes overwhelmed. To properly scale up to hundreds or thousands of users, it
usually is necessary to move to a three-tier architecture.
*. A three-tier architecture introduces a server (application or Web server)
between the client and the server. The role of the application or Web server is manifold.
It can provide translation services (as in adapting a legacy application on a mainframe to
a client-server environment), metering services (as in acting as a transaction monitor to
limit the number of simultaneous requests to a given server), or intelligent agent services
(as in mapping a request to a number of different servers, collating the results, and
returning a single response to the client).

Ravi Kalakota describes the basic characteristics of client-server architectures as follows:
1. A combination of a client or front-end portion that interacts with the user and a
server or back-end portion that interacts with the shared resource. The client
process contains solution-specific logic and provides the interface between the
user and the rest of the application system. The server process acts as a software
engine that manages shared resources such as databases, printers, modems, or
high-powered processors.
2. The front-end task and back-end task have fundamentally different
requirements for computing resources such as processor speeds, memory, disk
speeds and capacities, and input/output devices.
3. The environment is typically heterogeneous and multivendor. The hardware
platform and operating system of client and server are not usually the same. Client
and server processes communicate through a well-defined set of standard
application program interfaces (APIs).
4. An important characteristic of client-server systems is scalability. They can be
scaled horizontally or vertically. Horizontal scaling means adding or removing
client workstations with only a slight performance impact. Vertical scaling means
migrating to a larger and faster server machine or multi-servers.

Client-server and distributed computing have arisen because of a change in business
needs. Unfortunately, most businesses have existing systems, based on older technology,
that must be incorporated into the new, integrated environment; that is, mainframes with
a great deal of legacy (older application) software.

A typical client-server application consists of the following components:

1. User interface. This major component of the client-server application handles
interaction with users: screens, windows, window management, keyboard, and mouse handling.
2. Business processing. This part of the application uses the user interface data to
perform business tasks. In this book, we look at how to develop this component by
utilizing an object-oriented technology.
3. Database processing. This part of the application code manipulates data within the
application. The data are managed by a database management system, object-oriented or
not, and manipulated through a data manipulation language, such as SQL or a dialect of
SQL (perhaps an object-oriented query language). Ideally, the DBMS processing is
transparent to the business processing layer of the application.

The development and implementation of client-server computing is more
complex, more difficult, and more expensive than traditional, single-process applications.
However, utilizing an object-oriented methodology, we can manage the complexity of
client-server applications.

DISTRIBUTED AND COOPERATIVE PROCESSING

Distributed processing means the distribution of applications and business
logic across multiple processing platforms.
*. Distributed processing implies that processing will occur on more than one
processor in order for a transaction to be completed.
*. In other words, processing is distributed across two or more machines, where
each process performs part of an application in a sequence. These processes may not run
at the same time. For example, in processing an order from a client, the client information
may be processed on one machine and the account information then may be processed on
a different machine.
*. Often, the objects used in a distributed processing environment also are
distributed across platforms.
*. Cooperative processing is computing that requires two or more distinct
processors to complete a single transaction.
*. Cooperative processing is related to both distributed and client-server
processing. Cooperative processing is a form of distributed computing in which two or
more distinct processes are required to complete a single business transaction.
*. Usually, these programs interact and execute concurrently on different
processors.
*. Cooperative processing also can be considered to be a style of distributed
processing, if communication between processors is performed through a message-
passing architecture.

DISTRIBUTED OBJECTS COMPUTING: THE NEXT GENERATION OF CLIENT-SERVER COMPUTING

Software technology is in the midst of a major computational shift toward
distributed object computing (DOC). Distributed computing is poised for a second
generation client-server era. In this new client-server model, servers are plentiful
instead of scarce (because every client can be a server) and proximity no longer
matters.
In the first generation client-server era, which still is very much in progress,
SQL databases, transaction processing (TP) monitors, and groupware have begun to
displace file servers as client-server application models.
In the new client-server era, distributed object technology is expected to
dominate other client-server application models.
Distributed object computing promises the most flexible client-server systems,
because it utilizes reusable software components that can roam anywhere on networks,
run on different platforms, communicate with legacy applications by means of object
wrappers, and manage themselves and the resources they control. Objects can help break
monolithic applications into more manageable components that coexist on the expanded
bus.
Distributed objects are reusable software components that can be distributed
and accessed by users across the network. These objects can be assembled into
distributed applications. Distributed object computing introduces a higher level of
abstraction into the world of distributed applications. Applications no longer need to
know which server process performs a given function. All information about the function
is hidden inside the encapsulated object. A message requesting an operation is sent to the
object, and the appropriate method is invoked.
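The idea that "a message requesting an operation is sent to the object, and the appropriate method is invoked" can be sketched with a client-side stub and a broker function standing in for the object bus. All class and function names here are invented for illustration; a real system would use an ORB or similar middleware.

```python
class RemoteStub:
    """Client-side proxy: the caller need not know where the object lives."""
    def __init__(self, dispatch):
        self._dispatch = dispatch            # stands in for the object bus/network

    def __getattr__(self, name):
        def send_message(*args):
            # A message requesting the operation is sent; the bus routes it.
            return self._dispatch(name, args)
        return send_message

class InventoryServant:
    """Server-side object: the appropriate method is invoked on arrival."""
    def __init__(self):
        self._stock = {"widget": 7}

    def on_hand(self, item):
        return self._stock.get(item, 0)

servant = InventoryServant()

def broker(op, args):
    # The "bus" locates the servant and invokes the requested method.
    return getattr(servant, op)(*args)

inventory = RemoteStub(broker)
print(inventory.on_hand("widget"))   # prints 7
```

The caller writes `inventory.on_hand(...)` exactly as if the object were local; where the servant actually runs is hidden behind the dispatch function.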
Distributed object computing will be the key part of tomorrow’s information
systems. DOC resulted from the need to integrate mission-critical applications and
data residing on systems that are geographically remote, sometimes from users and
often from each other, and running on many different hardware platforms.
Furthermore, the information systems must link applications developed in different
languages, use data from object and relational databases and from mainframe systems,
and be optimized for use across the Internet and through departmental intranets.
Historically, businesses have had to integrate applications and data by writing custom
interfaces between systems, forcing developers to spend their time building and
maintaining an infrastructure rather than adding new business functionality.
Distributed object technology has been tied to standards from the early stage.
Since 1989, the Object Management Group (OMG), with over 500 member
companies, has been specifying the architecture for an open software bus on which object
components written by different vendors can operate across networks and operating
systems. The OMG and the object bus are well on their way to becoming the universal
client-server middleware.
Currently, there are several competing DOC standards, including the Object
Management Group's CORBA, OpenDoc, and Microsoft's
ActiveX/DCOM. Although DOC technology offers unprecedented computing power,
few organizations have been able to harness it as yet. The main reasons commonly cited
for slow adoption of DOC include closed legacy architecture, incompatible protocols,
inadequate network bandwidths, and security issues. In the next subsections, we look at
Microsoft’s DCOM and OMG’s CORBA.

Common Object Request Broker Architecture

Many organizations are now adopting the Object Management Group's Common
Object Request Broker Architecture (CORBA), a standard proposed as a means to
integrate distributed, heterogeneous business applications and data.

The CORBA interface definition language (IDL) allows developers to specify
language-neutral, object-oriented interfaces for application and system
components.

IDL Definitions are stored in an interface repository, a sort of phone book that
offers object interfaces and services. For distributed enterprise computing, the
interface repository is central to communication among objects located on
different systems.

CORBA object request brokers (ORBs) implement a communication channel
through which applications can access object interfaces and request data and services. The
CORBA common object environment (COE) provides system-level services such as life
cycle management for objects accessed through CORBA, event notification between
objects, and transaction and concurrency control.

Microsoft’s ActiveX/DCOM
Microsoft’s component object model (COM) and its successor the distributed
component object model (DCOM) are Microsoft’s alternatives to OMG’s distributed
object architecture CORBA.
Microsoft and the OMG are competitors, and few can say for sure which
technology will win the challenge.
Although CORBA benefits from wide industry support, DCOM is supported
mostly by one enterprise, Microsoft.
However, Microsoft is no small business concern and holds firmly a huge part of
the microcomputer population, so DCOM has appeared as a very serious competitor to
CORBA. DCOM was bundled with Windows NT 4.0, and there is a good chance to see
DCOM in all forthcoming Microsoft products.
The distributed component object model, Microsoft's alternative to OMG's
CORBA, is an Internet and component strategy where ActiveX (formerly known as
object linking and embedding, or OLE) plays the role of a DCOM object. DCOM also is
backed by a very efficient Web browser, the Microsoft Internet Explorer.

OBJECT-ORIENTED DATABASE MANAGEMENT SYSTEMS: THE PURE WORLD
The object-oriented database management system is a marriage of object-oriented
programming and database technology (see Figure 4.17) to provide what we now
call object-oriented databases.
Additionally, object-oriented databases allow all the benefits of an object
orientation as well as the ability to have a strong equivalence with object-oriented
programs, an equivalence that would be lost if an alternative were chosen, as with a
purely relational database.
By combining object-oriented programming with database technology, we have
an integrated application development system, a significant characteristic of object-
oriented database technology.
Many advantages accrue from including the definition of operations with the
definition of data.
1. The defined operations apply universally and are not dependent on the
particular database application running at the moment.
2. The data types can be extended to support complex data such as multimedia
by defining new object classes that have operations to support the new kinds of
information.

Fig 4.17 The object-oriented database management system is a marriage of
object-oriented programming and database technology.

The "Object-Oriented Database System Manifesto" by Malcolm Atkinson et al.
described the necessary characteristics that a system must satisfy to be considered an
object-oriented database. These characteristics can be broadly divided into object-oriented
language properties and database requirements.

First, the rules that make it an object-oriented system are as follows:

1. The system must support complex objects. A system must provide simple atomic types
of objects (integers, characters, etc.) from which complex objects can be built by applying
constructors to atomic objects or other complex objects or both.
2. Object identity must be supported. A data object must have an identity and
existence independent of its values.
3. Objects must be encapsulated. An object must encapsulate both a program and its
data. Encapsulation embodies the separation of interface and implementation and the
need for modularity.
4. The system must support types or classes. The system must support either the type
concept (embodied by C++) or the class concept (embodied by Smalltalk).
5. The system must support inheritance. Classes and types can participate in a class
hierarchy. The primary advantage of inheritance is that it factors out shared code and
interfaces.
6. The system must avoid premature binding. This feature also is known as late binding
or dynamic binding. Since classes and types support encapsulation and inheritance, the
system must resolve conflicts in operation names at run time.
7. The system must be computationally complete. Any computable function should be
expressible in the data manipulation language (DML) of the system, thereby allowing
expression of any type of operation.
8. The system must be extensible. The user of the system should be able to create new
types that have equal status to the system's predefined types. These requirements are met
by most modern object-oriented programming languages, such as Smalltalk and C++.
Also, clearly, these requirements are not met directly (more on this in the next section) by
traditional relational, hierarchical, or network database systems.

Second, these rules make it a DBMS:
9. It must be persistent, able to remember an object state. The system must allow the
programmer to have data survive beyond the execution of the creating process for it to be
reused in another process.
10. It must be able to manage very large databases. The system must efficiently manage
access to the secondary storage and provide performance features, such as indexing,
clustering, buffering, and query optimization.
11. It must accept concurrent users. The system must allow multiple concurrent users
and support the notions of atomic, serializable transactions.
12. It must be able to recover from hardware and software failures. The system must be
able to recover from software and hardware failures and return to a coherent state.
13. Data query must be simple. The system must provide some high-level mechanism for
ad-hoc browsing of the contents of the database. A graphical browser might fulfill this
requirement sufficiently.

These database requirements are met by the majority of existing database systems.
From these two sets of definitions, it can be argued that an OODBMS is a DBMS with an
underlying object-oriented model.
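Rule 9 (persistence) can be sketched in Python with the standard shelve module: object state written by one "process" survives on disk to be reused by another. The Customer class and key scheme are invented for the example; a real OODBMS would add concurrency, recovery, and querying on top of bare persistence.

```python
import os
import shelve
import tempfile

class Customer:
    """A plain object whose state should survive the creating process."""
    def __init__(self, name, balance):
        self.name, self.balance = name, balance

path = os.path.join(tempfile.mkdtemp(), "store")

# First "process": create an object and persist its state.
with shelve.open(path) as db:
    db["cust:1"] = Customer("Ann", 42.0)

# Second "process": the stored state is remembered and reusable.
with shelve.open(path) as db:
    revived = db["cust:1"]

print(revived.name, revived.balance)   # prints Ann 42.0
```

This illustrates only rule 9; rules 10-13 (large databases, concurrency, recovery, ad-hoc query) are exactly what separates a full OODBMS from such a simple object store.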
4.17.1 Object-Oriented Databases versus Traditional Databases

*. The scope of the responsibility of an OODBMS includes definition of the
object structures, object manipulation, and recovery, which is the ability to maintain data
integrity regardless of system, network, or media failure.
*. Furthermore, OODBMSs, like DBMSs, must allow for sharing; secure,
concurrent multiuser access; and efficient, reliable system performance.

Traditional/relational database versus object-oriented database:

1. In a traditional/relational database, records play a passive role. In an object-oriented
database, behavior is derived from the object's ability to interact with other objects and
itself; the objects are an "active" component.
2. Relational database systems do not explicitly provide inheritance of attributes and
methods. Object-oriented databases represent relationships explicitly and support both
navigational and associative access to information, so data access performance is improved.
3. The relational approach is purely value-oriented. Object-oriented databases allow
representation and storage of data in the form of objects; each object has its own identity,
or object-ID, and the object identity is independent of the state of the object.

All these advantages point to the application of object-oriented databases to information
management problems that are characterized by the need to manage:

*. A large number of different data types.
*. A large number of relationships between the objects.
*. Objects with complex behaviors.

OBJECT-RELATIONAL SYSTEMS: THE PRACTICAL WORLD

While many applications increasingly are developed with object-oriented programming
technology, chances are good that the data those applications need to access live in a very
different universe: a relational database. In such an environment, the introduction of
object-oriented development creates a fundamental mismatch between the programming
model (objects) and the way in which existing data are stored.
*. To resolve the mismatch, a mapping tool between the application objects and
the relational data must be established.
*. Creating an object model from the existing relational database layout (schema)
is referred to as reverse engineering.
*. Creating a relational schema from an existing object model is referred to as
forward engineering.
*. In practice, over the life cycle of an application, forward and reverse
engineering need to be combined in an iterative process to maintain the relationship
between the object and relational data representations.
*. Tools that can be used to establish the object-relational mapping processes
have begun to emerge. The main process in relational and object integration is defining
the relationships between the table structures in the relational database with classes in the
object model.

OBJECT-RELATIONAL MAPPING

*. In a relational database, the schema is made up of tables, consisting of rows and
columns, where each column has a name and a simple data type.
*. In an object model, the counterpart to a table is a class (or classes), which has a
set of attributes (properties or data members). Object classes describe behavior with
methods. A tuple (row) of a table contains data for a single entity that correlates to an
object (instance of a class) in an object-oriented system.
*. In addition, a stored procedure in a relational database may correlate to a
method in an object-oriented architecture. A stored procedure is a module of
precompiled SQL code maintained within the database that executes on the server to
enforce rules the business has set about the data.
*. Therefore, the mappings essential to object and relational integration are
between a table and a class, between columns and attributes, between a row and an
object, and between a stored procedure and a method.
For a tool to be able to define how relational data map to and from application
objects, it must have at least the following mapping capabilities (note that all these are
two-way mappings, meaning they map from the relational system to the object and from the
object back to the relational system):
1. Table-class mapping.
2. Table-multiple classes mapping.
3. Table-inherited classes mapping.
4. Tables-inherited classes mapping.
Furthermore, in addition to mapping column values, the tool must be capable of
interpretation of relational foreign keys. The tool must describe both how the foreign key
can be used to navigate among classes and instances in the mapped object model and how
referential integrity is maintained. Referential integrity means making sure that a
dependent table's foreign key contains a value that refers to an existing valid tuple in
another relation.

TABLE-CLASS MAPPING

Table-class mapping is a simple one-to-one mapping of a table to a class and the
mapping of columns in a table to properties in a class. In this mapping, a single table is
mapped to a single class, as shown in Figure 4.18.
In such a mapping, it is common to map all the columns to properties. However, this is not
required, and it may be more efficient to map only those columns for which an object
model is required by the application(s).

Fig 4.19 Table-multiple classes mapping. The custID column provides the discriminant.
If the value for custID is null, an Employee instance is created at run time; otherwise, a
Customer instance is created.

In this approach, each row in the table represents an object instance and each column in
the table corresponds to an object attribute. This one-to-one mapping of the table-class
approach provides a literal translation between a relational data representation and an
application object. It is appealing in its simplicity but offers little flexibility.
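A minimal sketch of table-class mapping, using sqlite3 and an invented Customer table: each row becomes an instance of the class, and each column becomes an attribute.

```python
import sqlite3

class Customer:
    """The class mapped one-to-one to the Customer table."""
    def __init__(self, cust_id, name):
        self.cust_id, self.name = cust_id, name    # columns become attributes

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (custID INTEGER, name TEXT)")
conn.execute("INSERT INTO Customer VALUES (1, 'Ann')")

def load_customers(conn):
    # Each row of the table becomes one object instance of the class.
    return [Customer(cust_id, name)
            for cust_id, name in conn.execute("SELECT custID, name FROM Customer")]

customers = load_customers(conn)
print(customers[0].name)   # prints Ann
```

As the text notes, a production mapping might deliberately omit columns the application's object model does not need.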

TABLE-MULTIPLE CLASSES MAPPING

In the table-multiple classes mapping, a single table maps to multiple
noninheriting classes. Two or more distinct, noninheriting classes have properties that are
mapped to columns in a single table. At run time, a mapped table row is accessed as an
instance of one of the classes, based on a column value in the table.
In Figure 4.20, the custID column provides the discriminant. If the value for
custID is null, an Employee instance is created at run time; otherwise, a Customer
instance is created.

Fig 4.20: Table-multiple classes mapping
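The discriminant logic of the figure can be sketched as follows; the table layout and class names are simplified stand-ins for those in the figure.

```python
import sqlite3

class Employee:
    def __init__(self, name):
        self.name = name

class Customer:
    def __init__(self, name, cust_id):
        self.name, self.cust_id = name, cust_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (name TEXT, custID INTEGER)")
conn.executemany("INSERT INTO Person VALUES (?, ?)",
                 [("Ann", 17), ("Bob", None)])

def load(conn):
    """custID is the discriminant: null -> Employee, otherwise -> Customer."""
    instances = []
    for name, cust_id in conn.execute(
            "SELECT name, custID FROM Person ORDER BY rowid"):
        instances.append(Employee(name) if cust_id is None
                         else Customer(name, cust_id))
    return instances

objs = load(conn)
```

One table, two distinct (noninheriting) classes; the column value alone decides which class a row is materialized as.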

Table-Inherited Classes Mapping

In table-inherited classes mapping, a single table maps to many classes that have
a common superclass. This mapping allows the user to specify the columns to be shared
among the related classes. The superclass may be either abstract or instantiated.
In Figure 4.21, instances of SalariedEmployee can be created for any row in the
Person table that has a non-null value for the Salary column. If Salary is null, the row is
represented by an HourlyEmployee instance.

Fig 4.21: Table-inherited classes mapping


Tables-Inherited Classes Mapping
Another approach here is tables-inherited classes mapping, which allows the
translation of is-a relationships that exist among tables in the relational schema into
class inheritance relationships in the object model.
In a relational database, an is-a relationship often is modeled by a primary key
that acts as a foreign key to another table. In the object model, is-a is another term for an
inheritance relationship.
By using the inheritance relationship in the object model, the mapping can
express a richer and clearer definition of the relationships than is possible in the relational
schema.

Fig 4.22 Tables-inherited classes mapping. Instances of Person are mapped directly from
the Person table. However, instances of Employee can be created only for the rows in the
Employee table (the joining of the Employee and Person tables on the ssn key). The ssn is
used both as a primary key on the Person table for activating instances of Person and as a
foreign key on the Person table and a primary key on the Employee table for activating
instances of type Employee.

Figure 4.22 shows an example that maps a Person table to class Person and then
maps a related Employee table to class Employee, which is a subclass of class Person. In
this example, instances of Person are mapped directly from the Person table. However,
instances of Employee can be created only for the rows in the Employee table (the
joining of the Employee and Person tables on the SSN key). Furthermore, SSN is used
both as a primary key on the Person table for activating instances of Person and as a
foreign key on the Person table and a primary key on the Employee table for activating
instances of type Employee.
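The mapping of Figure 4.22 can be sketched with a join on the ssn key: the is-a relationship between the tables becomes class inheritance, and Employee instances are created only for rows present in both tables. The column set is simplified relative to the figure.

```python
import sqlite3

class Person:
    def __init__(self, ssn, name):
        self.ssn, self.name = ssn, name

class Employee(Person):                 # the is-a relationship becomes inheritance
    def __init__(self, ssn, name, salary):
        super().__init__(ssn, name)
        self.salary = salary

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (ssn TEXT PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE Employee (ssn TEXT PRIMARY KEY, salary REAL)")
conn.executemany("INSERT INTO Person VALUES (?, ?)",
                 [("111", "Ann"), ("222", "Bob")])
conn.execute("INSERT INTO Employee VALUES ('222', 900.0)")

# Employee instances can be created only for rows in the join on the ssn key.
employees = [Employee(ssn, name, salary)
             for ssn, name, salary in conn.execute(
                 "SELECT p.ssn, p.name, e.salary "
                 "FROM Person p JOIN Employee e ON p.ssn = e.ssn")]
```

Ann exists only as a Person; Bob, present in both tables, is activated as an Employee, which is also a Person by inheritance.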
Keys for Instance Navigation

In mapping columns to properties, the simplest approach is to translate a column's
value into the corresponding class property value. There are two interpretations of this
mapping: Either the column is a data value or it defines a navigable relationship between
instances (i.e., a foreign key). The mapping also should specify how to convert each data
value into a property value on an instance.
In addition to simple data conversion, mapping of column values defines the
interpretation of relational foreign keys. The mapping describes both how the foreign key
can be used to navigate among classes and instances in the mapped object model and how
referential integrity is maintained. A foreign key defines a relationship between tables in
a relational database.
In an object model, this association is where objects can have references to other
objects that enable instance to instance navigation.
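Resolving a foreign key into an object reference, so that instance-to-instance navigation replaces the join, can be sketched like this; the tables and class names are invented for the example.

```python
import sqlite3

class Customer:
    def __init__(self, cust_id, name):
        self.cust_id, self.name = cust_id, name

class Order:
    def __init__(self, order_id, customer):
        self.order_id = order_id
        self.customer = customer        # the foreign key becomes an object reference

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (custID INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE Orders (orderID INTEGER, custID INTEGER)")
conn.execute("INSERT INTO Customer VALUES (1, 'Ann')")
conn.execute("INSERT INTO Orders VALUES (10, 1)")

# Load customers first, then resolve each order's foreign key to an instance.
customers = {cid: Customer(cid, name)
             for cid, name in conn.execute("SELECT custID, name FROM Customer")}
orders = [Order(oid, customers[cid])
          for oid, cid in conn.execute("SELECT orderID, custID FROM Orders")]

print(orders[0].customer.name)   # prints Ann
```

In the relational world, reaching the customer's name from an order requires a join; in the object model, it is a direct attribute traversal.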

MULTIDATABASE SYSTEMS

• A different approach for integrating object-oriented applications with relational
data environments is multidatabase systems, or heterogeneous database systems,
which facilitate the integration of heterogeneous databases and other information
sources.
• Heterogeneous information systems facilitate the integration of heterogeneous
information sources, which can be structured (having a regular schema),
semistructured, and sometimes even unstructured. Some heterogeneous information
systems are constructed on a global schema over several databases, so users can
have the benefits of a database with a schema to access data stored in different
databases and cross-database functionality. Such heterogeneous information
systems are referred to as federated multidatabase systems.
Federated Multidatabase Systems
Federated multidatabase systems provide uniform access to data stored in multiple
databases that involve several different data models. A multidatabase system (MDBS) is
a database system that resides unobtrusively on top of existing relational and object
databases and file systems (local database systems) and presents a single database illusion
to its users. The MDBS maintains a single global database schema, and local database
systems maintain all user data. The schematic differences among local databases are
handled by neutralization (homogenization), the process of consolidating the local
schemata.

Multidatabase Systems (MDBS)
• The MDBS translates global queries and updates for dispatch to the
appropriate local database system for actual processing, merges the results from
them, and generates the final result for the user. The MDBS coordinates the
committing and aborting of global transactions by the local database systems that
processed them, to maintain the consistency of the data within the local databases.
An MDBS controls multiple gateways (or drivers); it manages local databases
through gateways, one gateway for each local database.
To summarize, the distinctive characteristics of an MDBS are:
*. Automatic generation of a unified global database schema from local databases, in
addition to schema capturing and mapping for local databases.
*. Provision of cross-database functionality by using unified schemata
*. Integration of heterogeneous database systems with multiple databases.
*. Integration of data types other than relational data through the use of such tools as
driver generators.
*. Provision of a uniform but diverse set of interfaces to access and manipulate data
stored in local databases.
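The single-database illusion can be sketched with a small facade that dispatches a query to several local databases (two in-memory sqlite3 databases here) and merges the results. The class, schema, and "gateway" arrangement are invented; a real MDBS would also handle schema neutralization and global transactions.

```python
import sqlite3

# Two "local database systems", each holding part of the user data.
db_east = sqlite3.connect(":memory:")
db_west = sqlite3.connect(":memory:")
for db, rows in ((db_east, [("Ann", 100)]), (db_west, [("Bob", 250)])):
    db.execute("CREATE TABLE sales (rep TEXT, amount INTEGER)")
    db.executemany("INSERT INTO sales VALUES (?, ?)", rows)

class MultiDatabase:
    """Presents a single-database illusion over several local databases."""
    def __init__(self, local_dbs):
        self._local_dbs = local_dbs       # one "gateway" per local database

    def query(self, sql, params=()):
        merged = []
        for db in self._local_dbs:        # dispatch to each local system...
            merged.extend(db.execute(sql, params).fetchall())
        return merged                     # ...and merge the results

mdbs = MultiDatabase([db_east, db_west])
rows = mdbs.query("SELECT rep, amount FROM sales WHERE amount > ?", (50,))
```

The caller issues one query against what appears to be one database; the facade handles dispatch and merging behind the scenes.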

OPEN DATABASE CONNECTIVITY: MULTIDATABASE APPLICATION PROGRAMMING INTERFACES

• Open Database Connectivity (ODBC) is an application programming interface that
provides solutions to the multidatabase programming problem. It provides a
vendor-neutral mechanism for independently accessing multiple database hosts.
• ODBC and other APIs provide standard database access through a common
client-side interface. This avoids the burden of learning multiple database APIs. Here,
one can store data for various applications or data from different sources in any
database and transparently access or combine the data on an as-needed basis.
Details of the back-end data structure are hidden from the user.

ODBC is similar to the Windows print model, where the application developer writes to
a generic printer interface and a loadable driver maps that logic to hardware-specific
commands. This approach virtualizes the target printer or DBMS, because
the person with the specialized knowledge to make the application logic work with the
printer or database is the driver developer, not the application programmer. The
application interacts with the ODBC driver manager, which sends the application
calls (such as SQL statements) to the database. The driver manager loads and unloads
drivers, performs status checks, and manages multiple connections between applications
and data sources.

Refer Text Book –page no. 262 – 263

Designing Access Layer Classes

The main idea behind creating an access layer is to create a set of classes that
know how to communicate with the place(s) where the data actually reside. Regardless of
where the data reside whether it be a file, relational database, mainframe, Internet,
DCOM or via ORB, the access classes must be able to translate any data-related requests
from the business layer into the appropriate protocol for data access. These classes also
must be able to translate the data retrieved back into the appropriate business objects. The
access layer‘s main responsibility is to provide a link between business or view objects
and data storage. Three-layer architecture is similar to three-tier architecture: the
view layer corresponds to the client tier, the business layer to the application server
tier and the access layer to the database tier. The access layer performs two major
tasks:
 Translate the request: The access layer must be able to translate any data related
requests from the business layer into the appropriate protocol for data access.

 Translate the results: The access layer also must be able to translate the data
retrieved back into the appropriate business objects and pass those objects back
into the business layer.

 This design is not tied to any database engine or distributed object technology
such as CORBA or DCOM. We can switch easily from one database to another with
no major changes to the user interface or business layer objects. All we need to
change are the access classes' methods.
• Unlike object oriented DBMS systems, the persistent object stores do not
support query or interactive user interface facilities.
• Controlling concurrent access by users, providing ad-hoc query capability and
allowing independent control over the physical location of data are not possible with
persistent objects.
• The access layer (AL), which is a key part of every n-tier system, consists mainly
of a simple set of code that performs basic interactions with the database or any other
storage device. These functionalities are often referred to as CRUD (Create, Retrieve,
Update, and Delete).
• The data access layer needs to be as generic, simple, quick and efficient as
possible. It should not include complex application or business logic.
• Systems are sometimes built with lengthy, complex stored procedures (SPs) that
run through several cases before doing a simple retrieval. They contain not only most
of the business logic, but application logic and user interface logic as well. If an SP is
getting long and complicated, that is a good indication that you are burying your
business logic inside the data access layer.
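As a sketch of such a thin, CRUD-only access layer, here is a hypothetical `CustomerAccess` class (the class name and schema are invented for illustration), backed by an in-memory SQLite database. It translates requests into SQL and translates result rows back into plain business objects, and holds no business logic of its own:

```python
import sqlite3

class CustomerAccess:
    """Hypothetical access-layer class: knows only how to store and
    retrieve customer data (CRUD); no business or UI logic lives here."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)")

    def create(self, name):                          # C: translate request to SQL
        cur = self.conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
        return cur.lastrowid

    def retrieve(self, cid):                         # R: translate row back to an object
        row = self.conn.execute(
            "SELECT name FROM customer WHERE id = ?", (cid,)).fetchone()
        return {"id": cid, "name": row[0]} if row else None

    def update(self, cid, name):                     # U
        self.conn.execute("UPDATE customer SET name = ? WHERE id = ?", (name, cid))

    def delete(self, cid):                           # D
        self.conn.execute("DELETE FROM customer WHERE id = ?", (cid,))

dao = CustomerAccess(sqlite3.connect(":memory:"))
cid = dao.create("Ada")
dao.update(cid, "Ada Lovelace")
print(dao.retrieve(cid))   # {'id': 1, 'name': 'Ada Lovelace'}
```

Because each method is a single, direct translation between a request and the storage protocol, swapping SQLite for another store would change only this class, leaving the business and view layers untouched.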

Refer Text Book - page no – 264 -268.


VIEW LAYER : DESIGNING INTERFACE OBJECTS

DESIGNING VIEW LAYER CLASSES

The view layer objects are responsible for two major aspects of the applications:
1. Input-responding to user interaction. The user interface must be designed to translate
an action by the user, such as clicking on a button or selecting from a menu, into an
appropriate response. That response may be to open or close another interface or to send
a message down into the business layer to start some business process. Remember, the
business logic does not exist here, just the knowledge of which message to send to which
business object.
2. Output-displaying or printing business objects. This layer must paint the best picture
possible of the business objects for the user. In one interface, this may mean entry fields
and list boxes to display an order and its items. In another, it may be a graph of the total
price of a customer's orders.
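These two responsibilities can be sketched with a pair of hypothetical classes (`OrderView` and `OrderBusiness` are invented for illustration): the view object knows only which message to send and how to display the result, while the business rule itself lives in the business layer:

```python
class OrderBusiness:
    """Stand-in business-layer object; the business rule lives here."""
    def place_order(self, item):
        return f"order placed for {item}"

class OrderView:
    """View-layer object: translates user actions into messages and
    paints the result; contains no business logic."""
    def __init__(self, business):
        self.business = business
        self.last_display = None

    def on_submit_clicked(self, item):
        # Input: respond to a user interaction by sending the right
        # message down into the business layer.
        result = self.business.place_order(item)
        self.display(result)

    def display(self, text):
        # Output: paint the business object's result for the user
        # (a stored string here stands in for a real widget).
        self.last_display = f"[screen] {text}"

view = OrderView(OrderBusiness())
view.on_submit_clicked("notebook")
print(view.last_display)   # [screen] order placed for notebook
```

Note that `OrderView` would not know or care how `place_order` is computed; it only knows which message to send and how to present the response, matching the division of responsibilities described above.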

The process of designing view layer classes is divided into four major activities:

1. The macro level UI design process-identifying view layer objects.


This activity, for the most part, takes place during the analysis phase of system
development. The main objective of the macro process is to identify classes that
interact with human actors by analyzing the use cases developed in the analysis
phase. These use cases should capture a complete, unambiguous, and consistent
picture of the interface requirements of the system.

After all, use cases concentrate on describing what the system does rather than how it
does it by separating the behavior of a system from the way it is implemented, which
requires viewing the system from the user's perspective rather than that of the machine.
However, in this phase, we also need to address the issue of how the interface must be
implemented. Sequence or collaboration diagrams can help by allowing us to zoom in on
the actor-system interaction and extrapolate interface classes that interact with human
actors; thus, assisting us in identifying and gathering the requirements for the view layer
objects and designing them.

2. Micro level UI design activities:


Designing the view layer objects by applying design axioms and corollaries.
In designing view layer objects, decide how to use and extend the
components so they best support application-specific functions and provide the
most usable interface.
Prototyping the view layer interface. After defining a design model, prepare a
prototype of some of the basic aspects of the design. Prototyping is particularly useful
early in the design process.

3. Testing usability and user satisfaction. We must test the application to make sure it
meets the audience requirements. To ensure user satisfaction, we must measure user
satisfaction and usability along the way as the UI design takes form. Usability experts
agree that usability evaluation should be part of the development process rather than a
post-mortem or forensic activity. Despite the importance of usability and user
satisfaction, many system developers still fail to pay adequate attention to usability,
focusing primarily on functionality. In too many cases, usability still is not given
adequate consideration.

Macro-level process: identifying view classes by analyzing use cases

The interface object handles all communication with the actor but processes no
business rules or object storage activities. In essence, the interface object translates
the actor's actions into messages for the system and presents the system's responses
back to the actor. Effective interface design is more than just following a set of rules.
It also involves early planning of the interface and continued work through the
software development process. The process of designing the user interface involves
clarifying the specific needs of the application, identifying the use cases and interface
objects, and then devising a design that best meets users' needs. The remainder of
this chapter describes the micro-level UI design process and the issues involved.

Fig 4.23 The macro level design process.


Questions

Part-A

Q.No  Question                                      Competence  BT Level
1.    Compare Coupling and Cohesion.                Analysis    BTL 4
2.    List out the types of Database models.        Remember    BTL 1
3.    Define Corollaries.                           Remember    BTL 1
4.    List out the types of attributes.             Remember    BTL 1
5.    Define DDL and DML.                           Remember    BTL 1
6.    Define Axiom.                                 Remember    BTL 1
7.    What is meant by Coupling?                    Remember    BTL 1
8.    What is OCL?                                  Remember    BTL 1
9.    Analyze the purpose of DBMS.                  Analysis    BTL 4
10.   List out the various Visibility modes.        Remember    BTL 1

Part-B

Q.No  Question                                                            Competence  BT Level
1.    Elaborate the Object Oriented Design process in detail.             Remember    BTL 1
2.    Explain the steps involved in designing the access layer classes.   Remember    BTL 1
3.    Explain the activities involved in the macro and micro level
      processes while designing the view layer classes.                   Remember    BTL 1
4.    Explain about Corollaries in detail.                                Remember    BTL 1
5.    Discuss in detail about (i) Client Server computing
      (ii) Distributed Database.                                          Understand  BTL 2
6.    (i) Compare Traditional Database and Object Oriented Database.
      (ii) Analyze the characteristics of OOD.                            Analysis    BTL 4
7.    Explain OODBMS in detail.                                           Remember    BTL 1
