WebSphere Version 4
Application Development
Handbook
Complete guide for WebSphere
application development
Ueli Wahli
Alex Matthews
Paula Coll Lapido
Jean-Pierre Norguet
ibm.com/redbooks
International Technical Support Organization
September 2001
SG24-6134-00
Take Note! Before using this information and the product it supports, be sure to read the
general information in “Special notices” on page 569.
This edition applies to Version 4 of IBM WebSphere Application Server, WebSphere Studio, and
VisualAge for Java, for use with the Windows NT and Windows 2000 Operating System.
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the
information in any way it believes appropriate without incurring any obligation to you.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Special notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
IBM trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Part 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Form beans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Custom tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Internationalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Code dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Downsides. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
WebSphere Business Components Composer . . . . . . . . . . . . . . . . . . . . 165
When to use WSBCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Deployment and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
WSBCC elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Exporting the code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Exporting the EJB code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
EJB deployment tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Debugging in VisualAge for Java. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Installing new applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
Uninstalling applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Setting up resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Web server plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Application client resource configuration tool. . . . . . . . . . . . . . . . . . . . . . 458
Other tools in the Advanced Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
XMLConfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
WSCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Performing a unit test: executing the application . . . . . . . . . . . . . . . . . . . 463
Launching the Web application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Launching the client application with the launchClient tool . . . . . . . . . . . . 464
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
Preface
The target audience for this book includes team leaders and developers who are
setting up a new J2EE development project using WebSphere Application Server
and related tools. It also includes developers with experience of earlier versions
of the WebSphere product, who are looking to migrate to the Version 4
environment.
This book is split into four parts, starting with an introduction, followed by parts
covering the high-level development activities of analysis and design, coding,
and unit testing. A common theme running through all
parts of the book is the use of tooling and automation to improve productivity and
streamline the development process.
► In Part 1 we introduce the WebSphere programming model, the application
development tools, and the example application we use in our discussions.
► In Part 2 we cover the analysis and design process, from requirements
modeling through object modeling and code generation to the usage of
frameworks.
► In Part 3 we cover coding and building an application using the Java 2
Software Development Kit, WebSphere Studio Version 4, and VisualAge for
Java Version 4. We touch on Software Configuration Management using
Rational ClearCase and provide coding guidelines for WebSphere
applications. We also cover coding using frameworks, such as Jakarta Struts
and WebSphere Business Components.
► In Part 4 we cover application testing from simple unit testing through
application assembly and deployment to debugging and tracing. We also
investigate how unit testing can be automated using JUnit.
IBM trademarks
The following terms are trademarks of the International Business Machines
Corporation in the United States and/or other countries:
e (logo)®, IBM®, Redbooks, Redbooks Logo, AIX, Alphaworks, CICS, CT, DB2,
OS/390, S/390, Tivoli, VisualAge, WebSphere, Wizard
Comments welcome
Your comments are important to us!
Part 1. Introduction
In this part we introduce information used throughout the rest of the book. In
particular we describe:
► The programming model and application architecture appropriate for a
WebSphere application
► The new features in the latest releases of WebSphere Application Server,
WebSphere Studio and VisualAge for Java
► The PiggyBank application used to illustrate our examples
We will discuss each in terms of its features, along with advantages and
disadvantages to consider when making a decision about which pattern is most
appropriate for your application. Of course, any large system will likely use all of
the patterns discussed here, so understanding the trade-offs and when each
pattern best applies is key to choosing the application architecture.
The primary purpose of the Web browser is to display data generated by the
Web application server components and then trigger application events on behalf
of the user through HTTP requests. The data roughly corresponds to the static
model associated with the application flow model states.
Figure 1-1 shows the relationship between these three tiers in a graphical
fashion, indicating the system components normally hosted on each tier, along
with the primary protocol by which they communicate with the other tiers (the ‘???’
label on the connection indicates that there are possibly many different ones
depending on the system).
[Figure 1-1 The three application tiers, showing applets in the browser, servlets and JSPs on the Web application server, and databases and transaction systems on the enterprise servers]
All programming models, regardless of the architectural tier, have three distinct
features that are key to developing an application:
► The components that embody application functions
► Control flow mechanisms used to invoke one component from another
► Data flow sources used to pass data from one component to another
Each of these features will be discussed in a separate section with the following
information:
► A basic definition of the component or mechanism
► The role it plays in the architecture
► Some pros and cons as to its usage
► Alternative approaches, if any exist
Application components
Application components are those that a developer will actually have to program,
whether manually or with the aid of tools. The other features of the programming
model represent services that the developer can use when coding an application
component. The language used to develop a given application component will
depend in large part upon the “tier” where the component will be executed at
runtime.
For example, browser-based components will tend to use tag and script-oriented
languages, while Web application server components will tend towards Java.
Enterprise server components may use a variety of languages other than just
Java, such as C, C++, COBOL and the like, so we will focus on the distributed
object server, which tends towards Java as the language of choice.
Because the language differences tend to divide along tier boundaries, we will
divide this section into three separate subsections as we describe the
components you develop that are hosted by browsers, Web application servers,
and distributed object servers.
HTML
HyperText Markup Language (HTML) is the basic “programming language” of the
browser. With HTML, you can direct the browser to display text, lists, tables,
forms, images, and just about everything else you can think of.
The reason that this distinction is important is that static HTML pages do not
require that the content be generated by programmatic means, such as Web
application components hosted within WebSphere (servlets and JSPs). These
components will be discussed in the next section.
Pros:
Static HTML Web pages are not generated by Web application components,
such as servlets and JSPs. Their static nature means that they can be cached by
either the browser or proxy servers. On the development side, they can be
created and maintained with a WYSIWYG (what-you-see-is-what-you-get) editor.

Cons:
Static HTML cannot be customized on the fly based on customer preferences or
application events. Even pages that may seem to be “naturally” static, such as
the Customer Home, might actually benefit from being generated dynamically.
For example, you might limit the functions that a Customer sees based on the
class of service for which they are registered.
Alternatives
As mentioned above, the “programming language” of the browser is mainly HTML (with
DHTML and JavaScript being the primary exception as described next). However, an
XML-enabled browser can be used to generate the HTML on the client side.
Finally, you should consider creating dynamic components for every “top level”
(non-dialog state), even if it appears to be static. This approach not only makes it easier
to add dynamic content later, but also makes it easier to compose into other pages.
Each object in the DOM has a set of associated attributes and events, depending
on the type of object. For example, most objects have attributes describing their
background and foreground colors, default font, and whether they are visible or
not. Most have an event that is triggered when the object is loaded into the DOM
or displayed. An object, such as a button, has attributes that describe the label
and events that fire when it has been pressed.
Events are special because they can be associated with a program that executes
when the event is triggered. One language that can be used for the program is
JavaScript, which is a scripting language with Java-like syntax. JavaScript can
be used to change the attributes of objects in the DOM, thereby providing limited
control of the application flow by the browser.
In the middle ground are more complex syntactic validations that involve multiple
fields or begin to incorporate business process policies. For example, is the start
date less than the end date? Does the date requested fall on a weekend or
holiday? There are arguments both for and against handling complex syntactic
validations on the client side. The most forceful arguments against are that it
introduces extra complexity and redundancy in the DHTML, and can cause a
maintenance problem as policies change.
Pros:
Hopefully, the benefit of using DHTML and JavaScript in these scenarios is
obvious: one or more round trips to the Web application server are eliminated,
making the application both more efficient and more usable (mainly because the
response time is much snappier).

Cons:
Using DHTML/JavaScript for application control flow, whether it is on the client
or server side, requires programming skills and is more complicated to develop
and test. You cannot use WYSIWYG editors for the code.

There are differences among the browsers in the details of the functions
supported. To avoid a browser dependency for the Web application,
programmers are forced to either stay with a common subset of functions or add
branching logic and optimize for each browser.

When syntactic validations (either simple or complex) are handled in DHTML
and JavaScript, you still have to revalidate on the server side for each request
just in case the client circumvents the input forms. This leads to redundancy of
code on the client and server.
Alternatives
Really, there is no good alternative to DHTML and JavaScript for handling confirmations,
validations, menus, and lists. The complexity for the HTML developer can be managed
somewhat by having a separate programming group develop a set of common functions
that encapsulate the differences between the browsers and have every page designer
include this set of functions within their HTML.
The main difference between framesets and named windows is that framesets
tile the various frames within a single browser window, while named windows
have a separate window for each unique name. Frames in a frameset can have
various attributes that define behaviors such as whether they are resizable,
scrolling, or have borders. Separate named windows can be cascaded or
manually tiled by the user as they see fit.
Figure 1-4 shows a stylized view of how this page might look using framesets.
Figure 1-4 Stylized view of online buying application frameset
Although not explored in any more detail here, a frameset makes it easy to
mingle Web publishing and business applications together. In this approach, you
provide visual interest such as images, advertisements, news, and such in the
“surrounding” frames, and keep the frames associated with the business of the
application clean, simple, and most importantly fast (because they can be mostly
text based).
Alternatives
Before we abandon framesets because of the disadvantages mentioned above, there
are some workarounds to consider:
► Printing: develop explicit print functions.
► Bookmarking: maintain the last page in a database.
► Backward/forward: disable the back and forward buttons on the browser.
► Browser support: named windows instead of framesets.
If these workarounds cannot be used in your Web application, the only real alternative
to framesets is to compose the pages representing the individual states, and pay the
cost of rerendering the entire page on every request.
Finally, more and more browsers are becoming XML enabled. XML-enabled
browsers can handle XML documents returned from the Web server in response
to a request. The XML document can refer to an associated stylesheet coded in
XSL. The stylesheet is used by the browser to map the XML tags to the HTML
that is ultimately displayed. If no stylesheet is specified, the browser will use a
default format that takes advantage of the tag names.
Pros:
One advantage of using XML rather than HTML is that the stylesheet can be
modified to change the look and feel without having to change the Web
application components (described later) that generate the data.

Another advantage is that the size of the result will be smaller than the resulting
HTML in many cases.

Yet another advantage is that the same XML document may be usable in other
contexts than a Web browser, making it possible to reuse the Web application
components.

Cons:
The main disadvantage is that XML-enabled browsers are not yet available
everywhere, although they are rapidly becoming so.

Another disadvantage is that XSL-based stylesheets can be quite complex to
code and difficult to debug. WYSIWYG editors for XML/XSL are not yet widely
available either.
Alternatives
One alternative is to have the Web application components check the browser type and
either generate HTML for non-XML-enabled browsers or return the raw XML for
XML-enabled browsers. The next subsection will discuss this idea further.
Of course, the focus of this section is the WebSphere Application Server used to
serve up dynamic pages.
Servlets
For purposes of understanding the programming model, you develop servlets to
encapsulate Web application flow of control logic on the server side (when it
cannot be handled by DHTML on the client side).
The HttpServlet Java class from which you will inherit (extend) has a number of
methods that you can implement that are invoked at specific points in the life
cycle. The most important ones are:
► init, executed once when the HttpServlet is loaded
► service, by default calls doGet or doPost, unless overridden
► doGet, executed in response to an HTTP GET request
► doPost, executed in response to an HTTP POST request
► destroy, executed once when the HttpServlet is unloaded
The service type methods (for example, doGet and doPost) are passed two
parameters: an HttpServletRequest and an HttpServletResponse object, which
are Java classes that encapsulate the differences among various Web servers in
how they expect you to get parameters and generate the resulting HTML page.
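To make the life cycle concrete, here is a minimal sketch of a servlet (the class
name and request parameter are our own invention, not part of the example
application) that reads a parameter in doGet and writes an HTML response
through the response writer:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GreetingServlet extends HttpServlet {

    // Called once when the servlet is loaded; acquire shared resources here.
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // one-time setup (for example, reading initialization parameters)
    }

    // Called for each HTTP GET request.
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String name = request.getParameter("name");   // from the URL query string
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<HTML><BODY>");
        out.println("<H1>Hello, " + (name == null ? "guest" : name) + "</H1>");
        out.println("</BODY></HTML>");
    }

    // Called once when the servlet is unloaded; release resources here.
    public void destroy() {
    }
}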
At one extreme, there are those that create only one servlet to control the entire
application (or worse, they may only build one servlet, ever). The doGet or doPost
methods use a parameter (or the URI) from the request object to determine the
action to take, given the current state. Possible shortcomings are:
► Unmaintainable, when implemented as a large case statement.
► Redundant with other approaches described next, when implemented by
forwarding to an action-specific servlet or JSPs (you might as well route the
request directly to the appropriate servlet).
► Redundant with the servlet APIs themselves, when implemented by loading
an action-specific functional class (the class invoked has to look just like a
servlet, with request and response objects).
► Security for a given function must be manually coded rather than using the
per-servlet security provided by the WebSphere administration tools.
At the other extreme of the granularity spectrum is one servlet per action. This is
a better approach than a single servlet per application, because you can assign
different servlets to different developers without fear that they will step on each
other’s toes. However, there are some minor issues with this approach as well:
► Servlet names can get really long to ensure uniqueness in the application.
► It is more difficult to take advantage of commonality between related actions
without creating auxiliary classes or using inheritance schemes.
In the middle is to develop a single servlet per state in the application flow model
that has dynamic content or actions. This approach resolves the issues
associated with the approaches described above. For example, it leads to a
“natural” naming convention for a servlet: StateServlet. The doGet method is
used to gather and display the data for a given state, while the doPost method is
used to handle the transitions out of the state with update side effects.
Ownership can be assigned by state. Further, commonality tends to occur most
often within a given state and service method type.
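A sketch of this state-per-servlet pattern might look as follows; the state name,
JSP path, and next-state URL are hypothetical. The doGet method gathers the
data for the state and delegates the layout to the matching JSP, while doPost
performs the update side effect and redirects to the next state:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// One servlet per state: AccountSummaryServlet owns the AccountSummary state.
public class AccountSummaryServlet extends HttpServlet {

    // Gather the data for the state and forward to the corresponding JSP.
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String customerId = request.getParameter("customerId");
        // A data structure JavaBean would normally be set here; a String keeps the sketch simple.
        request.setAttribute("customerId", customerId);
        request.getRequestDispatcher("/AccountSummary.jsp").forward(request, response);
    }

    // Handle the transitions out of the state that have update side effects.
    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String action = request.getParameter("action");
        // ... invoke the business logic for the requested transition here ...
        response.sendRedirect("CustomerHome");   // next state, fetched with a GET
    }
}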
Pros:
Before the Servlet API became available, each Web application component
(usually a CGI program) had to code to a Web server-specific API. Java servlets
are very portable and can be used with the leading Web servers. Also, servlets
stay resident once they are initialized and can handle multiple requests. CGIs
generally start a new process for each request.

Servlets can be multi-threaded, making them very scalable. The application
server creates a new thread per client.

Because servlets are Java programs, they can be developed with an IDE, such
as VisualAge for Java.

Cons:
A minor disadvantage to servlets is that they require explicit compiling and
deployment into an application server.
Alternatives
You can develop monolithic servlets that handle both the application flow logic and
generate HTML, or even go to the extreme of handling business process logic as well.
The only advantage of this approach is that the end-to-end path length is shorter.
The problem with monolithic servlets is that the layout cannot be developed with a
WYSIWYG editor, nor can the business logic be reused in other client types, such as
Java applications. Further, it makes it much more difficult to move the application to
alternate output media, such as WAP and WML.
JavaServer Pages, to be discussed next, are considered by some to be a viable
alternative to servlets, because they are functionally equivalent.
There are numerous tags that allow the developer to do such things as import
Java classes, and declare common functions and variables. The most important
ones used by a JSP developer to generate dynamic content are:
► Java code block (<% code %>), usually used to insert logic blocks such as
loops for tables, selection lists, options, and so on
► Expressions (<%= expression %>), usually used to insert substitute variable
values into the HTML.
► Bean tag (<jsp:useBean>), used to get a reference to a JavaBean scoped to
various sources, such as the request, session, or context.
► Property tag (<jsp:getProperty>) is a special-purpose version of the
expression tag that substitutes a specified property from a bean (loaded with
the useBean tag).
There is also a standard tag extension mechanism in JSP that allows the
developer to make up new tags and associate them with code that can either
convert the tag into HTML or control subsequent parsing (depending on the type
of tag created). This feature would allow a developer (or third-party providers) to
build tags that eliminate the need to explicitly code expressions and java code
blocks, making the JSP code look more HTML-like and less Java like. Custom
tags can make it very easy for non-programmers to develop JSPs (those with
Java skills can develop specialized tags to generate tables, option lists, and
such).
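As an illustration of the tag extension mechanism, the following is a minimal
sketch of a JSP 1.1 tag handler; the tag and its output text are invented for
illustration, and the class would also need an entry in a tag library descriptor
before a page could use it:

import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

// Hypothetical custom tag that writes a standard footer, so page authors can
// code something like <piggybank:footer/> instead of a Java expression.
public class FooterTag extends TagSupport {

    public int doStartTag() throws JspException {
        try {
            pageContext.getOut().print("PiggyBank -- your friendly example bank");
        } catch (IOException e) {
            throw new JspException(e.getMessage());
        }
        return SKIP_BODY;   // nothing between the start and end tags to evaluate
    }
}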
Whether extended tags are used or not, we recommend developing JSPs such
that multiple states can be composed within a single page (see “HTML” on
page 11 and “Framesets and named windows” on page 14 for more details on
page composition). This approach actually simplifies the individual JSPs
because they need not worry about setting headers or the enclosing <HTML> and <BODY> tags.
Pros:
One huge advantage of JSPs is that they are mostly HTML with a few special
tags here and there to fill in the blanks from data variables. The standard
extension mechanism allows new tags to be developed that eliminate the need
to use the Java escape tags at all.

Further, JSPs require none of the “println” syntax required in an equivalent
servlet. This tag-oriented focus makes them relatively easy to WYSIWYG edit
with tools such as WebSphere Studio Page Designer. This focus also makes it
easier to assign the task of building JSPs to developers more skilled in graphic
design than programming.

JSPs can be used to provide meaningful error indicators on the same page as
the input fields, including specific messages and highlighting. Static HTML does
not provide this capability.

Another advantage is that JSPs do not require an explicit compile step, making
them easy to develop and test in rapid prototyping cycles. This feature tempts
some developers to use JSPs instead of servlets to handle the data gathering
and update-transition functions, logic that is traditionally associated with the
controller component of an MVC architecture.

Cons:
There are some good reasons not to use JSPs to control the application flow:

► Current JSP tools do not provide IDE functions for code blocks.
► A developer should not handle the application control flow and the layout.
► Combining application flow and layout in a JSP makes it difficult to migrate to
another output media.
► All HTML tags are compiled into the servlet's service method. This makes
inheritance of common look-and-feel behaviors in JSPs very difficult.

There are some minor issues associated with using JSPs:

► JSPs compile on the first invocation, which usually causes a noticeable
response time delay.
► Communication between the JSP and servlet creates a name, type and data
flow source convention issue. In other words, how do you pass data elements
between a servlet and the corresponding JSP? The next section discusses
using a JavaBean to encapsulate the data needed by a JSP.
Alternatives
XML provides a viable alternative to JSP in some situations. It is possible to have the
servlet for a given state return XML directly to an XML-enabled browser, using an XML
parser-generator. Even if a user’s browser does not support XML, the servlet could use
the associated stylesheet to generate the corresponding HTML without using a JSP. We
will discuss this possibility further in the next section, where JavaBeans can be
employed to simplify this process.
This approach allows you to take advantage of the quick prototyping capability of
JSPs early in the development cycle (no compile or deploy step needed). Later
on you could convert the “servlet” JSP to a real servlet (to avoid the need to
precompile the JSPs as described above).
However, we should say here that such tools as VisualAge for Java Enterprise
Edition with its embedded WebSphere Test Environment provide the ability to
rapidly develop and test servlets as easily as JSPs, minimizing the development
cycle-time advantage described above that might motivate the use of JSPs for
application flow control.
A data structure JavaBean is usually nothing but a simple set of properties, with
no need for events or methods (beyond gets and sets of the associated
properties).
Data structure JavaBeans are sometimes made “immutable”. That is, all
properties are private and only get methods are provided to prevent the data
from being updated. Also, data structure JavaBeans sometimes are associated
with a separate key subcomponent that encapsulates those properties that
uniquely identify the associated data.
Immutable or not, key or not, a data structure JavaBean should implement the
serializable interface that enables it to be passed remotely and stored in various
files and databases. An implication of being serializable is that the object
properties must be simple types or strings, or that any contained objects must be
serializable.
Note: Data structure beans are also called cargo beans, value beans, and
other names.
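A minimal sketch of such a bean, with a separate key subcomponent and only
get methods, might look like the following; the Customer properties are invented
for illustration, and in practice each public class would live in its own source file:

import java.io.Serializable;

// Key subcomponent: encapsulates the properties that uniquely identify the data.
class CustomerKey implements Serializable {
    private String customerId;

    public CustomerKey(String customerId) {
        this.customerId = customerId;
    }

    public String getCustomerId() {
        return customerId;
    }
}

// Immutable data structure (cargo) bean: private properties, get methods only.
public class CustomerData implements Serializable {
    private CustomerKey key;
    private String name;
    private String serviceClass;

    public CustomerData(CustomerKey key, String name, String serviceClass) {
        this.key = key;
        this.name = name;
        this.serviceClass = serviceClass;
    }

    public CustomerKey getKey()     { return key; }
    public String getName()         { return name; }
    public String getServiceClass() { return serviceClass; }
}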
For purposes of the Web application server tier, we also see them used to
maintain the data passed between the servlet and other middle-tier components,
especially JSPs (described in “JavaServer Pages” on page 21) when there is
more than one property involved. They may represent data from the model as it
is transformed for a specific view associated with a JSP, or as occurs in many
cases, it may be that the model object does not need transforming and can be
passed to the JSP as is.
Some developers build a data structure JavaBean for every JSP whether it has
more than one property or not, and whether or not it is associated with a servlet.
They may also make these data structure JavaBeans immutable, as
described above, to make them easier to deal with in WYSIWYG editors (only
get methods would show in the palette of functions available).
Tip: Consider the use of view beans that provide an interface between data
structure beans and JSPs. View beans will format the values contained in data
beans into strings that are easily accessible by JSPs.
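A view bean along these lines might simply expose string-valued get methods
that format the underlying values. The following sketch, with invented property
names, formats a numeric balance as a currency string ready for a
<jsp:getProperty> tag:

import java.text.NumberFormat;
import java.util.Locale;

// Hypothetical view bean: exposes only Strings, ready for <jsp:getProperty> tags.
public class AccountView {
    private String owner;
    private double balance;

    public AccountView(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    public String getOwner() {
        return owner;
    }

    // Format the numeric balance as a currency string for direct display in a JSP.
    public String getBalance() {
        return NumberFormat.getCurrencyInstance(Locale.US).format(balance);
    }
}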
Some XML enthusiasts propose XML as a dynamic substitute for explicitly coded
JavaBeans (see “XML, DTD and XSL” on page 16). With this approach, a single
XML string is passed or stored rather than a data structure JavaBean. The
receiving component then uses the XML parser to retrieve the data.
While we are strong proponents of XML, and see its merits as a possible
serialized format of a data structure JavaBean, we would not recommend using
XML-encoded strings as a substitute, especially in situations where the data
structure is known at design time.
The extra overhead of generating and parsing the XML strings, plus the storing,
retrieving and transmitting of all the extra tags, makes them very expensive with
respect to the equivalent data structure JavaBean.
Pros:
The data structure JavaBean represents a formal contract between the servlet
and JSP developer roles involved. Adding properties is easy because the
servlets already create, populate, and pass the associated JavaBeans, while the
JSPs already use the bean and property tags. You can independently modify the
programs to use the new properties. Also, the new properties can be optional
with a default assigned as part of the constructor.

Removing a property from the contract without modifying the associated servlets
and JSPs that use them will cause errors to be caught at compile rather than
runtime.

It allows the servlet developer for a given state to focus entirely on Java to
control the application control and data flow, while the JSP developer can focus
entirely on HTML or XML-like tags that control the layout.

Many tools are available that take advantage of JavaBean introspection for such
varied functions as providing command completion and selection of properties
from a palette at development time, to populating from and generating XML at
runtime.

Setting properties into a data structure JavaBean, and then setting the whole
data structure into a data flow source (such as HttpServletRequest attributes to
be discussed in “HttpServletRequest attributes” on page 49) is much more
efficient than setting individual properties into that source one at a time. And
getting a single data structure JavaBean from that same source, and then getting
its properties locally, is much more efficient than getting multiple properties
directly from the source.

The same data structure JavaBeans are likely to be used as contracts with other
components because of their simplicity, providing a high degree of reuse.

Cons:
There are no serious disadvantages to using data structure JavaBeans.

The only issue is that they can be rather expensive to create, and may cause
extra garbage collection cycles as memory is used. To circumvent this problem,
some developers use pooling techniques, where a number of pre-constructed
JavaBeans wait to be requested, used, and then released back to the pool.

Data structure beans can be tedious to develop, although in many instances
tools will generate the data structure beans.
Alternatives
There is honestly no good alternative to using data structure JavaBeans as the formal
contract between components in the architecture. And, as we will see in the following
sections, data structure JavaBeans are used just about everywhere, making them well
worth the investment.
A business logic access bean is a Java class whose methods encapsulate a unit
of work needed by any type of application, be it for the Web or a distributed
client/server. In other words, a business logic access bean is intended to be user
interface independent.
In our sample application we mapped each use case to a business logic access
bean.
The other primary purpose of the business logic access bean is to insulate the
client from various technology dependencies that may be required to implement
the business logic.
Business logic access beans will almost always make use of data structure
JavaBeans and associated keys in the input and output parameters. Further, any
data cached within an access bean is likely to be in terms of data structure
JavaBeans and associated keys, so the two concepts go hand in hand.
You can sometimes simulate a stateful access bean with a stateless one by
including extra parameters that either identify the client or contain the current
state data.
The identity is used in the first approach to look up state data cached in the
access bean. If the second approach is used, the current state data is used in the
called method and a new current state is returned to the client (as a data
structure JavaBean, as described in “Data structure JavaBeans (data beans)” on
page 23) to keep until the next call.
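The second approach might look like the following sketch, where a stateless
access bean method receives the current state as a data structure JavaBean
and returns a new one for the client to keep; all of the names here are
hypothetical:

import java.io.Serializable;
import java.util.Vector;

// Hypothetical data structure JavaBean holding the work-in-progress state that
// the client keeps between calls (for example, in its HttpSession).
class OrderState implements Serializable {
    private Vector itemIds = new Vector();

    public Vector getItemIds() {
        return itemIds;
    }
}

// Stateless business logic access bean: each method receives the current state
// and returns the new state, so no conversational data is kept in the bean.
public class OrderAccessBean {

    public OrderState addItem(OrderState current, String itemId) {
        // ... business checks against the back end would go here ...
        OrderState next = new OrderState();
        next.getItemIds().addAll(current.getItemIds());
        next.getItemIds().addElement(itemId);
        return next;   // the caller keeps this as the new current state
    }
}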
Pros Cons
Alternatives
That said, there is an alternative to wrapping business logic into “vanilla” Java classes,
and that is to directly use Enterprise JavaBeans in the client code. In the next section,
we discuss the various types and applications that we feel make this a viable
alternative.
There are two main types of EJBs that can be developed as part of the
programming model: session and entity. In a nutshell, session EJBs are those
that have a short life cycle that lasts only as long as both the client and server
maintain a reference to the session. This reference can be lost if the client
removes the session, or if the server goes down or the session “times out”.
An entity EJB is one that once created (either explicitly or implicitly) can be
subsequently found with a “key” for use by a client application. It persists until the
remove operation is invoked by a client application or until the data is explicitly
removed from the underlying store.
Within a session EJB there are three implementation types: stateless, stateful,
and stateful with session synchronization. Within an entity EJB there are two
implementation types depending on how persistence is managed:
container-managed persistence (CMP), and bean-managed persistence (BMP).
The effect of being stateless is that any active instance of a stateless session
EJB can service a request from any client in any sequence. This feature makes
stateless session EJBs the most scalable.
You could use the stateless session EJBs directly by the client program, or have
the business logic access bean wrapper call the stateless session bean.
The implementations of the stateless session EJBs can be exactly the same as
those provided for the business logic access beans, or they can take advantage
of the features of stateless session EJBs.
For example, if the code manually manages a connection pool for a relatively
expensive resource, you can cache the connection in the stateless session EJB
(as long as it is not client specific). This approach effectively lets the EJB
container act as the pooling mechanism, and makes getting the connection
transparent to the business logic, which can simply use the connection.
Tip: Rather than look up the home in the JNDI context, narrow it, and create
the session over and over again for each request, you can create the session
once and cache it in the client (either the servlet or, preferably, the business
logic access bean). This approach should be considered a “best practice”
even though the IBM implementation of the JNDI context in WebSphere
Application Server automatically caches homes to provide a high degree of
scalability.
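A sketch of this lookup-narrow-create-and-cache sequence is shown below; the
Banking interfaces and the JNDI name are hypothetical stand-ins for whatever is
defined in your deployment descriptor:

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote and home interfaces; the real ones come from your EJB project.
interface Banking extends EJBObject {
}

interface BankingHome extends EJBHome {
    Banking create() throws CreateException, RemoteException;
}

// Business logic access bean that looks up the home once and caches both the
// home and the created session for reuse across requests.
public class BankingAccessBean {
    private BankingHome home;
    private Banking session;

    private synchronized Banking getSession()
            throws NamingException, CreateException, RemoteException {
        if (session == null) {
            if (home == null) {
                InitialContext ctx = new InitialContext();
                Object ref = ctx.lookup("java:comp/env/ejb/Banking");   // hypothetical JNDI name
                home = (BankingHome) PortableRemoteObject.narrow(ref, BankingHome.class);
            }
            session = home.create();   // created once and reused
        }
        return session;
    }
}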
Alternatives
If, for example, all the logic is handled by back-end CICS transactions, or all the data is
maintained in a single DB2 database using precompiled SQLJ queries, then a simple
business logic access bean that directly accesses these back-end systems may be the
preferred approach.
Unlike stateless session EJBs, stateful session beans can support a custom
create that takes parameters useful in initializing the state. This feature can be
very useful in simplifying the other method signatures, because they can assume
that the state of the session EJB includes those parameters useful for the lifetime
of the session.
Pros:
One benefit of using stateful session EJBs is that the methods map more closely
to the transitions associated with the business process model than those of the
stateless session EJB (or business logic access bean) described previously.
Also, the fewer number of parameters means that there is less data to marshal
and demarshal in a remote method invocation.

Another benefit of a stateful session EJB is that it can reduce the number of calls
to the back end by caching frequently used data as part of its state.

Taking this idea to an extreme, stateful session EJBs can cache data considered
to be work-in-progress, eliminating all calls to the back end until specific
“checkpoint” type transitions. This can be especially advantageous in situations
where application events may terminate the processing before its logical
conclusion.

Cons:
The primary disadvantage to using stateful session EJBs is that there are very
few quality of service guarantees with respect to the ACID properties you might
expect when working with components. The container is not obligated to provide
for failover of stateful sessions by backing up the nontransient instance variables
in a shared file or database; so in general, if the server hosting the stateful
session EJB goes down, the state is lost. Further, the session may time out due
to inactivity and the state is lost.

Another disadvantage related to the quality of service guaranteed for stateful
session EJBs is that the container does not roll back the state if the overall
transaction fails.
Alternatives
There are alternatives to using stateful session EJBs. For example, any of the
approaches for converting a stateful to stateless access bean described in “Business
logic access beans” on page 26 can be used. These same approaches could be used
to convert a stateful session EJB into a stateless one, especially in situations where the
data is stable and read only, or if client/server affinity is already being used.
In either of these cases, a singleton memory cache can be shared by all instances of a
stateless session EJB within the same JVM to maintain data. It is also possible to cache
this data in the client or Web application server (see “Data flow sources” on page 45
for details).
Another alternative to stateful session EJBs when failover and ACID properties are
required is to use an entity EJB (discussed in detail below). In this case, the “pseudo
session” life cycle would be explicitly managed by the application, but its state data
would be immune to timeout as well as server and transaction failures.
The effect is that the same session EJB can be called one or more times in the
context of a single transaction, and the container (in conjunction with the
transaction controller) manages the calls required to close out the transaction
without an explicit call from the business logic methods.
The business logic methods can throw a system exception or set a flag to cause
a rollback; they can throw application exceptions or exit normally to cause a
commit.
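For reference, the javax.ejb.SessionSynchronization interface consists of three
callbacks: afterBegin, beforeCompletion, and afterCompletion. The following
sketch shows a stateful session bean using them to back up and restore its
cached state; the Hashtable used for the state is purely illustrative:

import java.util.Hashtable;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.ejb.SessionSynchronization;

// Sketch of a stateful session bean that uses session synchronization to keep
// its cached state consistent with the transaction outcome.
public class OrderEntryBean implements SessionBean, SessionSynchronization {
    private Hashtable workInProgress = new Hashtable();
    private Hashtable backup;

    // --- SessionSynchronization callbacks driven by the container ---
    public void afterBegin() {
        backup = (Hashtable) workInProgress.clone();   // snapshot at transaction start
    }

    public void beforeCompletion() {
        // last chance to push cached changes to the backing store
    }

    public void afterCompletion(boolean committed) {
        if (!committed) {
            workInProgress = backup;                   // rolled back: restore the snapshot
        }
    }

    // --- Standard SessionBean plumbing ---
    public void setSessionContext(SessionContext ctx) {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}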
Pros:
The nice thing about session synchronization is that the business logic of the
session no longer has to be concerned with managing transactions and cached
state. Instead, business logic methods need only throw an exception when an
error occurs to cause a rollback, or return successfully to cause a commit. In
either case, the associated state is properly managed. If the code needs to
cause a rollback without throwing an exception (say for read-only methods), it
can explicitly invoke a setRollbackOnly on its EJB transaction retrieved from the
context.

In cases where the session EJB was originally stateless and only added session
synchronization (and state) to hold a transaction, then failover and timeout is
definitely not an issue, because the client (HttpSession or business logic access
bean) will create one as needed anyway.

Cons:
Except for the simple cases, the session synchronization interface can be very
difficult to code and implement, especially if the underlying resource does not
provide support.

Also, the code to manage transactions must apply to all methods on the session
that require a transaction. For example, there is no way to process the
backup/restore differently based on the method(s) invoked without involving the
methods themselves. In this case, it may be best to handle the compensation in
the methods themselves.

Implementing the session synchronization interface cannot be considered to
support true two-phase commit. The reason is that the transaction coordinator is
not obligated to resurrect the session and complete the transaction if there is a
failure between phases. The net effect is that there is a window of opportunity
where resources can become out of synch.

Finally, session synchronization is relatively expensive to achieve at runtime,
because it adds an additional set of methods that must be called to manage a
transaction. There should never be more than one or two per unit of work (either
of our designs above has only one).
Alternatives
If a stateful session EJB is being converted to use session synchronization simply to
provide transactional semantics of the cached data, then consider using a CMP entity
EJB. The advantage would be transparent transactional semantics on the persistent
properties.
In other cases, the best alternative is to defer session synchronization implementation
to the deployer role and have the business logic developer code the session methods
to be as independent of transactional semantics as possible. This alternative takes
session synchronization out of the “normal” programming model and makes it a
deployment responsibility.
An entity has a set of properties, including those that make up the key, which are
considered to be part of its persistent state. The associated business logic
methods operate upon these properties without regard to how they are loaded
and stored.
In a CMP entity EJB, the container manages the persistent properties. When
bean-managed persistence (BMP) is specified, the developer explicitly codes
well-defined methods invoked by the container to manage the persistent
properties.
As with all EJBs, care must be taken to minimize the interactions between the
client and server, even if the two will be co-deployed (as when the client is a
session EJB). For entity EJBs, we recommend the use of the following
approaches:
► Custom creates. These are designed to create the object and initialize its
properties in a single call, rather than the default create that takes just the key
properties followed by individual sets (or a call to a copy helper method as
described below).
► Custom finders. These are designed to return a subset of the entity EJBs
associated with the underlying data store, usually by passing in various
properties that are used to form a query.
► Copy helpers. These are get and set methods that use data structure
JavaBeans to return or pass a number of properties at once.
► Custom updates. These are designed to do some update function and return
a result in a single call.
Where entity EJBs are used, you will usually end up with the following:
<Entity>Key: A data structure JavaBean that holds the key properties
<Entity>Data: A data structure JavaBean that holds both key and data
properties of the entity. Some go as far as to create a
<Entity>DataOnly that holds only the non-key properties to
minimize the marshalling overhead for the gets and sets.
<Entity>Home: The home interface for finding/creating the EJB, usually with the
following methods:
<Entity> create(<Entity>Data):
creates a new entity and initializes all the properties
<Entity> findByPrimaryKey(<Entity>Key):
finds based on the key
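Following these conventions, a hypothetical Customer entity might have remote
and home interfaces along the following lines, reusing the CustomerKey and
CustomerData data structure JavaBeans described above and adding a custom
finder (each interface would normally be in its own source file):

import java.rmi.RemoteException;
import java.util.Enumeration;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.FinderException;

// Remote interface with copy helper methods that move all properties at once.
interface Customer extends EJBObject {
    CustomerData getCustomerData() throws RemoteException;
    void setCustomerData(CustomerData data) throws RemoteException;
}

// Home interface with a custom create and a custom finder.
public interface CustomerHome extends EJBHome {

    // Custom create: builds the entity and initializes all properties in one call.
    Customer create(CustomerData data) throws CreateException, RemoteException;

    // Find by primary key, using the key data structure JavaBean.
    Customer findByPrimaryKey(CustomerKey key) throws FinderException, RemoteException;

    // Custom finder: returns the subset of customers in a given class of service.
    Enumeration findByServiceClass(String serviceClass) throws FinderException, RemoteException;
}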
Of course, there are numerous approaches that can be used. For example, many
like to include methods that have individual properties passed in rather than
forcing the use of a data structure JavaBean.
Also, many will add methods on the entity EJBs to aid in navigation across
associations between objects. Of course, the implementations of these
navigation methods ultimately use the custom finders described above.
Pros:
The primary benefit of CMP entity EJBs is that persistence and transactions are
completely transparent to the business logic methods. When we used session
EJBs, the only way to get similar functionality was to implement the session
synchronization interface and use the methods to load or store the state from a
backing store.

This advantage is key from an evolutionary perspective. Let's say our early
iterations used the Persistence Builder behind the business object access beans
and thus required session synchronization in the stateless session EJB
associated with the Entry business logic access bean. Later, we migrate the
business object access beans to use entity EJBs. Once all the access beans are
converted, we could reimplement the stateless session bean to drop session
synchronization without having to touch the business logic. The transaction
started by the stateless session bean propagates through to each entity so that
any changes are all or nothing.

Cons:
As with all EJBs, the downside to CMP entities shows how having a rich set of
object services can be a double-edged sword: the overhead associated with
managing distribution, security, and transactions can be very expensive. CMP
entity EJBs require the developer to trust the container implementation to
provide persistence in an efficient manner.

Currently, there are numerous deployment choices available within WebSphere
Application Server for entity EJBs. While this is not a problem for the
programming model, and should be considered to be an advantage, it does
complicate the decision whether or not to use entity EJBs in the first place.

At the same time that there are a large number of choices, there are never
enough. Some would like CMP containers for CICS VSAM files, or IMS DL/I.
Others are fine with relational databases, but would like even more bells and
whistles, such as preloading of related objects.
Alternatives
There are at least three alternatives to CMP entities when our current container
implementations do not seem to meet your requirements:
► Client access beans. This option may make sense if you cannot afford the remote
method call overhead associated with EJBs.
► Session EJBs. This option may make sense if you need a thin client tier or must
isolate the business logic from the client for integrity or load-balancing purposes.
► BMP entity EJBs. This option may make sense if having a simplified programming
model for the business logic is the biggest requirement, but you have database
requirements not met by our current container implementations.
The first two options have already been discussed in detail in this section. All three
options can be used together effectively: business logic access beans passing through
to session EJBs, which use business object access beans passing through to BMP
entity EJBs. We will discuss BMP entities next.
In short, the ability to develop BMP methods expands the applicability of entity
EJBs to situations where tighter control of the underlying data store is required.
This requirement can occur when WebSphere does not support a legacy
database. It can also occur when performance considerations preclude using the
“vanilla” code generated for CMP entities.
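As a sketch of what the developer takes on with BMP (the table and column
names are invented, error handling is trimmed, and the remaining EntityBean
callbacks are omitted), ejbLoad and ejbStore move the persistent properties
between the bean and a JDBC data store:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.ejb.EJBException;
import javax.ejb.EntityBean;

// Fragment of a BMP entity bean: only the load and store callbacks are shown.
public abstract class CustomerBMPBean implements EntityBean {
    public String customerId;     // key property
    public String name;           // persistent property

    protected abstract Connection getConnection() throws Exception;   // for example, from a DataSource

    // Called by the container to refresh the persistent state from the store.
    public void ejbLoad() {
        try {
            Connection con = getConnection();
            PreparedStatement ps = con.prepareStatement(
                "SELECT NAME FROM CUSTOMER WHERE ID = ?");
            ps.setString(1, customerId);
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                name = rs.getString(1);
            }
            rs.close();
            ps.close();
            con.close();
        } catch (Exception e) {
            throw new EJBException(e);
        }
    }

    // Called by the container to write the persistent state back to the store.
    public void ejbStore() {
        try {
            Connection con = getConnection();
            PreparedStatement ps = con.prepareStatement(
                "UPDATE CUSTOMER SET NAME = ? WHERE ID = ?");
            ps.setString(1, name);
            ps.setString(2, customerId);
            ps.executeUpdate();
            ps.close();
            con.close();
        } catch (Exception e) {
            throw new EJBException(e);
        }
    }
}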
Pros:
This approach not only makes the business object logic much simpler to write,
but also much easier to migrate to CMPs later, if the required container options
eventually become available. Following this approach means that the BMP
method implementations can be discarded and the entity EJBs can simply be
redeployed, without having to change either the business logic methods or the
client code.

Cons:
The downside is that the persistence logic can be relatively complicated to
implement efficiently. For example, in custom finders, you almost always need to
cache the results of the query so that the iterative calls to ejbLoad for each
instance merely retrieve the data from the cache. In short, it can be very difficult
to minimize the number of transactions and back-end accesses.
Alternatives
The alternatives have already been discussed in the previous section: mainly, directly
accessing the back end in a business logic access bean or session EJB.
As with CMP entity EJBs, it is almost always a better practice to use a session EJB of
some type as a wrapper, hiding the entity from the client. The advantage is that the
session EJB can coordinate the transaction across multiple EJBs.
Like the components themselves, the mechanisms vary by the tier upon which
the source component executes at runtime. We will likewise divide this section
up accordingly and have a subsection devoted to control flow mechanisms that
can be initiated from:
► Browser-based components, such as HTML
► Web application server-based components, such as servlets
We deliberately do not include the enterprise tier, not because there are no
mechanisms by which control flow is affected, but because they are pure Java
method calls.
We will discuss the control flow mechanisms for each of the above in turn.
HTTP GETs
An HTTP GET request can be effected in a number of ways:
► An HREF tag associated with text or an image
► Image maps, that allow specific areas of an image to target a given URL
when clicked
► JavaScript onclick=’location=<URL>’ associated with a visible and clickable
DOM object
► A FORM with ACTION=GET and a submit action invoked either through an
associated INPUT TYPE=SUBMIT button, or a JavaScript submit action
associated with a browser event
Pros:
Because there is no side effect involved, using HTTP GETs is the most efficient
way to transfer control from one state to the next, especially where the next state
is pure HTML that may be already cached by the browser.

Pages invoked with an HTTP GET can be easily bookmarked to return to the
same page with the same data where dynamic content is involved.

Cons:
When using HTTP GETs, the ability to transfer data to the target state is limited
to the URL query string (more on this in the next section), which has definite size
limitations (often dependent on the Web server handling the request). Also, the
location includes the data passed, which can be really distracting.
Alternatives
There is no good substitute for an HTTP GET to transfer control with no side effects,
because there is no need to involve an “intermediate” Web application component such
as a servlet or JSP. However, you should remember that updating most of the data flow
sources can be considered to be a side effect, which may be best handled by some
other HTTP request type (such as a POST).
Once the link is established, triggering the associated event (such as clicking the
link) will cause the POST request to be issued to the Web server. Usually, POST
requests must be handled by a Web application component, such as a servlet or
JSP.
Pros:
One advantage of an HTTP POST is that there are no absolute limits to the
amount of data that can be passed to the Web server as part of the request.
Also, the data passed does not appear on the location line of the browser.

Another advantage of an HTTP POST is that the browser will warn the user if the
request needs to be reinvoked (such as through a resize, back, forward or other
browser event that needs the page to be reloaded).

Cons:
However, some browsers display a rather ugly message if an HTTP POST
request needs to be reinvoked due to a browser event, telling the user to reload
the page.

Also, an update side effect is usually expensive, so HTTP POST requests should
be minimized by handling as many confirmations and validations as possible on
the client side.

Another disadvantage of a POST request is that it cannot be bookmarked
because the associated data is not available in the URL query string as
mentioned above (more on this in “Data flow sources” on page 45).
Alternatives
There is really no substitute for an HTTP POST to attempt a transition with an update
side effect. However, some transitions that may seem to have a side effect can actually
be handled with an HTTP GET.
For example, if a source page has a form to gather query parameters, it is possible to
use an HTTP GET to transfer control to the servlet associated with the next state, which
takes the parameters and reads the data to display. The reason that a GET is
reasonable is that the action is read only and the amount of query data is usually
relatively small.
We will briefly explore three mechanisms by which servlets can invoke other Web
application components:
► RequestDispatcher forward
► RequestDispatcher include
► HttpServletResponse sendRedirect
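Before looking at each mechanism in turn, the following sketch shows what the
three calls look like from inside a servlet; the JSP paths and the next-state URL
are hypothetical:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ControlFlowExamples extends HttpServlet {

    // 1. forward: the target JSP generates the entire response.
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.getRequestDispatcher("/AccountSummary.jsp").forward(request, response);
    }

    // 2. include: the servlet writes the enclosing tags and composes several fragments.
    protected void composePage(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        response.getWriter().println("<HTML><BODY>");
        request.getRequestDispatcher("/Header.jsp").include(request, response);
        request.getRequestDispatcher("/OrderDetail.jsp").include(request, response);
        response.getWriter().println("</BODY></HTML>");
    }

    // 3. sendRedirect: after an update side effect, send the browser to GET the next state.
    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // ... perform the update here ...
        response.sendRedirect("CustomerHome");
    }
}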
RequestDispatcher forward

Pros:
When the forward call is used, the target has complete freedom to generate the
response. For example, it can write headers, or forward or include to other Web
application components as it sees fit.

This freedom for the target makes programming the source component much
simpler: it does not need to generate any headers or set up prior to delegating to
the forwarded component.

Cons:
A source component that invokes a target cannot generate any response prior to
the forward call. Nor can it generate any response after the call returns. This
restriction means you cannot compose pages with forward.

A source component that was itself invoked by an include call (see
“RequestDispatcher include” on page 43) cannot use the forward call. This
restriction means a source component (one that will transfer control to another)
has to know how it is being used.

The target component must be a Web application component, requiring that
targets of forward calls must be converted to JSPs, even if they contain purely
static HTML.
Alternatives
The most viable alternative to forward is for a servlet to set up the headers and enclosing
HTML tags, then use the include mechanism (discussed next). This approach provides
the ability to compose the response from multiple JSP components with as few changes
as possible.
This alternative also simplifies the JSPs involved, because they do not need to generate
headers and enclosing HTML tags.
RequestDispatcher include
The include method on the RequestDispatcher neither opens nor closes the
response, nor does it write any headers, which means that multiple components
can be included in the context of a single request.
Pros:
One reason to consider this approach is that the included components are much
simpler to code, because they do not need to generate the <HTML>,
<HEADER>, and <BODY> tags. For JSPs, the calling servlet can handle the
code often required to prevent caching, simplifying them even further.

The included components can often be reused in multiple places. For example, if
we were not able to use framesets in our application due to restrictions on the
browser, we could convert an HTML output to a JSP and compose the pages in
the servlets.

The components can be included by a superclass HttpServlet to provide a
common look and feel across all states in the application.

In future versions of WebSphere, included components can be cached, making
it much more efficient to compose pages from multiple states. The ability to more
easily exploit this feature when it becomes available is another good reason to
consider including components.

Cons:
Included components cannot write to the header or close out the response.
Therefore, these actions must be done by the source component.

Included components cannot be static Web pages (or fragments), requiring that
they be converted to JSPs.
Alternatives
When pages need to be composed, there is no really good alternative to include except
to use framesets or named windows (see “Framesets and named windows” on
page 14).
HttpServletResponse sendRedirect
The sendRedirect method is implemented on the HttpServletResponse object
that is passed in on the service methods associated with an HttpServlet. It
generates a special response that is essentially code telling the browser that the
requested URL has temporarily moved to another location (the target URL). No
other response is generated by the source component.
The browser intercepts the response and invokes an HTTP GET request to the
URL returned as part of the response, causing a transition to the next state.
Alternatives
There are no good alternatives to using a sendRedirect after processing requests in
servlets that require update side effects.
Like the first two sections, this section is divided into subsections describing data
sources associated with each of the three tiers:
► Browser
► Web application server
► Enterprise servers
And as with control flow mechanisms, we show how the choice of data source
can have a huge impact on the overall performance and integrity of the
application.
The discussion in this section will address the details of these and other
trade-offs.
Neither the names nor values can have embedded spaces; instead spaces and
other special characters must be encoded.
The values can be retrieved through various methods associated with the
HttpServletRequest object, most notably getParameter, which returns the value
for a given name.
Another use for a URL query string is URL encoding of the session ID for
HttpSession on the Web server (see “HttpSession state” on page 50) instead of
cookies (discussed later in this section).
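For example (the servlet path and parameter name are hypothetical), a target servlet can read a key passed on the query string in its doGet, and any value placed on a generated HREF must first be encoded:

import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class AccountQueryServlet extends HttpServlet {
    // Invoked through a URL such as /webapp/AccountQueryServlet?customerId=101
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String customerId = request.getParameter("customerId");   // null if not supplied
        if (customerId == null) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "customerId is required");
            return;
        }
        // Special characters in generated links must be encoded
        String nextLink = "/webapp/AccountQueryServlet?customerId="
                          + URLEncoder.encode(customerId);
        // ... use customerId to drive the query and nextLink in the generated page ...
    }
}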
Pros:
The benefit of using the query string is that it is very simple to retrieve the associated data.

Cons:
Encoding the URL query string in sendRedirect calls and generated HREFs can be quite complicated and only a small amount of data can be passed.
The query string is visible on the location line, and can sometimes be very long and confusing. This visibility in the query string extends to hidden fields in forms (when METHOD=GET).
Alternatives
There is no good substitute for the URL query string to send a few small key values to
the target component. However, where the data is common across most states in the
application flow, it may be better to use cookies or HTTP sessions (both discussed later)
to make the data flow transparent to the programs.
Pros:
As with the URL query string, one benefit to using POST data is that it is easy to retrieve, either by name or iteratively.
However, unlike the URL query string, the main benefit to using POST data is that there is no absolute limit to the amount of data that can be sent.
Finally, the data passed does not clutter up the URL, so hidden fields remain hidden to the casual user, and the encoding of the data is transparent to the source component.
Alternatives
As with the URL query string, there is no good substitute for POST data for providing the
input parameters to actions with update side effects. However, where hidden fields are
used to provide common data across the entire browser session, it may be wise to
consider using cookies or HTTP sessions.
Cookies
Cookies are data maintained on the client by the browser on behalf of the server.
Cookies can be made to persist within or across browser sessions. Cookies are
passed to the Web server in the header of the request. Any updates are passed
back on the header in the response.
Within the servlet API, there are methods that allow you to get and set cookies.
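A minimal sketch of those calls (the cookie name is hypothetical):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class PreferenceServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Read the cookies sent by the browser (getCookies returns null if there are none)
        String language = "en";
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (int i = 0; i < cookies.length; i++) {
                if ("preferredLanguage".equals(cookies[i].getName())) {
                    language = cookies[i].getValue();
                }
            }
        }
        // Set (or refresh) a persistent cookie on the response header
        Cookie preference = new Cookie("preferredLanguage", language);
        preference.setMaxAge(60 * 60 * 24 * 30);   // keep for 30 days; a negative value lasts only for the browser session
        response.addCookie(preference);
    }
}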
Pros:
Cookies are automatically passed in the header, and thus do not require explicitly coding hidden fields or URL query strings in the HTML and JSPs. This feature of cookies makes the application much simpler to develop, test, and maintain.
The ability to maintain persistent cookies means that the client machines can be enlisted to help share the cost of running the application. In an e-business application with millions of users, not having to maintain often used preference data for each one can be a significant savings in both space needed to store it and time needed to retrieve it.

Cons:
Passing cookies back and forth can be relatively expensive. Further, the amount of data that can be maintained per server may be limited by the browser. The effect is that cookies should be used sparingly.
Another problem is that not all browsers or levels of browsers support cookies. Even if they are supported, users can turn cookies off as a security or privacy measure, which means that:
► Your Web application has to be coded for the case where cookies are not available, and use alternative techniques (discussed below),
► You must make an explicit decision to support only users with browsers having cookies enabled.
Also, other HTTP-based clients, such as applets, may have trouble dealing with cookies, restricting the servlets that they may invoke.
Alternatives
URL encoding techniques can be used to put the equivalent data in the URL query string
rather than relying on cookies.
All these sources share a characteristic not associated with the other ones: only
a Web application component (servlet or JSP) can store or retrieve data using
them.
We discuss the advantages and disadvantages of each in the context of the role
that source should play in the architecture. We also discuss any alternatives.
The HttpServletRequest interface has methods to set and get the attribute
values by name. You can also retrieve a list (Enumeration) of all the attribute
names currently maintained in the request.
A JSP can use the expression syntax or Java escape tags to get request
attributes using the servlet API, or it can use a bean tag scoped to the request
(the default) with introspection to automatically load attributes whose names
match the bean properties.
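For example, a controlling servlet can place a result bean into the request before forwarding to its JSP (the bean class and attribute name are hypothetical):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class AccountListServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Build the result bean (hypothetical class) and pass it as a request attribute
        itso.was4ad.view.AccountListBean accounts = new itso.was4ad.view.AccountListBean();
        request.setAttribute("accounts", accounts);
        getServletContext().getRequestDispatcher("/accountList.jsp")
                           .forward(request, response);
    }
}

The JSP can then pick the bean up with a useBean tag scoped to the request:

<jsp:useBean id="accounts" scope="request" class="itso.was4ad.view.AccountListBean" />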
Pros:
Of all the data sources, whether maintained by the browser, Web application server, or enterprise servers, HttpRequestAttributes are the second most efficient (behind passing the data directly in parameters of a method or in a shared variable).
Because its scope is limited to the request, there is no need to write logic to “clean up” the data.

Cons:
Setting too many objects into request attributes can cause problems with:
► The contract between the source and target component developers. For example, what do you name the attributes? What is their type?
► Performance, because each set is a Hashtable put and each get is a Hashtable lookup.
The HttpServletRequest object does not persist across calls, so it cannot be used to hold data between states in the application flow model. The net effect is that request attributes can be passed only to targets using forward and include. Request attributes cannot be passed to targets invoked through sendRedirect.
Alternatives
When using forward or include to dispatch to an associated JSP, a controlling servlet
can pass data through HttpSession and ServletContext. When invoking a JSP or servlet
through the sendRedirect, data can be passed using cookies or the URL query string.
HttpSession state (in this section simply session state) is maintained by the Web
application server for the lifetime of the session in what is basically a hash table
of hash tables, the “outer” one keyed by the session ID (the session hash table)
and the “inner” one keyed by the state variable name (the state hash table).
When the session is created, the ID is passed back and forth to the browser
through a cookie (the preferred approach) or URL encoding. Since servlet API
2.2, the scope for sessions is a Web application (not the whole server).
The outer session hash table can be lost if the Web application server goes down
(and the session is not backed up). The inner state hash table can be lost on a
timeout or through explicit application events (the remove method, for example).
A JSP can use the expression syntax or Java escape tags to get session state
using the servlet API, or it can use a useBean tag scoped to session with
introspection to automatically load states whose names match the bean
properties.
In this case it is customary to store some sort of “login” token into the session
state. The session state maintained could be as simple as a customer ID, or it
could be a complex object that includes additional data common to all the states
in the application flow, such as open order.
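A sketch of storing and testing such a login token through the Servlet 2.2 attribute methods (the attribute name is hypothetical):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LoginSupport {
    // Called after a successful login, for example from the login servlet's doPost
    public static void recordLogin(HttpServletRequest request, String customerId) {
        HttpSession session = request.getSession(true);    // create the session if necessary
        session.setAttribute("customerId", customerId);
    }

    // Called at the start of any later request in the Web application
    public static boolean isLoggedIn(HttpServletRequest request) {
        HttpSession session = request.getSession(false);   // do not create a new session
        return session != null && session.getAttribute("customerId") != null;
    }
}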
Pros:
Session state is rather easy to use in the program (especially if a data structure JavaBean is stored instead of individual values). The Web application server manages it at runtime based on configuration parameters, making it easy to tune non-functional characteristics such as failover and performance.
This ease of use makes it tempting to store some application flow data (the current open order for example) in the session state rather than in a database that has to be explicitly administered.
When the data is already being stored in the back end, and when accesses are expensive, the performance gains of using session state to cache the data can be significant.

Cons:
Session state suffers from the same problems that request attributes do if you store too many objects in them in the course of a single request: there is a name and type contract problem with the target component, and a performance penalty with every additional Hashtable put and lookup.
Session state has some additional disadvantages:
► Timeout. A session can time out when you least expect, making it risky to store significant application flow data. Usually you end up explicitly modeling and programming “save” and “load” type flows to make the problem less acute.
► Server failure. Even if you have an infinitely long timeout (and expect servlets to programmatically invalidate the session state), the server can fail, causing the data to be lost. Specifying that a session state be backed up in a database gets around this, and provides for failover.
► Cache consistency. When a session state is used to cache back-end data, how do you make sure the session state is in synch with the data stored in the back-end system? To provide for cache consistency means adding code to the doGet methods to check the key of the data in a session state with that in the request, and adding code to the doPost methods to remove the affected session states.
► Cluster consistency. It is likely that you will want to scale the Web site by adding a cluster of WebSphere application servers. Even if you add all of the extra logic to manage cache consistency from the previous item, you must either force client/server affinity (see “Stateful session EJBs” on page 32) and lose failover support, or back the session up in a shared database and impact performance.
Of course, the memory resources required for session state should be taken into consideration. Indiscriminate use of HttpSession can use up vast amounts of memory. For example, if there were 1000 active user sessions each needing to maintain a megabyte of data, your application would use up a gigabyte of memory for the session state alone.
Like request attributes and session state, the servlet context also maintains an
object that is the equivalent of a hash table, providing methods to get and set
attributes by name as well as list the names stored within.
Unlike request attributes, which are scoped to a request, and session state,
which is scoped to a session, servlet context is scoped by a Web application.
The current specification explicitly states that sharing of servlet context in a
cluster is unsupported. Note that session data is scoped by the Web application
as well since servlet API 2.2.
If used for either purpose, we would likely set attributes into the servlet context as
part of the init method, which would allow all servlets to use the data.
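For example (the attribute name and catalog class are hypothetical), a servlet can load stable data once in init and read it back in any servlet of the Web application:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class CatalogServlet extends HttpServlet {
    public void init() throws ServletException {
        // Load the stable, read-only data once per application server
        ProductCatalog catalog = ProductCatalog.load();   // hypothetical catalog class
        getServletContext().setAttribute("productCatalog", catalog);
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ProductCatalog catalog =
            (ProductCatalog) getServletContext().getAttribute("productCatalog");
        // ... use the cached catalog to build the response ...
    }
}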
Pros:
Proper use of servlet context can greatly reduce both the amount of session state data and the number of back-end accesses required to load it.
As with session state, servlet context is very easy to deal with, and can eliminate the need to explicitly model extra business objects.
Because servlet context attributes cannot be shared in a cluster, there is no requirement that data stored therein be serializable. This allows servlet context to be used to store very complex objects, such as access beans (preferred) or EJB references.
Also, storing singleton references in a servlet context can prevent them from being garbage collected, because the reference is maintained for the life of the Web application server.

Cons:
As with session state (and request attributes), you should minimize the number of attributes stored, and make sure that there is a systematic name and type convention in place.
Unlike HttpSession, the specification prohibits sharing of servlet context in a cluster, primarily to force its use as a true cache. This limitation is not really a disadvantage when servlet context is used as a cache for stable read-only data, because each application server will perform better having its own copy of the data in memory.
If for some reason there is a requirement to store common data, yet allow updates to it, then client/server affinity must be used to prevent cluster consistency issues. Of course, this means that the updates have to be associated with a specific user. Also, because the servlet context is shared by the entire Web application, you have to manage the code carefully, since multiple servlet threads could be accessing the same attributes simultaneously.
Alternatives
Where servlet context is being used to store data from the back end to avoid extraneous
accesses (a caching pattern), an alternative is to delegate caching the data to the
business logic access bean.
Where the default servlet context is accessed (the parameterless version of the API),
then a viable alternative is to use the singleton pattern.
These alternatives do not supersede the advantages of storing business logic access
beans or connection objects in a servlet context to hold a reference and prevent garbage
collection.
What separates these data sources from the others is that they can be used
outside the context of a Web application server.
The primary difference is that the JNDI name context is managed by a distributed
name server, which allows the names and values to be shared across requests,
sessions, application servers, and a cluster.
In a Web application, JNDI would be used by business logic and business object
access beans to get access to the home of EJBs.
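A typical lookup, sketched below with a hypothetical JNDI name and home interface, caches the home so that the relatively expensive JNDI access happens only once:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.rmi.PortableRemoteObject;

public class CustomerAccessBean {
    private CustomerHome customerHome;   // hypothetical EJB home interface

    private CustomerHome getCustomerHome() throws NamingException {
        if (customerHome == null) {       // look up once, then reuse the cached home
            InitialContext ctx = new InitialContext();
            Object ref = ctx.lookup("java:comp/env/ejb/CustomerHome");
            customerHome = (CustomerHome)
                PortableRemoteObject.narrow(ref, CustomerHome.class);
        }
        return customerHome;
    }
}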
Pros:
The benefit to using JNDI is that it is designed for storing small to medium amounts of relatively stable data per name, without requiring the involvement of a database administrator to create and maintain a new table.
The fact that JNDI is distributable, sharable, and persistable makes it applicable in Web application scenarios where the other data flow sources cannot be used.

Cons:
JNDI accesses are relatively expensive even with the automated caching support provided by WebSphere Application Server. Therefore, calls to them should be limited using the techniques discussed in “Stateless session EJBs” on page 29. This approach will make it easier to port to competitive products without having to worry about their implementation.
Updates are even more expensive, so only relatively stable data should be stored in JNDI name contexts. The pattern is write once, read many. For example, user preference data fits into this category, but customer data, with its reference to the currently open order, does not.
Alternatives
You can always explicitly model the data stored in JNDI as a business object and use
either JDBC or EJBs (preferably behind an access bean).
JDBC
JDBC provides a Java interface to relational databases, allowing dynamic SQL
statements to be created, prepared, and executed against pooled database
connections.
Any database that supports relational semantics can be wrapped with the JDBC
interfaces and provide a “driver” for use in the client application or creating a data
source.
An example of when we might use JDBC is in loading a product catalog into the
cache (distributed object overhead may be considered to be excessive for the
benefits achieved).
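A sketch of such a catalog load using a pooled connection from a WebSphere data source (the data source name, table, and columns are hypothetical):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CatalogLoader {
    public void loadCatalog() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/PiggyBankDS");   // hypothetical name
        Connection con = ds.getConnection();                           // taken from the pool
        try {
            PreparedStatement stmt =
                con.prepareStatement("SELECT ID, NAME, PRICE FROM PRODUCT");
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                // copy rs.getInt(1), rs.getString(2), rs.getBigDecimal(3) into the cache
            }
            rs.close();
            stmt.close();
        } finally {
            con.close();   // returns the connection to the pool
        }
    }
}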
Pros:
JDBC provides all the benefits of relational databases to Java applications in an implementation-independent manner.
Directly using JDBC in a client application will likely provide the most efficient implementation of the application, especially if connection pooling of data sources is used.

Cons:
JDBC client code can be rather complicated to develop properly. Minimizing the number of statements executed in the course of a unit of work is key.
Also, explicitly managing the transaction context can be complicated. If auto commit is turned off, care must be taken in the program code to commit or roll back the transaction as appropriate. If auto commit is left on, care must be taken when there are multiple statements in a single unit of work: each statement is a separate transaction, which can cause significant extra overhead and complicate error handling logic.
Directly using JDBC locks your application into relational technology, although wrapping it within a business object access bean can help insulate the client application code, and make it easier to migrate later.
Even if wrappers are used, JDBC requires that a JDBC driver be installed on the application server, potentially making it a “thicker” client than it would be if EJBs were used.
Alternatives
The best standards-based alternative to JDBC is to use EJBs, which makes persistency
transparent to the business object programming model, and allows the client to be
“thinner”.
Of course, you can use non-standard connector-based technology such as CICS, MQ,
and IMS. But whether behind wrappers or not, these connectors make the client even
thicker by requiring additional software to be installed.
Table 1-5, Table 1-6, and Table 1-7 summarize the details of the components,
control flow mechanisms and data flow sources.
Control flow mechanisms by tier: HTTP (GET & POST) from the browser, Java (forward, include, sendRedirect) within the Web application server, and Java (RMI/IIOP) to the enterprise servers.

Components:
► Framesets and named windows (Browser): group related states on a single page to allow for smaller, more parallel requests and to minimize the need for explicit navigations.
► XML, DTD, and XSL (Browser): allow request results to consist of data only and provide client control of the display format.
► BMP entity EJB (Enterprise Java server): methods added at deployment time allow entity EJBs to control the quality of the persistence service.

Control flow mechanisms:
► HTTP GET: invoked from HTML or DHTML; the target is any URL associated with the next state, invoking a servlet or JSP for dynamic content; used to directly invoke the target URL.
► HTTP POST: invoked from an HTML FORM; the target is a servlet; invokes the target servlet indicated in the ACTION to handle update side effects.

Data flow sources:
► URL query string: maintained by the browser; passed on HTTP GET and sendRedirect; used to pass small amounts of “key” data that drive queries in the doGet of the servlet associated with the target state.
► POST data: maintained by the browser; passed on HTTP POST; used to pass input data that drives updates in the doPost of the servlet associated with the current state.
If you develop your applications according to these principles, you will have an
application that is not only functional, efficient, maintainable and portable, but
also is able to exploit the deployment options best suited to your operational
environment. Many of these options are discussed in more detail in the
remaining chapters of this book.
For each tool we include a brief description, links to further information, and
references to the chapters where we use and describe the tool.
The key new features in Version 4.0 of WAS that are of particular interest to
developers include:
► Support for all Java 2 Enterprise Edition (J2EE) 1.2 APIs, including:
– Java Development Kit (JDK) 1.3
– Java Servlet 2.2
– Java Server Pages (JSP) 1.1
– Enterprise Java Beans (EJB) 1.1
– Java Message Service (JMS) 1.0.2
– Java Database Connectivity (JDBC) 2.1
► Incorporates support for Web services:
– Simple Object Access Protocol (SOAP)
– Universal Description, Discovery and Integration (UDDI)
– Extensible Markup Language (XML)
– Web Services Definition Language (WSDL)
► New tools to support J2EE application development
► New lightweight single server and developer editions
► Improved performance, including:
– Improved scalability on SMP machines
– Configurable caching of dynamic web content
– Improved plug-in performance using built-in web server
– Improved HTTP session clustering
New tools
WAS provides a number of new tools to assist in the development and
management of applications.
The AEs version of WAS is also available with a restricted license that limits its
use to development environments only. This version is included with a number of
IBM tools and is also available for free download from the IBM WebSphere Web
site.
Improved performance
Version 4 of WAS introduces a number of enhancements in the area of
performance. Of particular relevance to developers are the improved scalability
on multi-processor (SMP) machines, and the ability to cache dynamic Web
content.
The ability to cache dynamic content produced by JSPs and servlets may also
influence your application implementation. For each JSP and servlet in your
application WebSphere now offers the ability to specify whether the output from
the component is to be cached, and if so, how long it may be cached.
The new version of VisualAge for Java includes support to assist developers in
writing code for and deploying code to WebSphere Application Server Version
4.0.
The WebSphere Test Environment (WTE) included in the new version uses
Version 3.5.3 of the WebSphere runtime, which includes support for Version 2.2
of the Servlet API and Version 1.1 of the JSP specification—these are the
versions supported by Version 4.0 of WAS.
Version 4.0 of VisualAge for Java also includes a new menu option that enables
EJBs developed using the built-in EJB development environment to be exported
as EJB 1.1 JAR archives. These archives include the XML deployment descriptor
and database schema and mapping information.
This is a new command-line tool that replaces the EJBDeploy tool shipped with
WAS AEs. The new tool will be shipped with WAS AE.
Rational Rose
Rose is a model-driven development tool from Rational that allows developers to
model their applications using the Unified Modeling Language (UML).
Information about Rose can be obtained from the Rational Web site at:
http://www.rational.com/products/rose/
In developing this book we used the version of Rose included with the Rational
Suite Enterprise Version 2001A.04.00.
Rational ClearCase
ClearCase is a software configuration management (SCM) tool developed by
Rational. Information about ClearCase can be obtained from the Rational Web
site at:
http://www.rational.com/products/clearcase/
Jakarta Log4J
Log4J is an open source Java logging framework developed as part of the
Apache Jakarta project. Information about Log4J can be obtained from the
Log4J Web site at:
http://jakarta.apache.org/log4j/
Jakarta Struts
Struts is an open source framework for developing Web applications using the
Java Servlet API. It is a subproject of the Apache Jakarta project. Information
about Struts can be obtained from the Struts Web site at:
http://jakarta.apache.org/struts/
JUnit
JUnit is an open source framework that can be used to develop and execute
automated unit tests against Java code. Information about JUnit can be obtained
from the JUnit Web site at:
http://www.junit.org/
JUnit is discussed in more detail in Chapter 18, “Automating unit testing using
JUnit” on page 517.
The source code for the multiple versions of the application and other supporting
files such as Ant build scripts and Rose models are included in the Web material
that supports this book and is described in Appendix A, “Additional material” on
page 557.
Our application has much in common with a piggy bank—it is little more than a
toy, although we hope that you can learn from it.
Functional overview
The PiggyBank application manages accounts and customer records for our
fictitious bank. The application has two separate user interfaces:
► A Swing-based GUI that runs in a standalone Java application
► An HTML-based Web interface that runs in a client browser
Both interfaces access the same back-end system, and operate upon the same
data. We often refer to these interfaces as channels because they implement just
two of potentially many different channels of communication with our application.
Web client
The PiggyBank Web client is intended to be rolled out to PiggyBank customers. It
offers a much reduced set of functionality:
► Log on
► Display account details
► Transfer money
► Log out
Security functionality
Typically security is a primary consideration in a banking application. The
PiggyBank application is unusual in this respect—there is no security
functionality implemented at all. All users of the standalone client have full
access to all customer and account information. Web clients need only enter a
valid customer ID to log in—there is a password field on the log on form but it is
ignored.
We have omitted security from the application in order to allow us to focus on the
real message of this book—how to design, build and deliver well-structured and
serviceable J2EE applications for WebSphere Application Server.
Both client channels communicate with the EJBs using RMI over IIOP—the
standalone client communicates with the EJBs directly, whereas the Web client
uses HTTP to connect to servlets that make RMI calls on behalf of the client, and
display the results using JavaServer Pages (JSPs).
The only logic implemented locally in the clients is basic validation and
conversation management specific to the channel. The application implements
the model-view-controller (MVC) architecture as described in
“Model-view-controller pattern” on page 87—each client channel implements its
own view and controller, but shares the same model.
Figure 3-4 PiggyBank application modules
One fundamental characteristic that applies to all of our common code is that it
has no dependency on any other application code—that is to say, it can be
compiled in isolation without reference to any other module. Dependencies on
external modules, such as a third party logging library, are allowed.
EJBs
The EJB code implements all of the PiggyBank persistence and business logic.
The EJB module provides services to clients via a session bean façade—the
internal implementation of the business logic hidden behind the session bean
interface is not intended for use by clients of the module (although this policy is
not enforced). This arrangement is illustrated in Figure 3-5.
Use cases
The PiggyBank application use case classes fulfil the role of business logic
access beans as described in “Business logic access beans” on page 26. Each
use case class supports a single use case defined during the requirements
analysis described in “Use case analysis” on page 90.
Internally the use case classes are EJB clients—they delegate the processing of
business logic to the EJB layer. In the PiggyBank application they are the only
EJB clients (apart from EJBs such as the session EJBs, of course). This extra
layer gives us a number of advantages:
► Flexibility to modify the business logic implementation without affecting our
client channels
► Removes the need for client channel programmers to understand how to
access the EJBs, or indeed anything about the EJB implementation, including
the fact that there are EJBs involved at all
► Encapsulates the EJB access code into a single place, where it can be
managed and maintained more easily
The use case code depends upon the common code and the EJB code to
compile.
Standalone client
The standalone client application is a simple Swing GUI application. It uses the
use case classes to access the business logic. Results from the use cases are
returned in the form of data only beans—the GUI code extracts information
directly from the data only beans and presents it to the user.
Because the client code is—via the use case code—an EJB client, it must be
packaged as a J2EE application client and execute inside the WebSphere client
container.
Despite this, there are no code dependencies on the EJB layer—the standalone
client requires only the common and use case code to compile.
The Web application commands access the business logic using the use case
layer. The data only beans they receive in reply are packaged in channel-specific
view beans, which are then placed into the HTTP request and passed to a JSP
for display. The view beans are wrappers around the data beans, providing
JSP-friendly services such as iteration and text formatting.
As with the Swing client, the code that implements the Web channel requires
only the common and use case code at compilation time—there are no
references to classes in other layers.
Application implementation
The PiggyBank application is implemented in a number of Java packages.
Table 3-1 lists the packages included in each module, and describes the contents
of each package.
Table 3-1 PiggyBank modules and packages
Module Package Description
The final chapter in this part of the book discusses how we can use frameworks
in our application design. It introduces two frameworks—Jakarta Struts and
WebSphere Business Components Composer—that we use in later chapters to
implement elements of our example application.
Figure 4-1 shows the trail that is followed in Part 2, “Analysis and design”.
Figure 4-2 Modeling parts
The distinction between analysis and design must be permanently kept in mind:
► Analysis describes what the system does.
► Design describes how the working system will actually perform its task.
This part of the book assumes that the project is already covered by a more
complete OO methodology.
Whether there is only one person or one hundred, the concept of the separation
of roles and responsibilities is key to the successful creation and maintenance of
the e-business application.
The command pattern involves separating the tasks by each role, such as:
► The HTML developer uses a tool like WebSphere Page Designer to generate
HTML pages and JSP templates.
► The Script developer uses a Java programming environment like VisualAge
for Java to edit, test and debug servlets and JSPs.
► The Java business logic developer uses a Java programming environment,
like VisualAge for Java, and builders, like the integrated EJB builder, to
specify the details of the business logic, to access legacy applications and
data, and to build the commands.
Further reading
A good reference for assembling a development team is the book The Rational
Unified Process, An Introduction. Philippe Kruchten.
Development roles
In the book, for the sake of clarity, when we refer to, for instance, the senior
business analyst, we mean “the person who plays the role of senior business
analyst”. It might be the same person who plays the role of, say, the junior Java
developer.
Web developer: Web developers are skilled in the use of HTML and the tools
used to develop and maintain Web content. They are responsible for delivering
HTML and JSP pages and the various interface elements such as images
contained within them.
Patterns
Most application designs follow certain documented patterns that have been
proven in many successful installations. The original book describing patterns is
Design Patterns: Elements of Reusable Object-Oriented Software. Erich
Gamma, et al.
The key here is that the model is kept separate from the details of how the
application is structured (the controller) and how the information is presented to
the user (the view).
The question that may be arising in some readers' minds is "Isn't J2EE already a
framework to solve these problems?". Well, in a way it is. Many development
organizations have successfully applied the following mapping of J2EE APIs to
the three roles in the MVC pattern:
► Model: JavaBeans and Enterprise JavaBeans
► View: JavaServer Pages
► Controller: Servlets
Here servlets act as controllers and are the recipients of HTTP POST requests,
and are responsible for passing POSTed data to the model and selecting which
JSP page will be invoked to display results. This is often called the "Model II" JSP
architecture.
We will come back to the MVC pattern in “MVC pattern” on page 100.
We will come back to the command pattern in “Command pattern” on page 105.
Next we discuss several design techniques we can use to realize the use cases
and discuss how to interact with external systems.
The use case analysis basically includes the following elements (Figure 5-2):
► Actors
► Use cases
► Communication associations between actors and use cases
► Relationships between use cases (also known as use case associations)
► Termination outcomes
► Conditions affecting termination outcomes
► Termination outcomes decision table
► Problem domain concept definitions in a glossary (also known as data
dictionary, but, be careful, this term is often debased)
► Flows of events and system sequence diagrams
Actor names, actor descriptions, use case numbers, use case names, use case
business events, and use case overviews, as well as communication
associations between the actors and the use cases, provide an overview of the
functional requirements. The other constructs of the model document the
expected usage, user interactions, and behaviors of the system in different styles
and depth.
The main purpose of use case analysis is to establish the boundary of the
proposed software system and fully state its functional capabilities to be
delivered to the users. Other purposes of use case analysis are:
► Provides a basis of communication between end users and system
developers.
► Is the primary driver for estimating the application development effort.
► Provides a basis for planning the development of the releases.
► Allows scheduling of common functionality early in development.
► Allows development of smaller increments while maintaining broad coverage.
There are many reasons why actors are used with use cases:
► Defining actor types allows us to define use cases in terms of specific
expectations (uses) of the system. In other words, it allows us to narrow the
expectations to specific roles in which a human user would be using the
system.
► Defining actors helps to identify the system border; what is inside the system
and what is outside the system.
► Defining actor types helps us show user training needs for particular aspects
of the system.
For additional information, the reader can consult the UML User Guide, Rational
Unified Process.
Transfer money
In this use case we transfer funds from a PiggyBank account to another
PiggyBank account.
► Input:
– Customer ID
– Credit account number
– Debit account number
– Amount to be transferred
► Basic path:
1. The customer enters the required input information and submits the
request.
2. The system checks that both accounts exist, that the customer is the owner of
the debit account, and that the amount to be transferred is lower than the
debit account's current balance.
3. The system debits the customer account and credits the other account by
the specified amount.
4. The system displays to the customer a summary of the transaction.
► Alternative path:
2b. One of the checks fails.
3b. The system displays a message explaining why the transaction cannot be
completed.
Use case analysis typically includes additional use cases for main business
objects life-cycle management, also known as CRUD use cases, where CRUD
stands for create-read-update-delete.
Therefore, with regard to the PiggyBank class diagram, the PiggyBank use
case analysis should also include a display balance use case.
The actors of CRUD use cases can be different depending on the instruction. For
instance, delete use cases are usually reserved for privileged users.
This ends the use case description at analysis level. Additional steps and use
cases can be added at design time:
► Login and logout use cases
► Every use case can be started by checking whether the user is logged in and
has enough privilege to perform the action
► Physical communication with the city bank might fail and lead to an additional
alternative path
► Database transactions, including commits and rollbacks, could be specified in
more detail
Figure 5-4 shows a use case description edition in VisualAge for Java.
The PiggyBank display balance use case would be commented like this:
/**
<PRE>
Display balance : displays the balance of a PiggyBank account.
Input :
- account number
Basic path :
1. The customer enters the required input information and submits the
request.
2. The system checks that the account exists and that the customer is its
owner.
3. The system records the transaction. (?)
4. The system displays the customer the account balance.
Alternative path :
2b. One of the checks fails.
3b. The system displays a message explaining why the transaction cannot be
completed.
</PRE>
*/
public class DisplayBalance extends UseCase
{
.........
}
There is no ideal way—there are several neat techniques that do things right.
Each of the techniques brings different enhancements that can be combined. We
start from a very basic approach, then we refine the concepts to keep it simple,
while leaving no major drawback. Finally, we present the latest and finest
improvements that help achieve a modular, versatile design.
Servlet mapping
This method maps each use case to a servlet class.
Tip: The servlet logic is coded in a doGet or doPost method, dependent on the
HTML code that invokes the servlet and specifies a GET or POST method. As
a good practice you can always code both doGet and doPost methods and call
a performTask method that contains the logic.
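A sketch of that practice:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class CashChequeServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        performTask(request, response);
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        performTask(request, response);
    }

    protected void performTask(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // the servlet logic goes here, independent of GET or POST
    }
}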
Remember the servlet model implies there is usually one class instance in the
JVM memory, which is multithreaded, unless the servlet implements the
javax.servlet.SingleThreadModel interface.
It must also be observed that every servlet alias has to be declared in the servlet
engine configuration (see Chapter 15, “Assembling the application” on page 389
and Chapter 16, “Deploying to the test environment” on page 431). If the
application has many use cases, its maintenance can become tedious unless the
servlet name is kept to the fully qualified class name, in which case the servlet is
accessed through a URL such as:
http://hostname/webapp/itso.was4ad.servlet.ServletName
The section “Servlet multiplexing” on page 104 explains how to get rid of this
limitation, multiplexing entry points into one servlet.
Figure 5-6 shows the PiggyBank servlet use case realization diagram.
MVC pattern
As a good structure for e-business applications, we suggest isolating the
business logic of an interaction from the work flow and the view by using the
model-view-controller paradigm. This leads us to the three components of
program logic as shown in Figure 5-7:
► The user interface logic is the view and contains the logic which is necessary
to construct the presentation.
► The servlet acts as the controller and contains the logic which is necessary
to process user events and to select an appropriate response.
► The business logic is the model and accomplishes the goal of the interaction.
This may be a query or an update to a database.
Figure 5-7 Web application model with MVC pattern
Facade pattern
Figure 5-8 shows a very simple, fast and convenient way of designing the use
cases into the system with regards of the MVC pattern by using the facade
pattern (Patterns In Java, Volume 1. Mark Grand; and Design Patterns:
Elements of Reusable Object-Oriented Software . Erich Gamma, et al.).
Figure 5-8 Use case facade pattern
Figure 5-9 Application entry from the Web and from a standalone client
Figure 5-10 PiggyBank facade use case realization diagram
Stand-alone clients might also want to easily start the use cases in new threads.
This can be done by having the use case classes implement the
java.lang.Runnable interface, as shown in Figure 5-11.
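A minimal sketch of that idea, assuming the UseCase superclass shown earlier (the body of run is omitted):

public class Transfer extends UseCase implements Runnable {
    public void run() {
        // delegate the transfer business logic to the EJB layer
    }
}

// A standalone client can then start the use case in its own thread:
// new Thread(new Transfer()).start();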
Servlet multiplexing
Up to now, we have been designing entry points from the Web into the
application using servlets. The section “Servlet mapping” on page 99 outlines the
drawback of having multiple servlets, typically one per use case. This can easily
be improved by using a multiplexing mechanism based upon a URL parameter
called action or id, operation—you name it. In this case, there is only one servlet
that controls all the application entry points: the controller servlet. The incoming
requests therefore contain the servlet name or its alias and an ID as a URL
parameter to specify the action to perform in the system:
http://hostname/webapp/ControllerServlet?action=transfer
At first sight this looks like a bottleneck. Actually, given the servlet thread and
instantiation model, this reduces code overhead while keeping full thread flow
through the servlet code.
The drawback is that the controller servlet can be very big, forking the execution
flow with a long if-then-else-if list, such as:
String action = request.getParameter("action");
if (action == null) {
// return error message
} else if (action.equals("transfer")) {
// call transfer use case
} else if (action.equals("cashCheque")) {
// cash cheque use case
} else if ...
Commands are used as shown in Figure 5-12, where the servlet instantiates a
command object. Then, the servlet sets the input parameter of the command and
executes it. When the command has finished performing the business logic, the
result, if any, is stored in the command, so that the servlet or the view can get the
result values by interrogating the command object.
We recommend that you implement the command as a JavaBean, that is, a Java
class with naming restrictions:
► There must be a method for each input property: void setXxxx(Xxxx val);
► There must be a method for each output property: Xxxx getXxxx();
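A sketch of a transfer command that follows these conventions (the class and property names are hypothetical):

public class TransferCommand implements java.io.Serializable {
    // input properties, set by the servlet
    private String debitAccount;
    private String creditAccount;
    private java.math.BigDecimal amount;
    // output property, read by the servlet or the view
    private java.math.BigDecimal newBalance;

    public void setDebitAccount(String val) { debitAccount = val; }
    public void setCreditAccount(String val) { creditAccount = val; }
    public void setAmount(java.math.BigDecimal val) { amount = val; }
    public java.math.BigDecimal getNewBalance() { return newBalance; }

    // performs the business logic and stores the result in the command
    public void execute() {
        // delegate to the Transfer use case (EJB layer) and set newBalance
    }
}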
For more information and advanced command pattern design, see the redbook
Design and Implement Servlets, JSPs, and EJBs for IBM WebSphere Application
Server, SG24-5754.
Command granularity
The command pattern can be used to reduce the overhead of cross-tier
communication.
There is no perfect answer to the trade-off between good interface design and a
reduced communication overhead. Our proposal is that the requesting tier (the
servlet) of a communication has to design the commands exactly to support its
tasks. The idea is that a servlet should only execute one command per
invocation, which encapsulates all of the controller function except for the HTTP
request parsing. That means when implementing an e-business application, the
servlet only interprets the HTTP request and executes commands. As a
consequence of this approach, we will get as many commands as we have
server interactions for a given use case.
A way to structure things is to group them into object categories. If we look into
the object modeling chapter, we can see the PiggyBank business objects are:
► Customer
► Account
And if we look at the complete use case diagram, including the CRUD use cases,
we can see that they fit into two categories:
► Customer-related use cases
► Account-related use cases
Steps 1 and 2 can often be cached, but there are still three round-trip messages
required per instance.
Caching
Using commands, the cross-tier communication is reduced to one round-trip per
task. Caching is a technology which can be used to reduce this to even less than
one.
The principle of caching is simple: Do not ask a question twice if you can do it
once and save the result to use the second time. This principle can be difficult to
apply in practice, however.
In e-business applications, there are two types of information that can be cached:
► Formatted information such as whole or partial HTML pages can be cached.
This works well when many people need to view the same material presented
in the same way, such as on a sports or news site. Caching partial pages
adds the flexibility to customize pages for users while still retaining many of
the benefits of caching. Because view commands represent partial HTML
pages it makes sense to cache those commands.
► Data can be cached. This works well when the same data has to be viewed in
different ways. This means that commands (which are executed by the view
commands) can be cached.
The two types can be used together. For example, a commerce site may cache
product descriptions in a formatted form, while caching customer-profile
information as data.
Another common practice is to cache data in the user HTTP session. This can be
done after the login, when the session is established. It is very useful to perform
such an initial load when the presentation logic plans to use a common piece of
information many times. In the PiggyBank application, we often let the
customer select an account from the account list. It is therefore interesting to
cache this information in the user session, and retrieve it from any JSP that
needs it. That makes the input screen more convenient without having to call the
business logic layer.
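A sketch of such an initial load and its retrieval in a JSP (the attribute name and view bean class are hypothetical):

// Inside the controlling servlet, after the login use case succeeds
HttpSession session = request.getSession(true);
session.setAttribute("accountList", accountListBean);   // view bean returned by a use case

<%-- Any JSP can then retrieve the cached list --%>
<jsp:useBean id="accountList" scope="session" class="itso.was4ad.view.AccountListBean" />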
If an actor initiates the interaction with the system, it is an initiating actor of the
system. If the system initiates the interaction with the actor, this is a supporting
actor. It must be noted that an actor can be both an initiating and a supporting
actor of the system, playing different roles.
From the point of view of the A system, our system is considered an initiating
actor, as shown in Figure 5-16.
There is a tight relationship between our system use cases and the supporting
actors' relationships. Most methodologies recommend representing external
use case relationships using internal relationships with proxy use cases
(Figure 5-17).
Now external use cases have their equivalents inside our system.
For the PiggyBank, the only supporting actor is CityBank. This provides one
supporting use case to the system: validate a cheque. This can be modeled as a
proxy use case included in the Cash a cheque use case (Figure 5-18).
Figure 5-18 PiggyBank proxy use case
This class will be responsible for all the communication aspects regarding the
corresponding supporting actor. It can use helper classes, connection pooling,
and other advanced objects to perform its task. These all form a boundary layer
against the supporting actors.
Figure 5-19 represents the proxy use case realization for PiggyBank. We can see
the realization class responsible for the communication with the CityBank. Its
name concatenates the supporting actor name with the Agent postfix. It has one
method to validate a cheque against the CityBank. All communication and
business considerations are hidden from the rest of the PiggyBank system.
The business analyst, who has identified a supporting actor and its proxy use
cases, creates an agent class:
► Right-click on itso.was4ad.agent -> Add -> Class and name the new class
CityBankAgent (Figure 5-20).
► Right-click on CityBankAgent -> Add -> Method. Click Next. Name the method
validateCheque (Figure 5-21). Make it return a boolean and click Add to add
one parameter.
► In the Parameters window (Figure 5-22), name the parameter cheque and set
the type radio button to Reference Types and enter Cheque in the field below.
Click Add once and Close then Finish.
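The resulting skeleton looks roughly like this (the Cheque class is assumed to exist elsewhere in the model, and the method body is still to be written):

package itso.was4ad.agent;

public class CityBankAgent {
    public CityBankAgent() {
    }

    /**
     * Validates a cheque against the CityBank system.
     */
    public boolean validateCheque(Cheque cheque) {
        // communication with the CityBank system goes here
        return false;
    }
}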
Like use case modeling where the actors are human users of the system, user
interface analysis is a means to understand the interaction between users and
their environment. It is an end to end process description that details the steps
for a user-system interaction. User interface analysis method focuses heavily on
the process and techniques for collecting data from users. By integrating task
Screen composition
Both activities are tightly tied together and should be performed in a parallel
fashion by business analysts. Therefore, the user interface technical aspects
should be kept as simple as possible in a first step. That is, we do not deal with
the complicated aspects of designing an HTML page. This is a job for the Web
developer. For the requirements modeling phase of the project, the latter role is
on standby, waiting for the business analyst's UI output before starting its more
technical and creative activity. The business analyst can focus on the business and functional
aspects of the user interface, and the main concerns are:
► What information and input elements are shown on the screens
► The navigation between the screens
► Informal description of the information transfers between the screens
This activity should produce very simple screens (Figure 5-23).
Chapter 10, “Development using WebSphere Studio” on page 237 shows how to
perform screen composition with WebSphere Studio.
Navigation
Typically, the application offers a main menu presenting the list of the use cases
the user can initiate, possibly in a hierarchical manner depending on the size of
the application. Figure 5-25 shows the PiggyBank application main menu.
After the user selects a use case from the main menu, zero, one, or several input
screens can be presented to the user, who can then initiate the use case by
submitting the complete request. That calls the corresponding use case, which
usually performs OK and returns the result, typically a page with a summary of
the transaction to the customer and a link to the main menu (Figure 5-26).
To make the navigation easier, the main menu can be permanently presented to
the user, as an HTML frame or a component included by every page. JSP
specifications provides two types of include mechanisms to make that possible:
► <jsp:include page="relativeURL" flush="true">
► <%@ include file="relativeURL" %>
Note: any command starts with logon check and redirects to login page if needed
We begin with the case where the input screen submission leads to a use case.
We call such a command a use case command. Figure 5-28 represents the
navigation state diagram adaptation for a single input screen.
Intermediate commands
Now we consider the case where the input process leading to a use case is
performed through several input screens, and an input screen submission leads
to the next input screen.
The navigation state diagram is therefore a little bit different (Figure 5-29).
We also introduce the J2EE support included with Rose, and describe how we
can use it to model and then generate code for one of the EJBs from our
PiggyBank application.
Fortunately Rose allows us to avoid this drudgery by taking our model and
automatically generating Java classes from it. The generated code directly
reflects the model, including the associations between classes and the attributes
and operations contained within them.
Round tripping
The next step is to take the code generated from our initial model and develop it
into our application. During the course of development the classes will be
modified substantially—often adding new methods and fields or modifying those
that exist already. If the Rose model is to continue to be of use to us as a source
of information about the application design it is essential that changes in the
code are reflected in the model. To do this by hand requires discipline on the part
of the developer—even if the developer does remember to update the model the
interruption to his train of thought caused by starting up Rose and updating the
model can adversely affect productivity.
Fortunately there is a solution to this problem in the form of round tripping. This
process combines the code generation features of Rose with its reverse-
engineering capabilities—the ability to create a model by examining Java code.
In a round tripping scenario code is generated from a model and modified by a
developer. The developer’s changes are then reverse-engineered, updating the
original model in Rose. The term round tripping describes the round trip from
model to code, and back to the model again.
In the rest of this chapter we discuss how we can use Rose to generate and
reverse-engineer our application code. We also take a look at how to integrate
Rose with VisualAge for Java, and how we can use the J2EE capabilities of Rose
to model and generate code and deployment descriptors for EJBs.
We change the default language using the options dialog, which we display by
making the menu choice Tools -> Options. Once in the dialog, we select the
Notation tab, which is illustrated in Figure 6-1. Change the default language to
Java and click OK to save the changes.
We generate Java code from Rose by selecting the classes for which we want to
generate code, and J2EE / Java -> Generate Code from the context menu. In this
case we select all four classes in the diagram. Because this is the first time we
have generated code for this component, the dialog shown in Figure 6-3
appears. This dialog allows us to specify the destination for the generated code.
The destination must be an entry in the class path defined in the Rose project
specification dialog—if the desired path is not included already, we can add it by
clicking Edit.
Tip: This assignment is remembered for the entire Rose project. If you want to
generate code for each module into separate directories, as described in
Chapter 9, “Development using the Java 2 Software Development Kit” on
page 183, you may find it easier to create a separate project in Rose for each
module instead of having the entire model in one project.
package itso.was4ad.data;
import java.io.Serializable;
/**
* Data object representing a customer
*/
public class CustomerData extends DataBean implements java.io.Serializable {
private int id;
private String name;
/**
* @roseuid 3B5DB6270126
*/
public CustomerData() {
}
/**
* Access method for the id property.
* @return the current value of the id property
*/
public int getId() {
return id;
}
/**
* Access method for the name property.
* @return the current value of the name property
*/
public String getName() {
return name;
}
}
Note: The getter methods in the source file were generated because we
defined the attributes in the Rose model as simple, read-only bean properties.
We browse through the project class path in the reverse engineer dialog
(Figure 6-5) to select the source files we want to reverse engineer. We click Add
to add files in the top right panel to the list at the bottom, then Select All and
Reverse to reverse engineer the code.
Tip: If you select classes you want to reverse engineer in the model before
you open the dialog, the selected classes are automatically added to the
bottom panel.
To fix this problem you should open the Java project specification dialog ( Tools ->
Java / J2EE -> Project Specification) and add the following JARs to the class
path:
%WAS_HOME%\lib\j2ee.jar <== J2EE library
%WAS_HOME%\lib\ras.jar <== WebSphere logging library
When we were developing this book we discovered that Version 4.0 of VisualAge
for Java—the version we were using—was so new that it was not recognized or
officially supported by Rose. We used the following procedure to fool the Rose
installer into believing that a supported version of VisualAge was installed:
► Export VisualAge for Java Version 4.0 registry entries
► Edit exported file to change Version 4.0 to Version 3.5
► Import edited file
► Install Rose
► Delete imported Version 3.5 settings
Although Rose does not at this time officially support VisualAge for Java Version
4.0, we encountered no problems using the bridge—we believe there are no
significant changes to the VisualAge for Java tool API that would prevent the
bridge from functioning in the new version. If you do encounter problems,
however, you may have to wait until Rational officially supports the new version
before you will be able to obtain support for the feature.
In the tree view you should see a single key, named 4.0. Select
the key, then choose the menu option Registry -> Export Registry File to export
the contents of the key to a file. We named our file C:\TEMP\vaj.reg.
In the editor, perform a global find and replace, replacing all instances of the
version ID 4.0 with 3.5. Save the file.
If you now check the registry you should find an additional 3.5 key along with the
4.0 key we saw earlier. Do not start VisualAge for Java with this extra key in the
registry.
Install Rose
Follow the normal Rose installation procedure—the installer should detect that
VisualAge for Java is installed and automatically select the option to install the
bridge.
We do this in the Rose Java project specification dialog, accessed from the menu
via Tools -> Java / J2EE -> Project Specification and clicking on the Code
Generation tab (Figure 6-8). Change the IDE property to specify VisualAge for
Java, rather than the internal editor.
Figure 6-8 Setting the IDE in the Rose project specification dialog
We can do this from the VisualAge for Java Quick Start dialog shown in
Figure 6-10. Open the dialog by pressing F2 or selecting File -> Quick Start from
the menus. In the dialog select Basic -> Rational Rose VAJ Link Plugin Toggle
and click OK to start the plugin.
Figure 6-11 Rose VisualAge for Java link plugin startup message
Figure 6-13 Selecting the VisualAge for Java project for the generated code
We select the appropriate project and click OK. The selected project is
remembered for future use. The code for our classes is generated and imported
into the appropriate project in VisualAge for Java.
We can do this from VisualAge for Java by selecting the classes we want to
update in the model and Tools -> Rational Rose Update Model from the context
menu.
We updated our code in VisualAge for Java with the final version included in the
sample code to illustrate this point. When we reverse-engineered the updated
code into Rose we ended up with the diagram shown in Figure 6-14.
We can also initiate the reverse-engineering of code from within Rose. First we
select Tools -> Java / J2EE -> Reverse Engineer from the Rose menu.
In the reverse engineer dialog (Figure 6-15) expand the class path entry (shown
on the top left) that corresponds to the VisualAge for Java project we are working
with. Locate the package directory containing the files you want to reverse
engineer—when you select the package directory the source files contained
within are displayed in the panel at the top right.
Select the files you want to reverse engineer and click Add to add them to the list
of files in the panel at the bottom of the dialog. Alternatively you can add all files
in a package directory by clicking Add All, or recursively add all files in the
directory and every sub-directory by clicking Add Recursive.
When all of the files you want to reverse engineer are included in the bottom
panel click Select All to select all the files, then Reverse to start the process.
When all files have been reverse-engineered, click Done to return to your
updated model.
XMI toolkit
The XMI toolkit is a component of VisualAge that you can use to maintain
consistency between your model and your code. If you want to use the XMI
toolkit you must first make sure that you selected it as an option when you
installed VisualAge for Java.
The XMI toolkit provides a GUI interface as well as command-line tools for
performing these operations.
We prefer and recommend the use of the Rose bridge to manage updates in your
code and model, mainly because of the superior integration—information can be
exchanged between the two tools on the fly, as opposed to having to save work
to and from files.
The XMI toolkit can be used with other modeling tools, not just with Rose. If you
want to learn more about the XMI toolkit, we suggest you consult the
documentation available from the toolkit GUI’s Help menu.
This option is the least seamless and involves the most effort on the part of the
developer—we recommend its use only as a last resort.
Defining the criteria that determine which of our business objects should
be implemented as EJB components is beyond the scope of this book—we refer
you to the many publications that discuss this subject.
The version of Rose we are working with provides support for different levels of
the J2EE specification. Before we start to create our EJB we must check the
settings in Rose to make sure we generate code that is compatible with
WebSphere Version 4.0.
We open our Java project specification in Rose, using the menu option Tools ->
Java / J2EE -> Project Specification. This displays the project specification
dialog. We click on the J2EE tab to display the dialog shown in Figure 6-17.
Creating a package
Next we create a package in our logical view in which to place the classes that
make up our EJB. We name the package itso.was4ad.ejb.account. We create
the hierarchy of packages in the Rose logical view using the New -> Package
option from the logical view’s context menu. We create each package in turn until
we get the structure shown in Figure 6-18.
As we enter the bean name Account into the Bean Name field in the dialog Rose
automatically fills in the other fields with names according to the convention we
specified earlier. We click OK to create the EJB. Rose adds the components to
our class diagram (Figure 6-20).
We enter the name of our CMP field, number, in the Name field, and click the
button next to the greyed-out Type field to define the field type.
We expand the Java Types and select int from the list (Figure 6-22).
We repeat this process for the remaining CMP fields; for example, the customerId
field (an int) holds the customer number of the customer that owns the account.
The class diagram for our EJB now appears as shown in Figure 6-23.
We add the attribute by selecting the primary key class in the logical view and
New -> Attribute from the context menu (Figure 6-24).
We name the attribute number, and double-click to open the Field Specification
dialog. We change the field type to int and click OK.
In the dialog shown in Figure 6-25 we enter the finder name into the Name field,
and click the button to the right of the Return Type field to select the return type.
Back in the original Method Specification dialog we click OK to add the finder to
the EJB. The EJB home interface in the class diagram is updated (Figure 6-28).
Having completed these updates, the class diagram now appears as shown in
Figure 6-30.
First of all we go to our class diagram and select all four of the classes that make
up our EJB—the home, remote, bean and primary key classes. We then
right-click on our selection and select Java / J2EE -> Generate Code from the
context menu.
The dialog shown in Figure 6-31 appears. We must tell Rose where to store the
generated Java source files for the itso package hierarchy. We want to save our
source in the directory D:\ITSO4AD\dev\src\ejb. We click on Edit to alter the
class path to include this new directory.
We add our new directory to the class path and click OK (Figure 6-32).
When the dialog closes, code generation is complete. We can now look in the file
system to examine the generated code. The code has been generated in the
structure shown in Figure 6-34.
Note that Rose has generated a META-INF directory, which contains a deployment
descriptor for our new EJB. This deployment descriptor is shown in Figure 6-35.
<ejb-jar>
<enterprise-beans>
<entity>
<ejb-name>AccountBean</ejb-name>
<home>itso.was4ad.ejb.account.AccountHome</home>
<remote>itso.was4ad.ejb.account.Account</remote>
<ejb-class>itso.was4ad.ejb.account.AccountBean</ejb-class>
<persistence-type>Container</persistence-type>
<prim-key-class>itso.was4ad.ejb.account.AccountKey</prim-key-class>
<reentrant>False</reentrant>
<cmp-field>
<field-name>number</field-name>
</cmp-field>
<cmp-field>
<field-name>customerId</field-name>
</cmp-field>
<cmp-field>
<field-name>amount</field-name>
</cmp-field>
<cmp-field>
<field-name>checking</field-name>
</cmp-field>
</entity>
</enterprise-beans>
</ejb-jar>
Define an EJB group in VisualAge for Java, then select EJB -> Add -> Import
from Rose or XMI.
In the Import SmartGuide select the Rose model file (.mdl), the EJB group, and
enter the name of the package for the code. Skip the next page (virtual paths)
and click Finish. The EJBs are added to the EJB group and can be tailored in
VisualAge for Java.
The first point can be left quite vague and is often determined late in the project,
either according to time availability or because of an incremental development
strategy, which mainly focuses on the next iteration dates, leaving the later
schedule in the dark.
The second point is far more critical, yet it is surprisingly often decided with
little consideration, according to enterprise-wide standards, contractual
agreements, or just because of a hasty project start. This is a shame, because
choosing the right basis for an application is like choosing the foundation for a
house: it supports the development and determines the robustness of the result.
In this section, we introduce some concepts and list some major nontechnical
dangers when considering a framework. Avoiding these brings the necessary
freedom that allows focus on the technical considerations.
Sometimes the term framework is also used to refer to an object model that can
be extended (mainly by inheritance) to suit the custom application needs. The
backbone code then considers inherited objects just like framework parent
objects through polymorphism.
Additional information about the Java core classes can be found at:
http://java.sun.com/j2se/1.3/docs/api/index.html
Framework drawbacks
Every medal has its reverse. Frameworks bring some advantages, but they also
introduce some limitations:
► Frameworks are not versatile
They are good for what they have been designed for, and nothing else.
Anything different has to be done by working around the framework. This is what
makes the choice of framework critical. Many projects end up with an
application that has replaced all the framework parts with custom code. In that
case, the use of a framework results in a loss of time.
► Frameworks are not flexible
If you want some minor change to it, it is often impossible to get it from the
provider, because any change would break other existing code. This is particularly
true with a widely adopted framework, where backward compatibility is critical.
► Frameworks impose a way of thinking
Custom code has to stick to the framework jigsaw. Different ideas just do not
fit. If the framework is well designed, this can be a good thing because it
prevents bad practices.
► Frameworks are specific
If the team is new to it, the learning curve can be long. This needs to be
qualified: the concepts behind it are usually standard and the learning process
is therefore limited to new terms.
After the first technical presentations or classes, the team usually rejects the
potential or chosen framework as is. It is considered "bad" and "useless."
Sometimes the comments are worse, or even abusive. People do not want to use
it. They have their own ideas about how "to do it better." Wily programmers talk
to their managers in terms of cost: "we can do it ourselves and it will be
cheaper." This is seldom true; for large, repetitive projects, it is almost always false.
So, what can we do about this? The solution falls to the project manager and his
or her skills in human relationships. Simply forcing the team to adopt the
framework will not work. The negotiation is a matter of honesty. Using a
framework is often a win-win situation that the technical people do not see. They
know how things have to be done, and they often perceive the framework as a
denial of their ability to do what the framework does.
In fact, the contrary is true: they need to be aware that they are the best people
for the job, because they already know the environment they will have to integrate
with. They can add their own value to the framework in the fastest way. And if
they are conscious of their own value, which is often not the case, they can set
aside their fear of doing new things and learn from a new experience.
It is more than coincidence that some tools fit and some do not. Some
frameworks have been designed with certain tools, or for them, and sometimes both.
For instance, WSBCC has been designed with and for VisualAge for Java. By
contrast, Struts has been developed totally independently and is more
difficult to integrate with VisualAge for Java. This also shows the mismatch
between two different development worlds: commercial and open source.
Actually, this can be solved as we describe in “Generating the WTE webapp file
from a web.xml file” on page 287.
Note: This section and its subsections contain documentation taken from the
official Jakarta project Struts home page and from the official Struts user guide
at:
http://jakarta.apache.org/struts
http://jakarta.apache.org/struts/userGuide/introduction.html
It also contains some quotes from Kyle Brown’s articles on Struts in the
VisualAge Developer Domain (VADD):
http://www.ibm.com/vadd
► Apache Struts and VisualAge for Java, Part 1: Building Web-based
Applications using Apache Struts
► Apache Struts and VisualAge for Java, Part 2: Using Struts in VisualAge
for Java 3.5.2 and 3.5.3
The goal of the Struts project is to provide an open source framework useful in
building Web applications with Java Servlet and JavaServer Pages (JSP)
technology. Struts encourages application architectures based on the
model-view-controller (MVC) design paradigm, colloquially known as Model II.
The problem is that programmers are too often faced with "reinventing the wheel"
each time they begin building a new Web-based application. Having a framework
to do this work for them would make them more productive and let them focus
more on the essence of the business problems they are trying to solve, rather
than on the accidents of programming caused by the limitations of the technology
(No Silver Bullet: Essence and Accident in Software Engineering. Fred Brooks.
IEEE Computer, April 1987).
Simply put, Struts is an open-source framework for solving the kind of problems
described above. Information on Struts, a set of installable JAR files, and the full
Struts source code is available at the Struts framework Web site. Struts has been
designed from the ground up to be easy to use, modular (so that you can choose
to use one part of Struts without having to use all the others), and efficient. It has
also been designed so that tool builders can easily write their tools to generate
code that sits on top of the Struts framework (Ibid. Kyle Brown).
The controller bundles and routes HTTP requests from the client (typically a user
running a Web browser) to framework objects and corresponding extended
objects, deciding what business logic function is to be performed, then delegates
responsibility for producing the next phase of the user interface to an appropriate
view component like a JSP.
Each mapping defines a path that is matched against the request URI of the
incoming request, and the fully qualified class name of an action class (that is, a
Java class extending the Action class) which is responsible for performing the
desired business logic, and then dispatching control to the appropriate View
component to create the response.
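For illustration, a mapping in the Struts configuration file (typically
WEB-INF/struts-config.xml) might look like the following sketch; the path, form
bean, and class names are assumptions for the PiggyBank example, not part of the
Struts distribution:

<action-mappings>
  <action path="/login"
          type="itso.was4ad.webapp.struts.LoginAction"
          name="loginForm"
          scope="request"
          input="/login.jsp">
    <forward name="success" path="/mainMenu.jsp"/>
    <forward name="failure" path="/login.jsp"/>
  </action>
</action-mappings>

When a request for the /login path arrives, the ActionServlet locates this
mapping, populates and validates the associated form bean, and then calls the
perform method of the LoginAction instance.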
The Struts ActionServlet basically plays the same role as the simpler
itso.was4ad.webapp.controller.ControllerServlet we defined in the first
version of the PiggyBank application.
Action objects
The action object can handle the request and respond to the client (usually a
Web browser), or indicate that control should be forwarded to another action. For
example, if a login succeeds, a loginAction object may want to forward control to
a mainMenu action.
Action objects are linked to the application controller, and so have access to that
servlet's methods. When forwarding control, an object can indirectly forward one
or more shared objects, including JavaBeans, by placing them in one of the
standard collections shared by Java servlets.
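A minimal action class is sketched below; the LoginForm and LoginHandler
classes, and the forward names, are hypothetical helpers introduced here for
illustration only:

package itso.was4ad.webapp.struts;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class LoginAction extends Action {
   public ActionForward perform(ActionMapping mapping, ActionForm form,
                                HttpServletRequest request,
                                HttpServletResponse response)
         throws IOException, ServletException {
      LoginForm loginForm = (LoginForm) form;
      // Pass plain Java values to the business logic (LoginHandler is a
      // hypothetical business-logic bean) so that it stays independent
      // of the servlet API
      boolean loggedIn = new LoginHandler().login(loginForm.getUserId(),
                                                  loginForm.getPassword());
      if (loggedIn) {
         // Forward control towards the main menu, as described above
         return mapping.findForward("success");
      }
      return mapping.findForward("failure");
   }
}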
(Figure: class diagram of the Action class with its perform() method)
The action set is very similar to the command set we defined in the first version of
the PiggyBank application. A corollary of this observation is that the action
classes to define in Struts correspond to the application design use cases in the
same way.
Form beans
JavaBeans can also be used to manage input forms. A key problem in designing
Web applications is retaining and validating what a user has entered between
requests. With Struts, you can easily store the data for an input form in a form
bean. The bean is saved in one of the standard, shared context collections, so
that it can be used by other objects. The action object receives it as input to
perform its task.
In the case of validation errors, Struts has a shared mechanism for raising and
displaying error messages. It automatically invokes the ActionForm.validate
method whenever the JSP page containing the form corresponding to this
ActionForm submits the form. Any type of validation can be performed in this
method. The only requirement is that it returns a set of ActionError objects in the
return value. Each ActionError corresponds to a single validation failure, which
maps to a specific error message. These error messages are held in a properties
file that the Struts application refers to.
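A corresponding form bean might look like the following sketch; the property
names and the message key are assumptions:

package itso.was4ad.webapp.struts;

import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionError;
import org.apache.struts.action.ActionErrors;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionMapping;

public class LoginForm extends ActionForm {
   private String userId;
   private String password;

   public String getUserId() { return userId; }
   public void setUserId(String userId) { this.userId = userId; }
   public String getPassword() { return password; }
   public void setPassword(String password) { this.password = password; }

   // Called by the framework when the form is submitted; each validation
   // failure is reported as an ActionError whose key maps to a message
   // in the application resources properties file
   public ActionErrors validate(ActionMapping mapping,
                                HttpServletRequest request) {
      ActionErrors errors = new ActionErrors();
      if (userId == null || userId.trim().length() == 0) {
         errors.add("userId", new ActionError("error.userid.required"));
      }
      return errors;
   }
}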
(Figure: class diagram relating the ActionServlet, the Action with its perform()
method, and the ActionForm with its validate() method, through instantiate and
use relationships)
The action object can then check the contents of the form bean before its input
form is displayed, and also queue messages to be handled by the form. When
ready, the action object can return control by forwarding to its input form,
usually a JSP. The controller can then respond to the HTTP request and direct
the client to the JSP. Figure 7-4 summarizes these operations.
Custom tags
There are four JSP tag libraries that Struts includes:
1. The HTML tag library, which includes tags for describing dynamic pages,
especially forms.
2. The beans tag library, which provides additional tags for providing improved
access to Java beans and additional support for internationalization.
3. The logic tag library, which provides tags that support conditional execution
and looping.
4. The template tag library for producing and using common JSP templates in
multiple pages.
Using these custom tags, the Struts framework can automatically populate fields
from and into a form bean, which has two advantages:
► The only thing most JSPs need to know about the rest of the framework is the
proper field names and where to submit the form. The associated form bean
automatically receives the corresponding value.
► If a bean is present in the appropriate scope, for instance after an input
validation routine, the form fields will be automatically initialized with the
matching property values.
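As a sketch, a login page built with the HTML and bean tag libraries might look
like this; the field and message key names are assumptions and must match the
form bean and the message resource file:

<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<html:errors/>
<html:form action="/login">
  <bean:message key="prompt.userId"/>   <html:text property="userId"/>
  <bean:message key="prompt.password"/> <html:password property="password"/>
  <html:submit/>
</html:form>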
Internationalization
Components such as the messages set by the action object can be output using
a single custom tag. Other application-specific tags can also be defined to hide
implementation details from the JSPs.
The custom tags in the Struts framework are designed to use the
internationalization features built into the Java platform. All the field labels and
messages can be retrieved from a message resource, and Java can
automatically provide the correct resource for a client's country and language. To
provide messages for another language, simply add another resource file.
Code dependencies
For the simplest applications, an action object can handle the business logic
associated with a request. However, in most cases, an action object should pass
the request to another object, usually a JavaBean. To allow reuse on other
platforms, business logic JavaBeans should not refer to any Web application
objects. The action object should translate needed details from the HTTP
request and pass those along to the business-logic beans as regular Java
variables.
In a database application, the business logic beans might connect to and query
the database and return the result set back to the action servlet to be stored in a
bean and then displayed by the JSP. Neither the action servlet nor the JSP has
to know or care where the result set comes from.
Downsides
No framework is perfect. Struts cannot be all things to all people, so you lose
some things when using Struts that you can do when programming directly to the
servlet API. Possibly the biggest downside is that with Struts you have only one
servlet (the ActionServlet) serving up all of the dynamic pages of your Web
application. Having only a single servlet per application is certainly a problem
Development
See “Jakarta Struts” on page 284 for further information on application
development with Jakarta Struts.
The main technologies used are HTML, Java, and XML. The skills required to
develop with WSBCC are mainly junior-level JSP skills, plus senior-level Java
skills for the framework extensions.
The development model creates a clear separation of roles that allows project
team members to focus on their specific tasks. In a typical usage, WSBCC relies
less on high programming skills because it provides components that are easy to
understand and use, from back-end connectors to user interface building blocks.
WSBCC uses the standard TCP/IP network computing architecture for client
administration, code distribution and server management. The deployed
elements are modular and can be operated with minimal system administrator
training.
Architecture
The architecture of a WSBCC Web application is based on a logical three-tier
model and standard communication protocols (Figure 7-5):
1. Back-end enterprise server
2. Middle-tier application server
3. Browser
The enterprise server, or back-end server, contains the existing core business
logic of the institution. Such a system and its messaging interface remain
untouched as the Web application uses the WSBCC set of back-end system
connector components and message formatters.
Browser
The middle-tier server hardware communicates with the clients using a TCP/IP
connection and the HTTP protocol. The WebSphere Application Server and its
associated HTTP Server run on the middle-tier server for this purpose and
process requests from clients once the application is running. Handling client
requests involves managing user navigation and interface, launching business
operations that interact with back-end transactional systems, processing local
transactions, and sending HTML responses to the client by running JSPs (or an
appropriate response for other channels).
All these entities can be externalized in XML configuration files. Here we present
them, describe the concepts they carry, and explain what information is parameterized in the
standard XML files.
An operation step is an entity that represents the set of interactions with the
services that are required for a specific operation. Operation steps are managed
by operations, and each operation’s definition specifies the operation steps it will
use.
An operation flow is normally common to many different operations, with the only
differences being the data elements and the formats that are used in the
interaction with the services. Because these differences can be handled by the
formatting services, an operation flow is by nature a highly reusable part. Tasks
inside the operation flow result in one or more operation steps. Operation steps
are also highly reusable pieces of code that can be used by many different
operations.
Context
When an operation is being performed, all the global data and services required
by the application can be grouped into different sets of related information. Each
of these sets of information logically belongs to a different type of banking entity:
some related to the user, some to the branch, some to the client, some to the
server, some to the whole banking institution, and so forth.
Each of these sets of related data and services makes up a context. The data
used by an application can be considered as a context hierarchy, where each
context level is able to provide the information it contains or the information
belonging to contexts in upper levels. Figure 7-7 shows an example of context
hierarchy.
Each operation model has its own context, the operation context, with a specific
set of operation data that includes elements for data input and for data received
from external sources (for example, host or local DBM). Because the operation
context is part of the context structure, the operation can access data at different
levels in the context chain. When a service is requested, the operation will use
the more specific service associated with the identification defined in the context
chain.
Services
Services act like connectors. They are defined in the context. When an
operation wants to interact with a service, it uses a service alias to get a
reference to the service.
Please note that the context hierarchy can contain more than one service with
the same alias. In that case, the service with the requested alias which is closest
to the operation context is returned.
The framework provides five base classes for dealing with data elements
(Figure 7-8).
(Figure 7-8 shows the data element classes: DataElement, DataField,
DataCollection, KeyedCollection, and IndexedCollection; a collection contains
0..n data elements.)
The data element hierarchy is extensible, and new classes can be derived easily
when more functionality is needed. The classes that conform to the data
hierarchy do not have exactly the same interface: only data fields have a value,
and only collections have add, remove, or at methods.
However, they have common instance variables such as name, and they share a
common base class to be included inside collections (generally, collections deal
with data elements). Methods for adding, retrieving, and deleting data elements
are provided. There are also methods for setting and getting the value of the data
elements contained in a collection. To maximize reusability of code, the
DataElement class follows the composite design pattern, which is one in which
any element of the collection can itself be a collection.
To refer to a data element in an inner collection, you must provide the full path.
For example, if the data field dataField1 is inside keyedCollection1, which is
inside keyedCollection2, then to access dataField1, you would specify:
keyedCollection1.dataField1. Note that specifying keyedCollection2 is not
required because you are asking for its element called
keyedCollection1.dataField1.
Another option is to use the * modifier, for example, *.dataField1. In this case,
the first data element named dataField1 is returned. The use of the * modifier is
not recommended when there are different data elements with the same name in
the same structure, or when the structure is very complex, because of the
performance impact.
The framework provides the ability to work with or without typed data. Typed and
untyped data elements can coexist at run time, and this allows each operation to
be designed and implemented in the appropriate data typing mode. For example,
a typed data element knows how to format itself for display, how to clone itself,
and the nature of any validation required when requested to change its value.
The information that a data element knows about itself is made available by
associating it with an object of the PropertyDescriptor class. Each
PropertyDescriptor in turn is associated with a Validator and a Converter. A
typed data element is an instance of a DataElement class in which the
PropertyDescription property is not null.
One of the benefits of type-awareness is the ability to exploit object identity. Data
elements that are type-aware can dynamically construct an identifier, which
distinguishes them from other data elements of the same type. Note that this is
object identity of a business object, not a Java object. For example, two
instances of customer 123 are distinct Java objects, but are the same customer
because their identifiers are equal.
Formats
Each operation manages a set of data items, whose values may be taken from
input screens, other devices, shared data repositories (branch data, user data),
host replies to a transaction, and so forth. This data must be formatted and
combined to build the messages that are used in various ways, such as to send a
transaction to the host, write a journal record, print a form, and so forth. For each
of these steps, the data items can be formatted differently depending on the
interacting object requirements (such as a host, electronic journal, financial
printer), making the formatting process complex.
The format classes are examples of the composite design pattern. They
implement the concept of collections of elements that can themselves be
collections.
Development
See “WebSphere Business Components Composer” on page 303 for further
information on application development with WebSphere Business Components
Composer.
A well-designed environment will save you time and money, and allow you to
cope with the demands of developing applications today. In particular you should
aim to:
► Plan for productivity
– Provide tools to simplify and speed-up common tasks
– Make use of frameworks and off-the-shelf components where appropriate
– Reduce ramp-up time for new staff by using standard tools and processes
wherever possible, and by documenting the complete environment
– Automate wherever possible
► Plan for flexibility
– Structure your code into stand-alone modules that can be re-used if
requirements change or the project grows
► Plan for deployment
– Make sure you can build and deploy your code quickly and easily
– Include configurable logging and tracing in your code from day one
– Consider application performance during every activity
The chapters that follow present some ideas you can use to plan and build a
development environment suitable for your project’s needs.
In reality, however, our work will stray into the assembly area, because we will
need to create EAR files in order to deploy and unit test our code, and depending
on the size and structure of our development organization, application assembly
may well be a responsibility of the development team.
Because this book is concerned primarily with application development, and not
assembly and deployment (in the true J2EE sense of the words), we focus mainly
on the creation of J2EE modules. We do however cover the creation and
deployment of a single EAR containing all of our modules for unit testing
purposes.
Automation opportunities
Look for opportunities to automate tasks and processes in your development
environment. Automation can deliver significant advantages to a development
project. The potential benefits include:
► Improved developer productivity
► Reduced turnaround time for builds and code fixes
► Better consistency in application code
► Improved quality
► Reinforcement of development standards and policies
Within each discussion in this publication, we highlight areas where tooling and
automation may pay dividends, and suggest ways in which you can leverage
automation to improve your development environment.
First we describe how the various files that make up a project may be organized
in a directory structure, and how to use the SDK tools to compile and build J2EE
modules for assembly into an application.
Next we investigate how Ant, a popular open source build tool from the Apache
Jakarta project, can be used to automate these build tasks, and describe how to
install and configure Ant to build the sample PiggyBank application.
The final part of this chapter discusses how to work with meta-data files, and
provides hints and tips to assist you with developing your own J2EE applications.
If you plan to use more sophisticated tools during your development project, you
may still find the discussions in this chapter give a useful overview of the
low-level activities involved in building a J2EE application for WebSphere.
Just as no two projects are exactly alike, no one scheme fits every situation. In
this chapter we present solutions for our example application, and attempt to
justify and explain the decisions we have made.
All of the files required to develop and build the PiggyBank application are
managed under a single directory structure. Under that top level directory we
created separate directories for source code, intermediate code produced while
building the application, and the deliverable application modules. We also
created a directory for documentation, which includes the output from the
analysis and design activities described in Part 2, “Analysis and design” on
page 81, as well as that generated from the source code using the javadoc tool,
and other documentation created during the course of the project.
We decided to further split the source code directory along the lines of
deliverables, creating five separate source subtrees, one for each deliverable.
The justifications for the split along these lines are:
► There are clear boundaries between code that will be deployed into different
parts of the infrastructure
► We separate project specific code from the code that makes up reusable
components
► Illegal dependencies between modules can be detected at compile time—for
example EJB code that depends upon a class in the servlet tree will not
compile
We created a separate directory in the Web application part of the source tree in
which to manage the Web content for the Web application, in which we include
the JSPs as well as static files such as HTML and images.
Note: Windows NT does not make it easy to specify the case of directory
names using Explorer. The META-INF and WEB-INF directory names must be in
upper case in J2EE archive files. You can create the directories in upper case
using Explorer or using the command line mkdir command, but Explorer may
display the names in the incorrect case.
(Figure: the development tree, comprising the documentation directory, the
deliverable modules, the source tree, the meta-data directories, and the Web
content.)
For this example we use the following tools in conjunction with the Windows NT
command line:
javac This is the Java compiler. It takes Java source files and compiles
them into class files containing Java byte code.
jar This tool is used to manage Java archives, which are collections
of multiple files rolled up into a single archive file.
javadoc This tool processes Java source files looking for specially
formatted comments that contain documentation about the
nearby code. All of the Java API reference documentation is
generated using the javadoc tool.
We must update our PATH so that we can locate the SDK tools. The SDK is
installed in the java subdirectory of the WebSphere Application Server directory,
and the tools are located in the SDK bin directory. We must also update our
class path so that the compiler can locate the J2EE runtime libraries, as well as
dependent application classes. The commands to update the PATH and
CLASSPATH are shown in Figure 9-2.
set PATH=%PATH%;D:\WebSphere\AppServer\java\bin
set CLASSPATH=%CLASSPATH%;D:\WebSphere\AppServer\lib\j2ee.jar
set CLASSPATH=%CLASSPATH%;D:\ITSO4AD\dev\build\common
set CLASSPATH=%CLASSPATH%;D:\ITSO4AD\dev\build\ejb
set CLASSPATH=%CLASSPATH%;D:\ITSO4AD\dev\build\usecase
Figure 9-2 Setting environment variables for building on the command line
Note: We do not have to include the client and Web application build
directories in the class path, because no other code should have
dependencies upon classes in these directories. By not including them in the
class path, we ensure that erroneous dependencies are discovered at compile
time, and not during deployment.
(Figure: the five source subtrees: Web client, standalone client, use cases,
EJBs, and common code.)
We do not have to specify the -classpath parameter because javac will pick this
up from the environment.
D:\>cd ITSO4AD\dev\src\common
Figure 9-4 Compiling the source files for the common code
This procedure must be repeated for the EJB source files, then the use case,
standalone client and servlet sources. For each subtree we change to the base
directory where the source is located, create the file list and execute the Java
compiler. Once we have completed all five sets of compilations we have a full set
of .class files in the directory structure under D:\ITSO4AD\dev\build.
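For example, for the common code the sequence might look like the following
sketch; the use of a file list and the exact options shown are our assumptions:

D:\ITSO4AD\dev\src\common>mkdir ..\..\build\common
D:\ITSO4AD\dev\src\common>dir /s /b *.java > javafiles.lst
D:\ITSO4AD\dev\src\common>javac -d ..\..\build\common @javafiles.lst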
D:\ITSO4AD\dev\build>cd common
Note: The jar command in Figure 9-5 must be entered on a single line.
The EJB JAR file differs from a regular JAR file, however, in that it also requires
deployment descriptor information that describes the EJBs found in the JAR file.
Our EJB deployment descriptor is named ejb-jar.xml and is located in the
src\ejb\META-INF directory. The same directory also contains our manifest
information, and files that describe WebSphere-specific deployment information;
see “Working with meta-data” on page 226 to learn more about these additional
meta-data files.
D:\ITSO4AD\dev\build\common>cd ..\ejb
Note: We have to make sure that the META-INF directory name in the archive
is correctly specified in upper case, even though the Windows file system is
not case sensitive. We do this by the using upper case name in the
parameters to the jar command.
D:\ITSO4AD\dev\build\ejb>cd ..\webapp
D:\ITSO4AD\dev\build\webapp>mkdir D:\temp\wartemp\WEB-INF\classes
D:\ITSO4AD\dev\build\webapp>xcopy * D:\temp\wartemp\WEB-INF\classes /s
D:itso\was4ad\webapp\command\CommandConstants.class
D:itso\was4ad\webapp\command\Login.class
D:itso\was4ad\webapp\command\Logout.class
D:itso\was4ad\webapp\command\MainMenu.class
D:itso\was4ad\webapp\controller\Command.class
D:itso\was4ad\webapp\controller\ControllerServlet.class
D:itso\was4ad\webapp\controller\Error.class
7 File(s) copied
Figure 9-8 Creating temporary directory structure used for building the WAR file
Figure 9-9 shows how we can now create the WAR file in the modules directory,
using the -C parameter on the jar command to pull in the files from the other
locations, before removing the temporary directory.
D:\ITSO4AD\dev\build\webapp>rmdir /s D:\temp\wartemp
D:\temp\wartemp, Are you sure (Y/N)? y
Note: The WEB-INF directory name must be spelled in upper case in the
parameters to the jar command.
Generating documentation
javadoc provides a large number of options that control the output from the tool.
In this example we use a simple subset to generate the documentation for our
code, using the standard doclet supplied with the Java 2 SDK. For a more
complete description of the javadoc tool see the SDK documentation at:
http://java.sun.com/j2se/1.3/docs/tooldocs/javadoc/index.html
javadoc options
The options that we use to invoke javadoc are described below:
-private This option causes the tool to generate documentation for all
classes and members, regardless of their visibility. We chose this
option because the intended readers of the documentation will
be maintaining the application code, not just using an
implemented API.
-d This option defines the target directory for the generated HTML.
We decided to generate the documentation for all of the code
into a single directory, the D:\ITSO4AD\dev\doc\javadoc
directory.
-use This option causes pages describing class and package usage to
be generated.
-windowtitle This option defines the title of the browser window in which the
documentation is displayed.
-doctitle This option defines the title on the documentation index page.
D:\ITSO4AD\dev\build\client>cd D:\ITSO4AD\dev\src
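From the source directory an invocation along the following lines, entered as a
single command, generates the documentation (the title strings are assumptions):

D:\ITSO4AD\dev\src>javadoc -private -use -d D:\ITSO4AD\dev\doc\javadoc
   -windowtitle "PiggyBank Application" -doctitle "PiggyBank Application"
   -sourcepath common;ejb;usecase;client;webapp
   itso.was4ad.data itso.was4ad.exception itso.was4ad.helpers
   itso.was4ad.ejb.account itso.was4ad.ejb.customer itso.was4ad.usecase
   itso.was4ad.client.swing itso.was4ad.webapp.command
   itso.was4ad.webapp.controller itso.was4ad.webapp.view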
Traditionally these tasks have been performed by shell scripts or batch files in
UNIX or Windows environments, or by using tools such as make. While these
approaches are still valid, developing Java applications—especially in a
heterogeneous environment—introduces new challenges. A particular limitation
is the close-coupling to a particular operating system inherent in using these
tools.
What is Ant?
Ant attempts to solve some of these issues by providing a framework that
implements extensions in Java, instead of issuing shell commands to perform
build tasks. The base Ant package comes with a comprehensive set of standard
extensions (known as tasks in Ant) for performing common actions such as
compiling source code and manipulating files. If a project requires a more
specialized task, and a suitable task is not already available in the standard
optional library, it is possible to write your own tasks in Java.
Ant is a subproject of the Apache Jakarta project, part of the Apache Software
Foundation. This is the organization responsible for the open source Apache
Web server, the basis of the IBM HTTP Server shipped with WebSphere
Application Server. The goal of the Jakarta project is to “provide
commercial-quality server solutions based on the Java Platform that are
developed in an open and cooperative fashion.”
To find out more about the Jakarta project visit the Jakarta Web site at:
http://jakarta.apache.org/
This section provides a basic outline of the features and capabilities of Ant. For
complete information you should consult the Ant documentation included in the
Ant distribution or available on the Web at:
http://jakarta.apache.org/ant/manual/
We unpacked the binary distribution file into a temporary directory and copied all
of the files into D:\ant. Following the Ant installation guide we then set the
environment variables ANT_HOME and JAVA_HOME, and updated our PATH. We did
this for the entire machine by selecting Start -> Setting -> Control Panel ->
System -> Environment, and editing the values as shown in Figure 9-13.
We set ANT_HOME to the base location of the Ant software, in our case D:\ant.
JAVA_HOME is used by Ant to determine the location of the JDK used to run the
tool; in our case we decided to use the JDK shipped with WebSphere, which we
have installed in D:\WebSphere\AppServer\java. We also added D:\ant\bin to
the PATH environment variable, so we could pick up the Ant executable.
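The equivalent settings, if made from a command prompt instead, would be:

set ANT_HOME=D:\ant
set JAVA_HOME=D:\WebSphere\AppServer\java
set PATH=%PATH%;D:\ant\bin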
The final configuration step was to install the optional Ant tasks that are also
available on the Web site in a separate package from the main Ant distribution.
We downloaded the JAR containing the optional tasks and installed it into the
D:\ant\lib directory.
Tip: The script that starts the ant command automatically adds any JAR files
in the lib directory under ANT_HOME to the class path when running Ant.
Built-in tasks
A comprehensive set of built-in tasks is supplied with the Ant distribution. The
tasks that we use in this example are described below:
ant Invokes Ant using another build file
copy Copies files and directories
delete Deletes files and directories
echo Outputs messages
jar Creates Java archive files
javac Compiles Java source
javadoc Generates documentation from Java source
mkdir Creates directories
tstamp Sets properties containing date and time information
war Creates WAR files
(Figure: the src directory tree and its build.xml files, with the init, compile,
package, document, and clean targets.)
The build files are all named build.xml. This is the default name assumed by Ant
if no build file name is supplied. This allows a build of the entire project or a
single subproject to be performed by simply changing to the appropriate directory
and issuing the ant command with no arguments. Each build file has the
following common targets:
init Performs build initialization tasks—all other targets depend upon
this target
compile Compiles Java source into .class files
package Creates the deliverables for each module—depends upon the
compile target
clean Removes all generated files—used to force a full build
The master build file has additional targets that are appropriate for the entire
project, for example to generate documentation from the Java source code.
<?xml version="1.0"?>
</project>
Figure 9-16 shows the outer tags from the master build file. As we progress
through this section of the book we add XML fragments inside the project tags,
building up the file as we go. See “Using the Web material” on page 558 for the
location of the complete master build file.
The solution is to use a hierarchy of property files. Ant is able to read properties
from files that use the format recognized by the Java java.util.Properties
class. Once a property has been set, however, it may not be changed. We first
check the current user’s home directory for a properties file and use any
properties defined there, and then read any remaining properties from a file
called global.properties that is stored with the master build file.
The XML that implements this scheme is shown in Figure 9-17. The user.home
property is provided by Ant and is set to the home directory of the user who
started Ant. On our Windows NT system, for example, this resolves to
C:\WinNT\Profiles\resident. The basedir property is also set by Ant, and
defaults to the location of the build file.
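In outline, the property loading amounts to two property tasks (a sketch; the
override file name matches the comment in the global properties file shown
below):

<property file="${user.home}/override.properties"/>
<property file="${basedir}/global.properties"/>

Because a property cannot be changed once it has been set, any value defined in
the user's override.properties takes precedence over the value in
global.properties.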
This allows the subproject build files to locate the global properties in the case
where Ant is executed against the subproject build file directly, as well as when
invoked from the master build file.
The initial global properties in global.properties are shown in Figure 9-19. Note
that unlike in regular Java property files we are able to reference other properties
in property values. Ant resolves references at the time the property is set, so
properties must be defined in the correct order.
#
# Global build properties file
#
# If you need to override anything in here create an
# override.properties file in your home directory
#
#
# Software locations
#
global.was.dir=D:/WebSphere/AppServer
#
# Destination directories
#
global.build.dir=${global.dev.dir}/build
global.module.dir=${global.dev.dir}/modules
global.javadoc.dir=${global.dev.dir}/doc/javadoc
It is also possible to override properties on the ant command line using the -D
parameter of the ant command, for example:
ant -Dglobal.was.dir=C:/WebSphere/AppServer
Build targets
The master build file contains a number of build targets; these are our four
standard targets that we define in all our build files, and a number of project-wide
targets that are unique to the master build file.
Initialization target
The first target we describe is the init target. All other targets in the build file
depend upon this target.
In the init target we execute the tstamp task to set up properties that include
timestamp information. These properties are available throughout the whole
build. We also write out a message indicating that the build is starting. The XML
for the init target is shown in Figure 9-21.
<target name="init">
<tstamp/>
<echo>Build of ${ant.project.name} started at ${TSTAMP} on ${TODAY}</echo>
</target>
init:
[echo] Build of itso4ad started at 1211 on May 23 2001
BUILD SUCCESSFUL
Figure 9-22 Output from the master build file init target
The package target in the master build file does not have to depend upon the
compile target. This is because the package targets in all the subproject build
files already depend upon compile in their own build files.
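A sketch of how the master package target might delegate to the subprojects
using the ant task; the subproject directory names are assumed to match the
source subtrees:

<target name="package" depends="init">
  <ant dir="common" target="package"/>
  <ant dir="ejb" target="package"/>
  <ant dir="usecase" target="package"/>
  <ant dir="client" target="package"/>
  <ant dir="webapp" target="package"/>
</target>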
Documentation target
We generate documentation from the source code using javadoc at the project
level, rather than in each subproject. We do this because we want to have a
single common set of documentation for all the code, with a single index and
cross references that work between sub projects.
The document target in the master build file uses the built in javadoc task
provided by Ant. This task has attributes that map to options of the javadoc tool
provided by the Java SDK described in “javadoc options” on page 195. We obtain
the values for many attributes from global properties—we added the properties
shown in Figure 9-25 to the global properties file.
We also use a path to define the list of directories containing source code to be
scanned by the javadoc task. The path is defined near the beginning of the
master build file, after the XML that defines the global properties. The XML is
shown in Figure 9-26. The path may come in useful if we have to add any new
targets that must process all of the source files.
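A sketch of the path definition and the document target follows; the property and
reference names are assumptions:

<path id="javadoc.source.path">
  <pathelement location="common"/>
  <pathelement location="ejb"/>
  <pathelement location="usecase"/>
  <pathelement location="client"/>
  <pathelement location="webapp"/>
</path>

<target name="document" depends="init">
  <echo>Generating documentation for ${ant.project.name}</echo>
  <javadoc destdir="${global.javadoc.dir}"
           private="true"
           use="true"
           windowtitle="${global.javadoc.windowtitle}"
           doctitle="${global.javadoc.doctitle}"
           bottom="Generated on ${TODAY}"
           sourcepathref="javadoc.source.path"
           packagenames="itso.*"/>
  <echo>Finished generating documentation for ${ant.project.name}</echo>
</target>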
The XML for this target is shown in Figure 9-27. Note how we use the TODAY
property set by the tstamp task to include the date at the bottom of each page
generated. The output generated when we use Ant to generate the javadoc
documentation is shown in Figure 9-28.
Buildfile: build.xml
init:
[echo] Build of itso4ad started at 1356 on July 16 2001
document:
[echo] Generating documentation for itso4ad
[javadoc] Generating Javadoc
[javadoc] Javadoc execution
[javadoc] Loading source files for package itso.was4ad.data...
[javadoc] Loading source files for package itso.was4ad.exception...
[javadoc] Loading source files for package itso.was4ad.helpers...
[javadoc] Loading source files for package itso.was4ad.ejb.account...
[javadoc] Loading source files for package itso.was4ad.ejb.customer...
[javadoc] Loading source files for package itso.was4ad.usecase...
[javadoc] Loading source files for package itso.was4ad.client.swing...
[javadoc] Loading source files for package itso.was4ad.webapp.command...
[javadoc] Loading source files for package itso.was4ad.webapp.controller...
[javadoc] Loading source files for package itso.was4ad.webapp.view...
[javadoc] Constructing Javadoc information...
[javadoc] Building tree for all the packages and classes...
[javadoc] Building index for all the packages and classes...
[javadoc] Building index for all classes...
[echo] Finished generating documentation for itso4ad
BUILD SUCCESSFUL
<?xml version="1.0"?>
<!-- If we were invoked by the master file TSTAMP will be set already -->
<target name="init" unless="TSTAMP">
<tstamp/>
<echo>
Build of ${ant.project.name} started at ${TSTAMP} on ${TODAY}
</echo>
</target>
</project>
The only text in this skeleton that is specific to a subproject is the name attribute of
the outer project tag.
The complete XML for the compile task is shown in Figure 9-30.
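In outline it looks like the following sketch; the build.dir and
project.class.path names are assumed local properties:

<target name="compile" depends="init">
  <echo>Compiling ${ant.project.name}</echo>
  <mkdir dir="${build.dir}"/>
  <javac srcdir="${basedir}"
         destdir="${build.dir}"
         debug="${global.javac.debug}"
         optimize="${global.javac.optimize}"
         deprecation="${global.javac.deprecation}"
         classpathref="project.class.path"/>
  <echo>Finished compiling ${ant.project.name}</echo>
</target>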
The values for the debug, optimize and deprecation attributes are obtained from
global properties. Figure 9-31 shows the information we add to the
global.properties file to support this.
#
# javac settings
#
global.javac.debug=on
global.javac.optimize=off
global.javac.deprecation=on
Figure 9-32 Setting up local properties and class path for the common code
We can test the compilation of the common code in isolation by changing to the
src\common directory and issuing the command:
ant compile
D:\ITSO4AD\dev\src\common>ant compile
Buildfile: build.xml
init:
[echo] Build of common started at 1644 on May 23 2001
compile:
[echo] Compiling common
[mkdir] Created dir: D:\ITSO4AD\dev\src\common\..\..\build\common
[javac] Compiling 12 source files to
D:\ITSO4AD\dev\src\common\..\..\build\common
[echo] Finished compiling common
BUILD SUCCESSFUL
init:
[echo] Build of common started at 1646 on May 23 2001
compile:
[echo] Compiling common
[echo] Finished compiling common
BUILD SUCCESSFUL
Figure 9-34 Compilation output when no source files are out of date
The JAR file must be created in the modules directory. We use the mkdir task
again to make sure the modules directory exists, and then the jar task to create
the JAR file. We specify the manifest information to include using the manifest
attribute. This is illustrated in Figure 9-35.
This task uses a new local property, client.jar.file, in the build file to define
the name of the JAR file to create. We updated the XML that defines the local
properties as shown in Figure 9-36 to include the new property.
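A sketch of the resulting package target; the manifest location and the build.dir
property are assumptions:

<target name="package" depends="compile">
  <echo>Packaging ${ant.project.name}</echo>
  <mkdir dir="${global.module.dir}"/>
  <jar jarfile="${client.jar.file}"
       basedir="${build.dir}"
       manifest="${basedir}/META-INF/MANIFEST.MF"/>
  <echo>Finished packaging ${ant.project.name}</echo>
</target>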
Figure 9-36 Updated local properties and path for the common build file
When we use Ant to execute the package target we get output similar to that
shown in Figure 9-37.
D:\ITSO4AD\dev\src\common>ant package
Buildfile: build.xml
init:
[echo] Build of common started at 1701 on May 23 2001
compile:
[echo] Compiling common
[echo] Finished compiling common
package:
[echo] Packaging common
[mkdir] Created dir: D:\ITSO4AD\dev\src\common\..\..\modules
[jar] Building jar:
D:\ITSO4AD\dev\src\common\..\..\modules\piggybank-common.jar
[echo] Finished packaging common
BUILD SUCCESSFUL
Because package depends on compile Ant first makes sure that all the compiled
code is up to date before creating the JAR file.
Figure 9-39 Local properties and path for the EJB code
The main difference from the build file for the common code is that we have
added the common code to the class path, because the EJB code uses classes
from the common code.
We specify the output file name and manifest to use with attributes, and the files
to include in the archive using nested fileset elements. The first nested element
adds the compiled .class files to the archive. The second nested element pulls in
the deployment descriptor files, but excludes the manifest file, which was
specified earlier.
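A sketch of the jar task for the EJB module, with assumed property names and
file locations:

<jar jarfile="${ejb.jar.file}"
     manifest="${basedir}/META-INF/MANIFEST.MF">
  <fileset dir="${build.dir}"/>
  <fileset dir="${basedir}">
    <include name="META-INF/**"/>
    <exclude name="META-INF/MANIFEST.MF"/>
  </fileset>
</jar>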
Figure 9-42 shows the output from Ant when it is used to package the EJBs.
init:
[echo] Build of ejb started at 1240 on June 6 2001
compile:
[echo] Compiling ejb
[echo] Finished compiling ejb
package:
[echo] Packaging ejb
[jar] Building jar:
D:\ITSO4AD\dev\src\ejb\..\..\modules\piggybank-ejb.jar
BUILD SUCCESSFUL
While it is quite acceptable to deliver EJBs and generate code in this manner,
you may prefer to generate the deployed code earlier in the cycle, at the point
where we create the EJB JAR file. The main benefits of this approach are:
► Deployment issues caused by problems in the code such as
non-conformance with the EJB specification are highlighted when the code is
built, rather than when it is deployed.
► The code generation is a relatively lengthy process—we can speed up the
code development and unit test cycle if we only generate code when
absolutely necessary, rather than every time we redeploy an EAR file that
contains EJBs.
Figure 9-43 Package and deploy targets for generating a deployed EJB JAR file
We made two new additions to the original package target. The first of these
uses the Ant built-in uptodate task to determine whether or not we need to
regenerate the deployed EJB code.
The second addition to the package target uses the built-in antcall task to invoke
a new deploy target in our build file. This new target executes only if the EJB JAR
file was not already up to date, making the decision based upon the property set
by the uptodate task in the package target.
The deploy target runs the WebSphere ejbdeploy tool on the undeployed JAR
file, generating a new JAR file containing the deployed code in a temporary
directory. It then moves the generated file from the temporary directory into the
modules directory.
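The additions might look like the following sketch. The property names are
assumptions, and the ejbdeploy argument order shown (input JAR, working
directory, output JAR) and the tool location should be checked against the
WebSphere documentation:

<target name="package" depends="compile">
  <!-- jar task as before, then check whether the deployed JAR is
       already newer than the JAR we just built -->
  <uptodate property="deployed.jar.uptodate"
            srcfile="${ejb.jar.file}"
            targetfile="${global.module.dir}/${deployed.jar.name}"/>
  <antcall target="deploy"/>
</target>

<target name="deploy" unless="deployed.jar.uptodate">
  <echo>Deploying EJB JAR file</echo>
  <exec executable="${global.was.dir}/bin/ejbdeploy.bat">
    <arg value="${ejb.jar.file}"/>
    <arg value="${deploy.working.dir}"/>
    <arg value="${deploy.working.dir}/${deployed.jar.name}"/>
  </exec>
  <move file="${deploy.working.dir}/${deployed.jar.name}"
        todir="${global.module.dir}"/>
  <echo>Finished deploying EJB JAR file</echo>
</target>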
The output from this version of the package target is shown in Figure 9-44.
Buildfile: build.xml
init:
[echo] Build of ejb started at 1712 on July 15 2001
compile:
[echo] Compiling ejb
[echo] Finished compiling ejb
package:
[echo] Packaging ejb
[jar] Building jar:
D:\itso4ad\dev\src\ejb\..\..\modules\piggybank-ejb.jar
deploy:
[echo] Deploying EJB JAR file
[exec] 0 Errors, 0 Warnings, 0 Informational Messages
[move] Moving 1 files to D:\itso4ad\dev\src\..\modules
[echo] Finished deploying EJB JAR file
BUILD SUCCESSFUL
Figure 9-44 Output from packaging the EJBs and generating deployed code
The build file is almost identical to that described for the common code in
“Building the common code” on page 210, except that the use case code
requires the common and EJB build directories on the class path in order to
compile. For this reason we do not describe it further here.
Figure 9-46 Local properties and path for the client code
Figure 9-50 Local properties and path for the Web application code
We use the following attributes and nested elements with the war task:
warfile The name of the generated archive file
webxml The location of the Web application deployment descriptor
basedir The base location of the Web content that is to be included in the
archive
manifest The file containing the manifest information to use
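A sketch of the war task with these attributes; the file locations and the nested
classes element (which places the compiled classes in WEB-INF/classes) are
assumptions, and the WebSphere binding files are omitted for brevity:

<war warfile="${webapp.war.file}"
     webxml="${basedir}/WEB-INF/web.xml"
     basedir="${basedir}/web"
     manifest="${basedir}/META-INF/MANIFEST.MF">
  <classes dir="${build.dir}"/>
</war>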
Figure 9-53 shows the output generated when Ant is used to package the Web
application WAR file.
D:\ITSO4AD\dev\src\webapp>ant package
Buildfile: build.xml
init:
[echo] Build of webapp started at 1615 on May 24 2001
compile:
[echo] Compiling webapp
[echo] Finished compiling webapp
package:
[echo] Packaging webapp
[war] Building war:
D:\ITSO4AD\dev\src\webapp\..\..\modules\piggybank-webapp.war
[echo] Finished packaging webapp
BUILD SUCCESSFUL
Total time: 1 second
If you examine the build files that come with the sample application code you can
see some of these ideas put into practice. See “Using the Web material” on
page 558.
To gain a better understanding of the standard features available with Ant consult
the Ant documentation that is available on the Web and in the Ant distribution. If
the standard features do not suit your purposes, remember that Ant provides a
mechanism for you to implement your own tasks in Java.
Automatic builds
A common practice in many development environments is the use of daily builds.
These automatic builds are usually initiated in the early hours of the morning by a
scheduling tool such as cron, which is standard on UNIX systems. Similar tools
are available for Windows environments. The daily build usually attempts to
build a complete system, based upon the latest checked-in versions of the
application source files.
Using Ant you can extend the daily build concept to perform additional tasks as
part of the automatic nightly process. For example, if a nightly build completes
successfully you could then have Ant automatically deploy the latest build into a
test environment and execute all of your unit test cases against it, then e-mail the
results to the appropriate team members.
This section also describes how we use Ant build files to create an enterprise
archive (EAR) file, complete with WebSphere binding information for a specific
environment, that can be installed directly into WebSphere without user
intervention.
Meta-data in WebSphere
We have to consider three categories of meta-data files when developing and
building WebSphere applications. The three categories are:
► J2EE deployment descriptors
► WebSphere deployment information
► Java archive (JAR) manifest information
The standard J2EE deployment descriptor for each module type is:
EJB module                    ejb-jar.xml
Web application module        web.xml
Application client module     application-client.xml
Enterprise application (EAR)  application.xml
WebSphere-specific meta-data files are listed in Table 9-2. The EJB schema and
map files are described in more detail in “Customizing CMP persistence
mapping” on page 420.
Manifest information
When we build our application modules we specify information that we want to
include in the manifest file included in the Java archive (JAR) file. All of the
PiggyBank modules use the JAR manifest to specify the class path to search in
order to find Java classes that the code in the module needs in order to deploy
and run. The contents of the manifest file for the Web application module are
shown in Figure 9-55.
Figure 9-55 Manifest information for the PiggyBank Web application module
The Class-Path entry in the manifest indicates that a class loader should search
the common and use case JAR files for classes that the Web application module
needs. The locations are specified relative to the location from which the JAR
that includes the manifest was loaded. When the PiggyBank application is
packaged into an enterprise archive (EAR) file our application modules are all
placed in the same location, the base directory of the archive file.
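The manifest itself is not reproduced here, but a Class-Path entry of the kind described looks something like this (the common and use case JAR file names are assumptions based on the PiggyBank module naming used in this chapter):

Manifest-Version: 1.0
Class-Path: piggybank-common.jar piggybank-usecase.jar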
The solution is to use AAT to create and edit your deployment descriptors, then
extract and save the descriptors in the source tree. When we want to introduce
new code we can rebuild our modules using the saved descriptors. If we have to
edit the descriptors we simply build the module using the old descriptors, load it
into AAT for editing, then extract the new descriptors from the module file saved
by the tool.
This is the mechanism we used to create all of the deployment descriptors used
by the PiggyBank application described in this chapter.
D:\ITSO4AD\dev\src\webapp>jar xvf
D:\ITSO4AD\dev\modules\piggybank-webapp.war
META-INF WEB-INF/web.xml WEB-INF/ibm-web-bnd.xmi WEB-INF/ibm-web-ext.xmi
created: META-INF/
extracted: META-INF/MANIFEST.MF
extracted: WEB-INF/ibm-web-bnd.xmi
extracted: WEB-INF/ibm-web-ext.xmi
extracted: WEB-INF/web.xml
This target starts AAT using the Ant exec task so we can edit the descriptors.
Unfortunately, AAT does not recognize file names supplied as command-line
parameters, so the assembly tool prompts us to open the appropriate temporary
file.
Once the assembly tool has terminated, Ant takes the modified EAR file and
extracts the meta-data files for all of the modules out of the EAR into a temporary
directory and puts them back into the appropriate locations in the source tree.
The Ant copy task only copies a file over another file if the time stamp on the
destination file is older than that on the source file. If the EAR file was not
modified by AAT the meta-data files are not altered and Ant does not copy them.
<delete dir="${ear.temp.dir}"/>
<echo>EAR meta-data has been copied</echo>
</target>
D:\ITSO4AD\dev\src\ear>ant edit-ear
Buildfile: build.xml
init:
[echo] Build of itso4ad started at 1015 on June 7 2001
edit-ear:
[mkdir] Created dir: D:\temp\edit-ear.20010607.1015
[echo]
Starting the WebSphere Application Assembly Tool (AAT)
When the tool starts, open and edit the file
D:\ITSO4AD\dev\src/../modules/piggybank.ear
When you have finished editing the file close AAT and the
deployment descriptors will be copied back into the source tree.
Figure 9-59 Output from the edit-ear target as it launches the assembly tool
When AAT starts, we copy the name of the temporary EAR file onto the Windows
clipboard, click on the Existing tab in the Welcome dialog, and press Ctrl-V to
paste the name into the dialog (Figure 9-60). We then click OK to load the file
into the tool, and edit the information that goes into the new deployment
descriptors.
BUILD SUCCESSFUL
The -precompileJsp option is set to false to save time when installing the
application—in a unit testing environment we are less likely to require every
single JSP in the application, so the effort spent precompiling them would be
largely wasted.
Sample output from Ant when the install target is invoked is shown in
Figure 9-63. We can safely ignore the warnings because our application does not
use security, and the datasource used by the CMP EJBs has been defined at the
container level, so the individual beans will inherit that.
D:\ITSO4AD\dev\src\ear>ant install
Buildfile: build.xml
init:
[echo] Build of ear started at 1942 on June 6 2001
install:
[echo] Installing EAR file D:\ITSO4AD\dev\src\ear/../../modules/piggybank.ear into WebSphere
AEs
[exec] IBM WebSphere Application Server Release 4, AEs
[exec] J2EE Application Installation Tool, Version 1.0
[exec] Copyright IBM Corp., 1997-2001
[exec]
[exec] The -configFile option was not specified. Using
D:\WebSphere\AppServer\config\server-cfg.xml
[exec] Loading Server Configuration from D:\WebSphere\AppServer\config\server-cfg.xml
[exec] Server Configuration Loaded Successfully
[exec] Loading D:\ITSO4AD\dev\modules\piggybank.ear
[exec] Getting Expansion Directory for EAR File
[exec] Expanding EAR File to D:\WebSphere\AppServer\installedApps\piggybank.ear
[exec] Removed EAR From Server
[exec] Installed EAR On Server
[exec] Validating Application Bindings...
[exec] CHKW4518W: No datasource has been specified for the container managed entity bean ?.
The default datasource specified for the EJB jar will be used.
[exec] CHKW4518W: No datasource has been specified for the container managed entity bean ?.
The default datasource specified for the EJB jar will be used.
[exec] CHKW6505W: A subject (user or group) has not been assigned for security role,
DenyAllRole.
The security role assignment should be made prior to running the application.
[exec] Finished validating Application Bindings.
[exec] Saving EAR File to directory
[exec] Saved EAR File to directory Successfully
[exec] Saving Server Configuration to D:\WebSphere\AppServer\config\server-cfg.xml
[exec] Backing Up Server Configuration to: D:\WebSphere\AppServer\config\server-cfg.xml~
[exec] Save Server Config Successful
[exec] JSP Pre-compile Skipped......
[exec] Installation Completed Successfully
[echo] EAR file D:\ITSO4AD\dev\src\ear/../../modules/piggybank.ear installed
BUILD SUCCESSFUL
There is also support for debugging using the IBM Distributed Debugger (this
topic is covered in Chapter 17, “Debugging the application” on page 467).
Figure 10-1 shows the Studio workbench with the Page Designer.
For each project, we must set up the properties related to the application server
where we will publish the files (Figure 10-3).
Consider for example that we are developing the Java code in VisualAge and the
Web pages in Studio. Then we can define a publishing stage called VisualAge
with publishing targets aiming at the WebSphere Test Environment folder in the
VisualAge for Java project resources. This way we can test our application in
VisualAge for Java before deploying our Java code out of the repository.
It is possible to define five more statuses, each of them associated with a color label.
Figure 10-5 shows examples of possible custom status definitions.
When a file is being edited (for example, with Page Designer), Studio marks it as
checked out, and prevents any other developer from accessing it (except in
read-only mode). Checked out is also the label assigned to files under a version
control system that are not accessible to the current user (when that user is not in
the group that can edit the file).
WebSphere Studio stores the checked out files in a default location under the
directory where the product is installed (though it is modifiable):
D:\WebSphere\Studio40\check_out\ProjectName
The files stored at this location are the files currently being edited.
Using tag libraries also makes the debugging process easier: although
VisualAge for Java supports JSP debugging (through the generated servlet
code), and we can use the Distributed Debugger in WebSphere Studio, debugging
a JSP is not as easy as debugging normal Java classes (this topic is covered in
Chapter 17, “Debugging the application” on page 467, in “A special case: how to
debug a JSP”).
Custom tag libraries also promote reusability: once a tag library is defined, we
may be able to reuse it within our application or in other applications.
Let’s go through the process of adding and using a custom tag library for the
PiggyBank application.
A tag library file is an XML file containing information about the tags: basically
their name, associated Java class, descriptive information, and attributes. Tag
processing is delegated to these associated classes. To learn more about
tag libraries, see the redbook Programming J2EE APIs with WebSphere
Advanced, SG24-6124.
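For orientation, a minimal JSP 1.1 tag library descriptor has this general shape (the tag name and handler class here are invented for illustration):

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE taglib PUBLIC "-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.1//EN"
  "http://java.sun.com/j2ee/dtds/web-jsptaglibrary_1_1.dtd">
<taglib>
  <tlibversion>1.0</tlibversion>
  <jspversion>1.1</jspversion>
  <shortname>sample</shortname>
  <info>Sample tag library</info>
  <tag>
    <name>hello</name>
    <tagclass>itso.sample.taglib.HelloTag</tagclass>
    <bodycontent>empty</bodycontent>
    <info>Writes a greeting into the page</info>
    <attribute>
      <name>name</name>
      <required>false</required>
    </attribute>
  </tag>
</taglib>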
The tag library and its supporting Java classes are typically developed in
VisualAge for Java and imported into Studio so that the Web developers can add
the tags to the JSP pages and assemble the components together to build the
Web module and publish the files.
The code inserted in the <HEAD> tag of the JSP page is shown in Figure 10-7.
<HTML>
<HEAD>
<META name="GENERATOR" content="IBM WebSphere Page Designer V4.0 for
Windows">
<META http-equiv="Content-Style-Type" content="text/css">
<TITLE>
This is a sample JSP page using a custom tag library
</TITLE>
<LINK href="/theme/Master.css" rel="stylesheet" type="text/css">
<%@ taglib uri="http://jakarta.apache.org/taglibs/utility" prefix="utils" %>
</HEAD>
<BODY>
<!-- custom tags used here -->
</BODY>
</HTML>
Figure 10-7 Including a tag library in a JSP page: the taglib directive
When assembling the Web module in a WAR file for publication in the application
server, we have to include the classes that process our custom tags in the
module, in a JAR file under \WEB-INF\lib. This is in fact the folder where the
utility classes JAR files must be placed (classes used by other components, such
as servlets, or directly by the JSPs in scriptlets).
Note: The WEB-INF directory name must be in upper case in J2EE archive files.
To access Studio files from VisualAge for Java, it is not necessary for Studio to be
running. To import Java code generated in Studio into VisualAge for Java, we can
use the import SmartGuide in the usual way, or install the WebSphere Studio
Tools so that we can send the Java source code directly to the VisualAge for Java
Workbench from Studio. Select Project -> VisualAge for Java -> Install Studio
Tools in VisualAge and the installation is automatic (VisualAge for Java must be
restarted for the changes to take effect).
If the main Java development tool is VisualAge for Java, it is a good practice to
have just the class files in the Studio project, while the Java source code only
exists in the VisualAge for Java repository. Also, the Java source code is never
published in Studio.
We can send the Java source files from Studio to VisualAge for Java by selecting
the appropriate file in the workbench, and then selecting Project -> VisualAge for
Java -> Send to VisualAge. On the first operation we are prompted to select the
VisualAge for Java project that is used to import the Java code.
In the same way, we can update Java and/or class files by selecting Project ->
VisualAge for Java -> Update from VisualAge. The files are retrieved from
VisualAge for Java and checked out.
The Studio options in VisualAge for Java are shown in Figure 10-9.
On the first request you are prompted to select the location of the Studio project
by navigating to the appropriate projectname.wao file (the Studio project file). You
can also set the Studio project at any time using the action shown in Figure 10-9.
Note that you retrieve Java source files (and they are imported), but you can only
send class files. If you want to use the interface from VisualAge for Java, then
that is where the master source files are kept. Class files can be sent to Studio
for publishing.
To test the components, if we are not using VisualAge for Java and the
WebSphere Test Environment but other tools such as the J2SDK and Apache
Tomcat, we can still proceed as in the WAS case: define a publishing target for
the server so that we can test the Web application.
This new feature of Version 4.0 can only be used when developing an application
based on the servlet 2.2 and JSP 1.1 specifications. These are the only levels
supported by WAS Version 4.0; however, because JSP 1.1 is a superset of JSP
1.0, 1.0 JSPs will also work in the Version 4.0 environment.
We now show an example of creation and deployment of a WAR file with Studio
4.0 to WebSphere Application Server 4.0, based on the PiggyBank application.
First of all, the properties of the project have to be properly configured to target
the application server version and supported specifications, as we described in
section “Structuring the project in Studio” on page 240.
After configuring the project, the server has to be set up to specify the following
information:
► Server address
► Context root (Web application Web path in the case of WAS 3.x)
That can be done by selecting the server in the Publishing View, and then Edit ->
Properties -> Publishing.
Before creating the deployment descriptor file, we configure the Web application:
► For every servlet class file added to the module, we specify a servlet
mapping.
We do this by opening the servlet properties box (right-button menu), selecting
the Publishing tab, and typing the required information (Figure 10-10).
If we are using the AutoInvoker servlet in WAS, we have to specify Web paths
for every servlet in our Web application (WAS 3.x).
► We include the taglib file under the \WEB-INF folder.
► Last, we define the Web path for the Web application by selecting the
Properties right-button menu for the server.
This can be a suitable approach when we want to unit test parts of the Web
application. For example, in an online banking application such as PiggyBank,
we might want to test only the account managing options, so we would create a
publishing stage including only the files related to account management. Then
we would create the descriptor and the WAR file and install the Web module in
the application server for testing.
WebSphere Studio creates this file automatically with the default name of
servername_web.xml. The default location to place it is in the \WEB-INF folder.
Figure 10-12 shows the resulting structure of the project in Studio’s workbench.
Once the deployment descriptor has been generated, we can generate the WAR
file. This file contains all the components in the current publishing stage. Because
it is possible to generate several deployment descriptor files (depending on the
contents of the stage), when we select Project -> Create Web Archive we have
to select the appropriate one for our purposes. In case we have not created one
yet, it is possible to do so at this point. We also select the server name to which
the Web module is going to be installed, and the save location.
The last step is to publish the newly created WAR file to the application server,
where it will be installed as a stand-alone module or as part of an enterprise
application.
Tip: It is useful to create a war publishing stage to publish the whole Web
module to the desired location in the application server (publishing through
FTP).
The desired location is typically the \installableApps directory, because this is
the folder that contains the applications available to be installed on the server.
In the case of our application, we used the WAR FTP Daemon (a shareware
tool) as an FTP server, and we configured the \installableApps folder with
permissions to read and write. We configured a user, was4ad, that could access
only this folder (for security reasons, in case we are accessing the server via FTP
for other purposes apart from publishing).
Then, after setting up the FTP server, we configured the publication in Studio for
a server with the same name (Figure 10-13).
To publish the WAR file, we select Project -> Publish Web Archive, then select
the appropriate data (Figure 10-14).
Then we can use either the Administrative Console (Advanced Edition and
Single Server Edition) or the SEAppInstall command line tool (Single Server
Edition) to install the application. For details about this process refer to
Chapter 16, “Deploying to the test environment” on page 431.
Example
In this example we use an existing JavaBean (CurrencyBean) and generate a
Web service for it (Figure 10-15).
The wizard displays the methods of the bean for your selection, and finally
displays the files that will be generated (Figure 10-16).
<isd:service xmlns:isd="http://xml.apache.org/xml-soap/deployment"
id="urn:currencybean-service" checkMustUnderstands="false">
<isd:provider type="java" scope="Application" methods="convert">
<isd:java class="CurrencyBean_ServiceService" static="false"/>
</isd:provider>
</isd:service>
<import
location="http://localhost:8080/wsdl/CurrencyBean-interface.wsdl"
namespace="http://www.currencybeanservice.com/CurrencyBean-interface">
</import>
<service
name="CurrencyBean_Service">
<documentation>IBM WSTK 2.0 generated service definition file
</documentation>
<port
binding="CurrencyBean_ServiceBinding"
name="CurrencyBean_ServicePort">
<soap:address location="http://localhost:8080/soap/servlet/rpcrouter"/>
</port>
</service>
</definitions>
The implementation WSDL file must be edited to define the correct URL for the
interface file (<import> tag) and the deployed Web service (<soap:address> tag).
Because the consumption wizard must create a connection to the Web service in
order to create the Java client proxy, you must publish the interface WSDL file
before attempting to use the implementation WSDL file.
The wizard notifies you if you have to modify the JSP to edit the parameters. You
may also have to edit the JSP to handle the output of the Web service.
When structuring the project in VisualAge for Java (as well as when using other
development tools), it is convenient to set up some conventions for packages
and code.
A generally accepted convention for package naming is to use the URL of the
company (reversed) plus some more detailed domain description. In our case, all
the packages could have been named com.ibm.itso.was4ad.pkgname[.subpkg].
We can use this technique when we have several development teams, each of
them in charge of a piece of the application (one team developing the EJBs,
another team developing the logging utilities, and so on). Setting up different user groups
for each project also helps to maintain clearer boundaries between the teams.
Solution
If we have defined different projects according to the features or components of
the application, it is useful to set up a global Solution for the whole application, as
a means to ease the global version control process. A Solution is a container for
related projects at a certain version. Solutions are defined in the Repository
Explorer window (Figure 11-2).
We can use the standard Javadoc tags, such as @author and @version, and the
two macros predefined by VisualAge: <user> and <timestamp>, which insert the
name of the Workspace owner and the timestamp of the method/type’s creation.
To generate the documentation once the code is completed, select the project or
packages for which we want to create documentation and select Document ->
Generate Javadoc from the context menu. The options available are the same as
with the standard javadoc command of the J2SDK, plus some more options
regarding the appearance of the generated Web pages (header, footer, bottom line
text).
Clients can connect to the shared repository and explore it to add new projects or
features to their workspaces. A system of permissions can be established so that
only users with administrator rights can perform delete operations. Code changes
made in the local workspaces are automatically saved to the repository.
Developing servlets
The supported servlet specification in VisualAge for Java Version 4.0 is 2.2.
There are basically three ways of developing (and testing) servlets with
VisualAge for Java:
► Hand-coding of the servlet code
► Using the Servlet SmartGuide to generate skeleton code
► Importing servlets generated by WebSphere Studio wizards or by other tools
Hand-coding servlets
Experienced Web programmers generally write servlets by hand as subclasses
of the HttpServlet class. In many cases they copy existing servlets as models
and then modify the code.
Servlet SmartGuide
The Servlet SmartGuide can generate skeleton servlet Java code and, if
provided with a JavaBean, also a skeleton HTML input page and a result JSP:
► Without using the Import JavaBean option, the SmartGuide only generates
the skeleton code for the basic servlet methods, and it is up to the developer
to complete the code as he/she wishes.
► With the Import JavaBean option, the SmartGuide creates an HTML input
page with a form that includes the specified fields of the bean, as well as a
result JSP that displays the result of the operation performed when the form
is submitted.
With the Import JavaBean option enabled, the Servlet SmartGuide output is quite
similar to the WebSphere Studio JavaBean wizard, though that wizard provides
more options, for example, the code generation style (servlet or JSP model):
► Servlet model—with this choice, the wizard creates an HTML input page, a
servlet that uses the JavaBean, a JSP that formats the result data, and a
.servlet configuration file.
► JSP model—with this choice the wizard generates an HTML input page and a
JSP that does all the processing.
When working with multiple products you have to have a well-defined process
that specifies how code is shipped between products and where the “master”
code resides.
Developing JSPs
VisualAge for Java 4.0 supports three JSP specifications: 0.91, 1.0, and 1.1.
However, VisualAge for Java is a Java IDE and is not built for developing Web
content (apart from the JSPs generated by the Servlet SmartGuide), but the
functionality provided by the WebSphere Test Environment makes VisualAge for
Java appropriate for testing and debugging purposes.
We develop our JSPs using tools such as WebSphere Studio (or writing the code
from scratch in text editors such as Notepad) and copy them to the WebSphere
Test Environment project resources folder for testing and debugging.
We can also copy JSPs to the Web application’s project resources folder so that
we can perform version control operations of all related code (Java and other).
We also recommend you make the following changes to your 1.0 EJBs, in order
to comply fully with Version 1.1 of the EJB specification:
► EJBs should be modified to use the getCallerPrincipal() and
isCallerInRole(String roleName) methods instead of the deprecated
getCallerIdentity() and isCallerInRole(Identity) methods.
► CMP EJBs should be updated to return the bean’s primary key class from
ejbCreate(...) methods, instead of void as required by the 1.0 specification.
Returning the key class enables the creation of bean-managed beans that are
subclasses of container-managed beans.
► Entity bean finder methods should be updated to define FinderException in
their throws clauses. EJB 1.1 requires that all finders define the
FinderException.
► Enterprise beans should no longer throw java.rmi.RemoteException from the
bean implementation class—this use of the exception is deprecated in EJB
1.1. RemoteException must still be defined in EJB home and remote
interfaces, as required by RMI.
The bean implementation class should throw application exceptions where
required by the business logic. Unrecoverable system-level errors and other
non-business problems should throw a javax.ejb.EJBException; this class
extends java.lang.RuntimeException and does not need to be declared in
the throws clause of a 1.1 EJB.
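As a sketch of what these changes look like in code (the bean, key class, and method names are invented for illustration and are not the PiggyBank code):

import javax.ejb.CreateException;
import javax.ejb.EJBException;
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;

// Illustrative EJB 1.1 CMP entity bean; the SampleAccountKey primary key
// class is assumed to exist.
public class SampleAccountBean implements EntityBean {
    public String accountId;   // container-managed field
    public double balance;     // container-managed field
    private EntityContext ctx;

    // EJB 1.1: ejbCreate returns the primary key class. For CMP beans the
    // container ignores the returned value, so returning null is allowed.
    public SampleAccountKey ejbCreate(String accountId) throws CreateException {
        this.accountId = accountId;
        this.balance = 0.0;
        return null;
    }
    public void ejbPostCreate(String accountId) {}

    // EJB 1.1: use isCallerInRole(String) and getCallerPrincipal() instead
    // of the deprecated Identity-based methods.
    public boolean callerIsTeller() {
        return ctx.isCallerInRole("Teller");
    }

    // Unrecoverable system-level problems are reported with EJBException,
    // a RuntimeException, instead of throwing RemoteException.
    public void applyInterest(double rate) {
        if (Double.isNaN(rate)) {
            throw new EJBException("Invalid interest rate configuration");
        }
        balance += balance * rate;
    }

    // In the home interface, EJB 1.1 requires every finder to declare
    // FinderException, for example:
    //   SampleAccount findByPrimaryKey(SampleAccountKey key)
    //       throws javax.ejb.FinderException, java.rmi.RemoteException;

    public void setEntityContext(EntityContext ctx) { this.ctx = ctx; }
    public void unsetEntityContext() { this.ctx = null; }
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbLoad() {}
    public void ejbStore() {}
    public void ejbRemove() {}
}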
See “Assembling the application” on page 389 for information about conversion
and deployment of EJBs using the Application Assembly Tool and the EJB
deployment tool, as well as information about CMP persistence mapping in
WebSphere Version 4.0.
While WAS 4.0 supports EJB 1.1, VisualAge for Java 4.0 only supports EJB 1.0.
The specification levels for servlets and JSPs are the same: servlet 2.2 and JSP
1.1. The JSP specification levels 0.91, 1.0, and 1.1 are supported in VisualAge
for Java 4.0 and the WTE can be configured to use any of them—we recommend,
however, that you keep the default 1.1 JSP level.
WTE contains the runtime environment for the application server, and it is
intended for unit testing purposes. With this tool, the developer can test his/her
code without exporting it from the VisualAge for Java repository.
Configuration
The WTE allows a single server configuration with multiple Web applications.
The configuration files for the WTE are located in the project resources directory:
D:\IBMVJava\ide\project_resources\IBM WebSphere Test Environment\
To set up the configuration for a Web application, we edit the following file:
...\IBM WebSphere Test Environment\properties\default.servlet_engine
The WTE has one Web application preconfigured, the default application
(default_app).
For example, to add a new Web application to the WTE configuration, we have to
add the following tags to the file (Figure 11-4, see bold lines).
Each of these directories contains a folder named servlets (to place the
webapp_name.webapp file and the servlets’ class files), and a web folder (to place
the JSPs, HTML pages, images).
Here is a list of the main tags we might need to include for our Web applications:
► Error page—to specify the general error page for the application.
► Servlet properties—name, Web path and initial parameters for servlets in the
Web application.
► Invoker servlet—we might want to include data for this servlet, which lets us
load classes by class name (/servlet/pkgname.class).
► JSP specification level—VisualAge for Java supports the three JSP
specifications (0.91, 1.0 and 1.1). The WTE lets us switch the specification
level by selecting the appropriate class name for the servlet that processes
the JSPs (Figure 11-5).
...
<servlet>
<name>jsp</name>
<description>JSP support servlet</description>
<!--
***
*** Replace the JSP compiler with the required specification level.
***
<?xml version="1.0"?>
<webapp>
<name>PiggyBank</name>
<description>PiggyBank Application</description>
<error-page>/error.jsp</error-page>
<servlet>
<name>ControllerServlet</name>
<description>Controller servlet for PiggyBank</description>
<code>itso.was4ad.webapp.controller.ControllerServlet</code>
<servlet-path>*.pbc</servlet-path>
<init-parameter>
<name></name>
<value></value>
</init-parameter>
<autostart>false</autostart>
</servlet>
<servlet>
<name>jsp</name>
<description>JSP support servlet</description>
<!--*** JSP 1.1 Compiler ***-->
<code>com.ibm.ivj.jsp.jasper.runtime.JspDebugServlet</code>
<init-parameter>
<name>workingDir</name>
<value>$server_root$/temp/default_app</value>
</init-parameter>
<init-parameter>
<name>jspemEnabled</name>
<value>true</value>
</init-parameter>
<init-parameter>
<name>scratchdir</name>
<value>$server_root$/temp/JSP1_1/default_app</value>
</init-parameter>
<init-parameter>
<name>keepgenerated</name>
<value>true</value>
</init-parameter>
<autostart>true</autostart>
<servlet-path>*.jsp</servlet-path>
</servlet>
</webapp>
Figure 11-6 Writing the Web application’s configuration file for WTE
The tags within <session-data> are the tags used by the WTE (the rest should
be ignored).
Let’s take now a closer look at each of the components of the WTE.
To run a Web application, we have to set up the class path for the Servlet Engine
(Figure 11-8). We select the projects containing classes used by the servlets we
are going to test. System projects required to run the Servlet Engine (for
example, the IBM WebSphere Test Environment project) are added
automatically.
The Servlet Engine must be started/restarted after any changes to the class path
or other settings are made.
The JSP and HTML pages, as well as the images and other static Web content
are placed in the directory (or subdirectory):
...\IBM WebSphere Test Environment\default_host\webappname\web
To invoke the components in the browser, we use the virtual hosts defined in the
default.servlet_engine file (see an example in Figure 11-4 on page 271):
http://localhost:8080/index.html
If we select the option Display trace messages, the console shows a detailed
output of the configuration parameters and servlets that are loaded from the
*.webapp files. When we select this option, we have to click on Apply and restart
the Servlet Engine for the changes to take effect.
JSP processing
JSPs are translated into a servlet by the JSP processor (for the specification
level that we have selected, see for example Figure 11-5 on page 272). The
default settings for the Servlet Engine cause the generated code to be imported into
the VisualAge repository under the JSP Page Compile Generated Code project.
However, the generated code is not imported if there are compilation errors in the
JSP. In this case we get error messages, but it might be difficult to figure out
where the problem is. We can try to run the code in the Scrapbook to get more
understandable error messages, or we can select the Load the generated servlet
externally option in the Servlet Engine window, so that the compiled Java classes are
stored in (when using the 1.1 specification level):
...\IBM WebSphere Test Environment\temp\JSP1_1\web_app_name\etc
Then we can import the code manually to inspect and debug it. See more about
debugging JSPs in “A special case: how to debug a JSP” on page 511.
When using the option of loading the generated servlet externally, we can also
select:
► Halt at the beginning of the service method—which acts like a breakpoint set
at the service method of the generated servlet
► Enable JSP source debugging—which brings up the VisualAge for Java
debugger with the JSP source code. We can then step through the JSP
source, but there are some restrictions, for example, we cannot step into Java
code embedded in the JSP.
DataSources and EJBs are bound to a context. The Persistent Name Server
gives access to this context to perform JNDI operations. When an object is bound
to its JNDI name, a description of the object is stored in the database specified in
the PNS properties (Figure 11-9).
The parameters that we configure before starting the Persistent Name Server are
the following:
► Bootstrap port—the port used to look up EJB homes and DataSources. The
default value of 900 is also used by the WebSphere Application Server, so
you have to use another port or stop WAS if WAS has been started on the
same machine.
► Database URL—the JDBC URL if a relational database is used for storage.
► Database driver—the JDBC driver used to access the database. The default
driver corresponds to InstantDB (a relational database simulated in files),
which is recommended for simple configurations.
► Database ID and password—used to connect to a real relational database.
► Trace level—specifies the level (high/medium/low) of the trace information
that is displayed through the console.
When the Persistent Name Server is started, it retrieves the list of DataSources
configured previously (if any). To add a new DataSource object, we have to
configure the following parameters (Figure 11-11):
► Name—the name used to perform the lookup (with the prefix jdbc/). In our
example, the JNDI lookup would be through the name jdbc/piggybank.
Note: Because VisualAge for Java includes a WAS Version 3.5.3 runtime,
the WTE only supports the use of global JNDI names, such as
jdbc/piggybank. The WTE does not support local JNDI references such as
java:comp/env/jdbc/piggybank.
See “Using JNDI” on page 326 for more information on JNDI in WAS 4.0.
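Given such a definition, code running in the WTE looks up the DataSource through its global JNDI name, along these lines (a sketch; error handling omitted):

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
...
InitialContext context = new InitialContext();
// Global JNDI name, as required by the WTE; in WAS 4.0 the same lookup
// would normally go through a local reference such as java:comp/env/jdbc/piggybank
DataSource ds = (DataSource) context.lookup("jdbc/piggybank");
Connection connection = ds.getConnection();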
Note: The list of driver classes for datasources differs between the WTE
(based on a WAS 3.5.3 runtime) and WAS Version 4.0.
More details about the usage of this feature are described in Chapter 17,
“Debugging the application” on page 467, in the section “A special case: how to
debug a JSP” on page 511.
Figure 11-13 Export options for EJBs for WebSphere Version 4.0
If we select the first option (EJB JAR), when we add our EJBs to the enterprise
application, it is necessary to convert the 1.0 file to a 1.1 file (WebSphere 4.0 no
longer supports the 1.0 specification). This process is explained in Chapter 15,
“Assembling the application” on page 389 in Creating an EJB module.
The intended reader of this chapter is more interested in the technical aspects of
the frameworks, especially hands-on implementation. It is therefore useful to
Java and Web developers, as well as to application designers who are curious to
know how the code to be implemented actually works.
See Chapter 7, “Designing with frameworks” on page 153 for introductions to the
frameworks and information about designing applications to use them.
The Struts version of our Web application completely replaces the Web
application module included in the base version of the example PiggyBank
application—the two versions of the Web module are completely
interchangeable.
We used release Version 1.0b1 of Struts to develop these examples. The binary
and source distributions can be downloaded from the Jakarta Web site:
http://jakarta.apache.org/builds/jakarta-struts/
Note: The final Struts Version 1.0 was released shortly before work on this
book was completed. Due to time constraints, however, we were unable to
test our example code with this final release.
As you read through, we advise you to select information from the sections that
follow, depending on your environment and the tools available to you.
Struts uses the JAXP APIs in the javax.xml.parsers package, which are not
provided by the standard XML parser in the IBM XML Parser for Java project
included with VisualAge for Java. Later on, when we configure the WebSphere
Test Environment (see “Setting up the WebSphere Test Environment for Struts”
on page 287) we also require a stylesheet processor. We solve both problems by
downloading and importing Xalan, a stylesheet processor from Apache that also
includes the Xerces XML parser, and is available from:
http://xml.apache.org/xalan-j/
Importing the projects was a little tricky, because some of the classes in the
Xerces JAR conflict with classes in the IBM XML Parser for Java project used by
the WebSphere Test Environment.
We then created a separate project for the rest of the Xerces classes and the
Xalan code.
Attention: Any change to any file should be checked in and published from
Studio before being tested in the WebSphere Test Environment. If we do not
publish the files, VisualAge will not pick up the changes. For more information
on the Studio check-in and publishing mechanism, see Chapter 10, “Development
using WebSphere Studio” on page 237.
The index.html file displays an HTML login form. To benefit from the Struts input
form facility, this file is changed to a JSP file. Because an HTML file is a particular
case of a JSP with no JSP tags, changing its extension to .jsp is initially the only
change we make to the file. We must also remember to update the Web
application’s welcome file list to include index.jsp so that the file name does not
have to be entered explicitly into the browser.
Finally we also import the images and themes directories from the original
PiggyBank application.
This block redirects all URIs starting with /piggybank-struts to the
PiggyBank-Struts Web application, which is configured in the configuration file:
D:\IBMVJava\ide\project_resources\IBM Websphere Test Environment\
hosts\default_hosts\piggybank-struts\servlets\piggybank-struts.webapp
The webapp file is very generic and can be generated from the default web.xml
files included in the Struts distribution.
Figure 12-2 Converting a web.xml file into a .webapp file using XSL
Make sure the web.xml file to convert is in the C:\temp directory and click OK. In
this case we want to use the example web.xml file shipped with Struts. Check the
console for any error messages. An empty console for that program
indicates a successful conversion. The converted piggybank-struts.webapp can
be found in the same C:\temp directory.
To run Xalan outside of VisualAge for Java, we recommend you use Xerces as
an XML parser. It is also possible to (export and) use the IBM XML Parser or any
other JAXP-compliant XML parser. In most versions of the Xalan distribution, a
compatible version of Xerces is included. To run the same command from the
command line copy both xalan.jar and xerces.jar files into the C:\temp\
directory, open a Command Prompt window and type the following command
line:
D:\Websphere\AppServer\java\bin\java
-cp C:\temp\xalan.jar;C:\temp\xerces.jar org.apache.xalan.xslt.Process
-IN C:\temp\web.xml
-xsl "D:\IBMVJava\ide\project_resources\IBM Websphere Test Environment\
properties\webapp.xsl"
-OUT C:\temp\struts-example.webapp
The resulting .webapp file should be put in the corresponding class path, which for
PiggyBank-Struts is:
D:\IBMVJava\ide\project_resources\IBM Websphere Test Environment\
hosts\default_hosts\piggybank-struts\servlets
We must also update the Web application Web content and remove the existing
Web application code from the source tree—the Struts code completely replaces
the basic Web application code.
These entries assume the Struts binary distribution has been extracted into
D:/jakarta-struts.
This change is required to compile the Web application code that uses Struts.
Figure 12-3 Struts updates to the Ant Web application package target
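The figure content is not reproduced here, but the kind of additions meant are, roughly, a path element so that the Struts classes are on the compilation class path, and nested elements in the war task so that struts.jar and the tag library descriptors end up inside the module (the property names and exact file list are assumptions):

<!-- Add the Struts classes to the compilation class path -->
<pathelement location="D:/jakarta-struts/lib/struts.jar"/>

<!-- Inside the war task: package struts.jar and the .tld files with the module -->
<lib dir="D:/jakarta-struts/lib" includes="struts.jar"/>
<webinf dir="D:/jakarta-struts/lib" includes="*.tld"/>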
We remove the PiggyBank controller servlet and add the Struts action servlet in
its place:
<servlet>
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
<init-param>
<param-name>application</param-name>
<param-value>PiggyBankResources</param-value>
</init-param>
<init-param>
<param-name>config</param-name>
<param-value>/WEB-INF/struts-config.xml</param-value>
</init-param>
<init-param>
<param-name>debug</param-name>
<param-value>2</param-value>
</init-param>
<init-param>
<param-name>detail</param-name>
<param-value>2</param-value>
</init-param>
<init-param>
<param-name>validate</param-name>
<param-value>true</param-value>
</init-param>
<load-on-startup>2</load-on-startup>
</servlet>
Finally we must add the Struts configuration file for our application. We start by
using the basic struts-config.xml described in “Struts configuration file” below.
We place the configuration file in the WEB-INF directory, the location referenced in
the Struts action servlet initialization parameter.
Every HTML form element can be rewritten using the Struts custom tags library.
Table 12-1 shows some mappings between the normal HTML tags and the Struts
custom tags. The Struts documentation provides a more complete and detailed
list.
HTML tag                     Struts custom tag
<FORM ...>                   <html:form ...>
<INPUT type="text" ...>      <html:text ...>
<INPUT type="submit" ...>    <html:submit ...>
</FORM>                      </html:form>
To let Struts know about this class as a form bean, edit the struts-config.xml
file and add the following declaration inside the <struts-config> tags:
<form-beans>
<form-bean name="logonForm" type="itso.was4ad.action.form.LoginForm"/>
</form-beans>
The association between the form and its form bean is done through the action
specified in the form and the associated action mapping, as explained in the
section that follows. This means that a form bean can be reused in several
similar forms.
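The form bean itself is a simple subclass of the Struts ActionForm class, roughly like this (a sketch; only the customerId property used by the login page is shown, and the package statement is omitted):

import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionMapping;

public class LoginForm extends ActionForm {
    private String customerId;

    public String getCustomerId() {
        return customerId;
    }
    public void setCustomerId(String customerId) {
        this.customerId = customerId;
    }

    // Clear the form fields before the request values are populated
    public void reset(ActionMapping mapping, HttpServletRequest request) {
        customerId = null;
    }
}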
For the Struts version of the login there is one possible request from the
index.jsp page. We must create a corresponding action class in Java by
performing these tasks:
1. Create a new package for our action classes—we name the new package
itso.was4ad.webapp.action
2. Create a new class named LoginAction in the new package
3. Create the perform method in the class, as described below
try {
// Use the DisplayCustomer use case to locate the customer info
DisplayCustomer useCase = new DisplayCustomer();
useCase.setCustomerId(((LoginForm) form).getCustomerId());
CustomerData data = (CustomerData) useCase.execute();
We can avoid the long fully-qualified class names by adding these imports to the
class source code:
import org.apache.struts.action.*;
import javax.servlet.http.*;
import java.io.IOException;
import javax.servlet.ServletException;
The method is very similar to the execute method in the LoginCommand class in
the basic PiggyBank application—it simply uses the DisplayCustomer use case
class to locate the customer information and store it in the HTTP session. No
authentication is performed.
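Putting these pieces together, and using the imports listed above, the complete method might look roughly like this (a sketch; the session attribute name and the failure handling are assumptions, while the forward names match the action mapping shown below):

public ActionForward perform(ActionMapping mapping, ActionForm form,
        HttpServletRequest request, HttpServletResponse response)
        throws IOException, ServletException {
    try {
        // Use the DisplayCustomer use case to locate the customer info
        DisplayCustomer useCase = new DisplayCustomer();
        useCase.setCustomerId(((LoginForm) form).getCustomerId());
        CustomerData data = (CustomerData) useCase.execute();
        // Keep the customer information in the HTTP session for later requests
        request.getSession().setAttribute("customer", data);
        return mapping.findForward("loginSuccessful");
    } catch (Exception e) {
        return mapping.findForward("loginNotSuccessful");
    }
}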
<action-mappings>
<action path="/login"
type="itso.was4ad.action.LoginAction"
name="loginForm"
scope="request"
validate="false"
input="/index.jsp">
<forward name="loginSuccessful" path="welcome.jsp"/>
<forward name="loginNotSuccessful" path="index.jsp"/>
</action>
<action-mappings>
When we enable validation, the ActionServlet calls the validate method of
the specified form bean before performing the action. The method can examine
the submitted values and report any validation errors back to the client.
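For example, a validate method added to the LoginForm bean sketched earlier might look like this (the check itself is an assumption; only the error key error.login.user is taken from the example, and the ActionError and ActionErrors classes come from the org.apache.struts.action package):

public ActionErrors validate(ActionMapping mapping, HttpServletRequest request) {
    ActionErrors errors = new ActionErrors();
    // Report a missing customer ID using a key defined in the resource file
    if (customerId == null || customerId.trim().length() == 0) {
        errors.add(ActionErrors.GLOBAL_ERROR, new ActionError("error.login.user"));
    }
    return errors;
}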
The return object is a collection of all the errors that have been encountered
during the validation process. If it contains one or more ActionError objects, the
ActionServlet returns the collection of error objects to the input JSP, which can
retrieve and display the errors—for instance above the form—using a very
simple Struts custom tag:
<html:errors/>
The PiggyBank login form includes this tag, enclosed in font tags to display the
errors in red:
<p>
<font color=”red”>
<html:errors/>
</font>
</p>
Figure 12-7 shows the output from the login page displayed when a user enters
an invalid user ID.
The error message displayed in the JSP is resolved using the key
error.login.user given at ActionError object creation time. Let's explain this
error message resolving process in depth.
Message facility
It is usually a good idea to not hardcode messages from the application in Java
code and JSPs, but to externalize them in plain text files instead. This can bring
several benefits:
► There is no need to recompile every time a message changes.
► There are fewer opportunities to introduce errors in the code accidentally.
► Presentation formatting can be separated from the presentation content.
► Internationalization is more straightforward (see “Internationalization” on
page 299).
► Messages can be reused, leading to better consistency and reduced
translation costs.
To access the messages from the application, the resource file must be
fully-qualified in the class path and referenced from the Web application
configuration file, as a parameter of the ActionServlet configuration:
<servlet>
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
[...]
<init-param>
<param-name>application</param-name>
<param-value>PiggyBankResources</param-value>
</init-param>
[...]
</servlet>
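The PiggyBankResources.properties file itself is a plain Java properties file. For illustration, it might contain entries along these lines (apart from error.login.user, which is used in the validation example earlier, the keys and texts are invented):

error.login.user=The customer ID you entered is not valid.
login.title=PiggyBank - Login
login.button.submit=Log in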
To put this file in the VisualAge for Java class path, click on the Resources tab,
right-click on the project and select Add -> Resource. Browse to the desired
directory and select the file or files you want to add.
When we package the code into a J2EE Web module we must remember to
include the file in the WAR archive in the WEB-INF/classes directory.
If you are using Ant to build your application, place the properties file at the base
of the source tree and add the following to the XML build file:
<!-- Pick up the resource files -->
<classes dir="${basedir}">
<include name="**/*.properties"/>
</classes>
To have the custom tag working, the following tag library must be declared in the
JSP:
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
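A message from the resource file can then be written into the page with the bean:message custom tag, for example (using one of the illustrative keys shown earlier):

<bean:message key="login.title"/>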
Internationalization
In “Message facility” on page 297, we saw how hardcoded message strings can
be removed from our code and JSPs. This leads us to a new advantage: it
becomes much easier to internationalize the application. Internationalization
(also known as I18N, because there are 18 letters between the I and the N) is the
means by which we can enable our single application for user communities that
understand different human languages.
The steps we must perform for Struts more or less follow the standard Java
internationalization technique, which is clearly explained in the Javasoft tutorial
and API documentation:
http://java.sun.com/j2se/1.3/docs/api/java/util/ResourceBundle.html
http://java.sun.com/docs/books/tutorial/i18n/index.html
Translating messages
Translated messages are put in files named PiggyBankResources_xx.properties
where xx is the ISO-639 language code. A list of the ISO-639 language codes
can be found at:
http://www.ics.uci.edu/pub/ietf/http/related/iso639.txt
For completeness, a list of the ISO-3166 country codes can be found at:
http://userpage.chemie.fu-berlin.de/diverse/doc/ISO_3166.html
Selecting a language
Web applications often place a Choose language option in the main menu, which
can appear during the entire HTML navigation. This can be easily done for our
PiggyBank Struts example in the include.jsp file:
<TD><A HREF="javascript:submitChangeLanguageForm('en', 'US')"><IMG
src="images/en.gif" width="50" height="35" border="0"></A></TD>
Struts conclusions
This chapter has briefly covered only some of the capabilities of the Struts
framework. In addition to the actions described here, the example code
described in Appendix A, “Additional material” on page 557 also implements
logout and account display actions. Beyond this we recommend you examine the
example code and documentation that comes with the Struts distribution to gain
a fuller grasp of the framework’s capabilities.
Despite the limited scope we hope we have given you an insight into the
capabilities of Struts and the ease with which applications can be developed
using the framework.
We first consider that the application has to be written from scratch, so that we
use the least of the framework's capabilities. Then we introduce the use of
additional WSBCC features and so rely on less custom code. At the end, the
application most closely resembles a typical front-end application integrated into
an existing enterprise environment.
Create an XML directory in the piggybank-wsbcc directory and insert six files:
► dse.ini
► dseoper.xml
► dsectxt.xml
► dsedata.xml
► dsefmts.xml
► dsesrvc.xml
These files are the WSBCC configuration files. Alternatively, we suggest not
starting from blank files but using the final sample files provided with the
redbook. This jump-starts the first tests. Throughout this chapter, we show
and explain the major concepts behind the code.
WTE setup
To test PiggyBank-WSBCC along with the existing Web applications, the
WebSphere Test Environment has to be configured to support a new Web
application. Edit the file:
D:\IBMVJava\ide\project_resources\IBM Websphere Test Environment\
properties\default.servlet_engine file
and add the following PiggyBank-WSBCC Web application declaration between the
<websphere-servlet-host name="default_host"> tags:
<websphere-webgroup name="piggybank-wsbcc">
<description>PiggyBank application using WSBCC</description>
<document-root>$approot$</document-root>
<classpath>$approot$</classpath>
<root-uri>/piggybank-wsbcc</root-uri>
<auto-reload enabled="true" polling-interval="3000"/>
<shared-context>false</shared-context>
</websphere-webgroup>
Please note this configuration file is temporary and will be enhanced in the next
sections.
initialize method
The initialize method basically calls the necessary framework code to have it
initialized properly from the specified .ini file, the path of which is given as a
parameter:
private void initialize(String iniFileName) throws Exception {
Context.reset();
HandlerRegistry.resetInstance();
// Read data from .ini file
Settings.reset(iniFileName);
Settings.initializeExternalizers(Settings.MEMORY);
// Create the initial context in the server
Context context = new Context("globalContext");
// Initialize the client-server service, required for session management
((CSServerService)context.getService("CSServer")).initiateServer();
}
init method
The init method overrides the standard J2EE API method to initialize a servlet.
It takes the .ini file name from the servlet engine Web application configuration
and calls the initialize method.
In total, this method gets three parameters that usually take these values,
according to the runtime environment (Table 12-2).
The iniFile can have any name and can be put in any place. Just make sure it is
not accessible through the Web. The init method code looks like this:
public void init(ServletConfig sc) {
try {
super.init(sc);
String initStart = getInitParameter("initStart");
if (initStart != null && initStart.equals("false")) {
// Do nothing: the user doesn't want to initialize the
// environment in the Application Server's startup
} else {
// set the HTTP restart preference
this.acceptHttpRestart =
(new Boolean(getInitParameter
("acceptHttpRestart"))).booleanValue();
// Get the path of the server's dse.ini file
String path = getInitParameter("iniFile");
if (path == null) {
path = this.defaultIniFileName;
} else {
this.defaultIniFileName = path;
}
//only try to initialize when the .ini file exists
//otherwise trust initialize() to be called from
//the doGet() method
if (new java.io.File(path).exists()) {
initialize(path);
}
log("StartServerServlet initialized properly");
}
} catch (Exception e) {
log("Exception in StartServerServlet.init(): " + e);
}
}
Default values are provided to lighten the URL while keeping a flexible entry
point. This is especially useful when accessing the servlet through an HTTP GET
method, where the URL parameters have to be encoded.
To have URL parameters encoded for the GET method, it is very convenient to use the
VisualAge for Java scrapbook:
1. In the VisualAge for Java Workbench menu, select Window -> Scrapbook.
2. Type: java.net.URLEncoder.encode("URL Test");
3. Select all this code by pressing CTRL+A or by selecting Edit -> Select All in
the scrapbook menu.
4. Inspect the result by pressing CTRL+Q or by selecting Edit -> Inspect in the
scrapbook menu.
5. You can now copy and paste the encoded text in the inspector value window
(Figure 12-11).
Configuration
The servlet we just built has to be declared in the Web application configuration:
<servlet>
<name>startServerServlet</name>
<code>itso.was4ad.wsbcc.StartServerServlet</code>
<autostart>true</autostart>
<servlet-path>/startServer</servlet-path>
<init-parameter>
<name>acceptHttpRestart</name> <value>true</value>
</init-parameter>
<init-parameter>
<name>initStart</name> <value>true</value>
</init-parameter>
<init-parameter>
<name>iniFile</name>
<value>D:\\IBMVJava\\ide\\project_resources\\
IBM WebSphere Test Environment\\hosts\\default_host\\
piggybank-wsbcc\\XML\\dse.ini</value>
</init-parameter>
</servlet>
The URL to restart the WSBCC server with default values is therefore:
http://localhost:8080/piggybank-wsbcc/startServer
(The figure here shows a CommandOperation class, derived from the WSBCC DSEServerOperation class, with a replyPage attribute.)
The actual forward to the reply page will be performed by WSBCC as long as the
dse_replyPage attribute is set to the corresponding value in the standard list of
the HTTP request attribute. This can be done in the execute method, which has
to be overridden by any user-defined operation:
public void execute() throws Exception {
super.execute();
setValueAt(com.ibm.dse.cs.html.HtmlConstants.REPLYPAGE,
this.replyPage);
}
Note that the dse_replyPage parameter name is not actually hardcoded but is
referenced instead by the REPLYPAGE constant in the HtmlConstants framework
class.
This declares the WSBCC ID of the operation, the class that implements it, the
operation context that is available to it and the reply page that should be sent
back to the browser when the operation flow is finished.
Do the same for all the operations and include everything between
<dseoper.xml> tags in the dseoper.xml file. Check as usual that the standard <?xml
version="1.0"?> XML starting tag is present.
This copies the XML attribute to the Java attribute every time such an operation
object is created by WSBCC. This functionality is automatically inherited by all
the child operation classes, which can then externalize their inherited replyPage
field. An obvious advantage here is that navigation and page naming is controlled
in XML files instead of to-be-compiled Java code.
In our example, the login operation will put the customer information into the
session context like the initial load did for PiggyBank.
Table 12-3 and Table 12-4 show some typical (invented) host service
documentation specifying input and output message formats.
The request strings are typically formed by juxtaposing padded values like this:
00001~~~~~101
Response strings coming back from a host system are usually marked by
delimiter characters separating the values (Table 12-5 and Table 12-6).
Table 12-5 customerInfo answer (single occurrence)
Field Delimiter Value
customerId \ 101\
number \ 1\
balance \ 3200.00\
In “Defining formats” on page 320 we explain how WSBCC can easily handle
these formats and many others.
Behind the scenes (Figure 12-14), the simulated host system will actually provide
the required service using the working PiggyBank application we wrote in the
previous chapters.
(Figure 12-14 shows the browser talking to PiggyBank-WSBCC, which uses the original PiggyBank application and its database as the simulated "host" system, while the straight PiggyBank path remains available to the browser.)
We consider that this service is shared by the entire application, so we put it in
the global context in the dsectxt.xml file:
<context id="globalContext" type="Global" parent="nil">
...
<refService refId="hostSystem" alias="hostSystem" type="host"/>
...
</context>
In “Dealing with contexts” on page 318 we explain that this creates only one
instance of the itso.was4ad.wsbcc.HostSystem class in the unique global
context. Therefore its execute method must be thread-safe, which can easily be
achieved by using the standard Java single-semaphore facility: declare the
method as synchronized.
Extra externalization code has to be provided for the serviceId attribute in the
initializeFrom method, similar to the code written for the replyPage attribute:
public Object initializeFrom(Tag aTag)
throws java.io.IOException, DSEException {
super.initializeFrom(aTag);
com.ibm.dse.base.Vector attributes = aTag.getAttrList();
for (int i = 0; i < attributes.size(); i++) {
TagAttribute attribute = (TagAttribute) attributes.elementAt(i);
if (attribute.getName().equals("serviceId")) {
this.serviceId = (String) attribute.getValue();
}
}
return this;
}
For completeness, here is a list of the services supported by the host simulator:
Actually, the term context speaks for itself. In WSBCC, a context is modeled as a
group of data elements, which are illustrated in Figure 7-8 on page 172.
Figure 7-6 on page 169 shows that a context is shared all along the framework
flow and is available to all of its components.
To put and retrieve data in a context, get and set methods are provided by
WSBCC (getValueAt and setValueAt in the com.ibm.dse.base.Context class).
Figure 7-7 on page 171 and Figure 12-15 illustrate an important property of the
WSBCC contexts: they can be chained.
Figure 12-15 shows that any unsatisfied get/set method call on a context is
passed to the parent context, and so on, up to the root context (having a nil
parent).
To avoid exceptions when a data name whose value is to be set is not found in the
hierarchy, WSBCC provides a dynamic facility on the contexts: the set call is
passed back down, toward the leaf context, until a dynamic KeyedCollection is
found, and an appropriate new data element is created in it to hold the value. This
feature should be used wisely to avoid growing the global (unique to the application)
and session (unique to a user session) contexts. In practice, we recommend
providing a generic dynamic KeyedCollection to the operation contexts and
leaving the upper contexts non-dynamic.
<?xml version="1.0"?>
<dsedata.xml>
  <kColl id="globalData" dynamic="false">
  </kColl>
  <kColl id="customerData" dynamic="false">
    <field id="customerId" />
    <field id="customerName" />
  </kColl>
  <kColl id="genericDynamicData" dynamic="true">
  </kColl>
</dsedata.xml>
Defining formats
WSBCC provides a very complete and customizable set of components to
externalize the context format and unformat processes that are shown in
Figure 12-16.
(Figure 12-16 shows a context, containing for example serviceId 00002 and customerId 1, being formatted into a string, and the string being unformatted back into the context.)
The id attribute in the fmtDef tag is used to reference the format definition, for
instance from the dseoper.xml file as described in “Generic WSBCC operations” on
page 316.
The two instances we have used, and many others, are described in the WSBCC
product documentation. Their respective uses in the format process are obvious.
In the unformat process, the decorator is used to delimit the part of the
string to be unformatted into the decorated data reference.
...
<fixedLength length="5">
► The delimiter decorator, which cuts the string before a specified delimiter
character (Figure 12-18).
A B C D \ ...
<delim delimChar="\">
The link between the JSP and the framework is done through the use of a
so-called utb bean. WSBCC provides a default bean:
com.ibm.dse.cs.html.DSEJspContextServices
We recommend extending the default bean (Figure 12-19) with three basic
features, so that a JSP can access the context without any knowledge of the
WSBCC classes:
► Get a string value from a KeyedCollection, given a data name
► Get a string value from an IndexedCollection, given an index and a data
name
► Get the size of an IndexedCollection
(Figure 12-19 shows PiggyBankJspContextServices extending the default DSEJspContextServices bean.)
Before such a bean can be used in a JSP, it must be initialized with the standard
JSP request variable representing the HTTP request object, because that is where
WSBCC actually stores the necessary information; the bean retrieves it through a
call to the default utb.initialize method.
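As an illustration only, a JSP using such an extended bean might contain something like the following. The package name and the getString accessor are assumptions for the first feature listed above, not part of the WSBCC API:
<jsp:useBean id="utb" class="itso.was4ad.wsbcc.PiggyBankJspContextServices"
             scope="request" />
<% utb.initialize(request); %>
Customer: <%= utb.getString("customerName") %>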
Attention: As in any JSP, special care must be taken to catch possible
exceptions, which otherwise break the output process and invoke the servlet
engine's error reporter:
► The default WebSphere error reporter displays the stack trace, so this
should really be avoided in a production environment.
► Although it is possible to write a less verbose error reporter, another
solution is to add a standard JSP tag to specify an error page; that
works as long as the JSP output has not been flushed.
► Keep in mind that the <jsp:include> directive always flushes the output.
Obtaining an InitialContext
The J2EE specification recommends that code accessing JNDI obtain a
reference to a JNDI InitialContext object using the default constructor, with no
arguments. Furthermore, the specification requires that the container provide an
environment in which a valid InitialContext will be obtained by using the
default constructor. See Section 6.9 “Java Naming and Directory Interface (JNDI)
1.2 Requirements” in the Java 2 Platform Enterprise Edition Specification, V1.2;
and Section 18.2.1.3 “JNDI 1.2 requirements” in the Enterprise JavaBeans
Specification, V1.1.
WAS Version 4.0 and Version 3.5 both fulfil this requirement, meaning that an
InitialContext can be simply obtained using the code:
import javax.naming.InitialContext;
...
InitialContext context = new InitialContext();
This code will function correctly in EJBs, servlets, JSPs and application clients
running in the appropriate WebSphere container.
Earlier versions of WebSphere required you to specify the initial context factory
class and the naming service provider URL in a Hashtable or Properties object
to the InitialContext constructor. This approach will still work, however you
should be aware that the factory class has changed in Version 4.0:
com.ibm.ejs.ns.jndi.CNInitialContextFactory <=== Version 2/3
com.ibm.websphere.naming.WsnInitialContextFactory <=== Version 4
The factory class supported by earlier versions of WebSphere is still provided for
backwards compatibility with old code, however its use has been
deprecated—the internal implementation of the earlier factory class simply uses
the new factory.
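For reference, a minimal sketch of the older, explicit approach using the Version 4.0 factory class might look like the following; the bootstrap host and port are assumptions that depend on your configuration:
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
...
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.ibm.websphere.naming.WsnInitialContextFactory");
env.put(Context.PROVIDER_URL, "iiop://localhost:900"); // assumed bootstrap host and port
InitialContext context = new InitialContext(env);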
Version 4.0 of WAS, in line with the J2EE specification, introduces the concept of
a local JNDI namespace. Each Web application, client application and individual
EJB has its own local JNDI namespace that the component accesses by
performing lookups with names that begin java:comp/env.
When a J2EE module is created each component must define in its deployment
descriptor all the resources that it expects to find in the local JNDI namespace.
These resources may include EJB homes, data sources, mail providers, and
general configuration information the component expects to find in its
environment. We can enter this information manually into the deployment
descriptor XML, or use the WebSphere Application Assembly Tool (AAT) GUI.
At runtime, when a client performs a JNDI lookup in its local JNDI namespace,
the container uses the information supplied when the application was installed to
map the local name understood by the module to the global name which
identifies where the component is actually located. This is illustrated in
Figure 13-2, where the two client modules locate the same EJB home using two
different local JNDI names.
lookup("java:comp/env/ejb/Account") lookup("java:comp/env/ejb/PiggyBank/Account")
This extra level of indirection at the JNDI level makes it much easier to assemble
applications from components created by multiple providers, because it
eliminates naming conflicts between components and allows each component’s
resources to be configured in a consistent manner without having to know
anything about the component’s internal implementation.
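As a sketch, a client component might resolve its local EJB reference along these lines; the reference name ejb/Account matches Figure 13-2, and the narrow call is required for RMI-IIOP:
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
...
InitialContext context = new InitialContext();
Object obj = context.lookup("java:comp/env/ejb/Account");
AccountHome home =
    (AccountHome) PortableRemoteObject.narrow(obj, AccountHome.class);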
This feature is also useful for allowing multiple versions of the same components
to run independently in the same WebSphere cluster—we describe in detail how
to manage this in “Managing application versions” on page 371.
With the introduction of Version 4.0 of the application server, we find ourselves in a
position where we must re-examine this best practice and, in certain
circumstances at least, caution against its use.
First, let us consider the caching singleton helper class. With the introduction of a
local JNDI namespace, just as multiple components may use different local JNDI
names to refer to the same resource, two components may also use the same
local JNDI name to refer to different resources. Under these circumstances use
of a caching singleton to perform the JNDI lookups will result in a race condition,
whereby only the first component to perform the lookup will obtain the correct
resource. This situation will clearly result in application failures. We strongly
caution you against making such an error.
Message logging
In this section we outline the reasons for establishing a coherent strategy for
managing messages logged by an application. We use the PiggyBank
application code to demonstrate how flexible logging capabilities can be
incorporated into an application, leveraging two existing logging systems; the
WebSphere application server trace facility, and Log4J, an open source logging
mechanism from the Apache Jakarta project.
In our experience, the limitations of not having a strategy are usually not exposed
until an application is being handed over from development to the team that will
administer it in production, which is far too late to effect major changes. In the early
stages of the hand over, administrators investigating problems find themselves
searching through different files and trying to understand whether a message
they find is relevant or not to the problem they are trying to solve. Eventually they
call for assistance from a developer, who then decides to insert some more
messages in order to narrow down the problem. When the new build with the
extra messages arrives, the cycle starts again.
The complete code for both implementations of our log wrapper class can be
found in “Using the Web material” on page 558.
You may decide this set of types is too limited for your own project. Most logging
frameworks provide a larger number of types, and some allow you to define your
own. Some frameworks also define specific types for particular event classes,
such as entry and exit from methods.
Whatever conclusions you come to, take care not to overburden developers by
forcing them to insert too many logging statements into the code. Make sure that
for any event the appropriate choice of message type is clearly documented and
understood by all developers. This is essential to having consistent logging
behavior in your application.
For each of our message types our wrapper class defines two methods—one
that accepts a single Object parameter, and another that takes two
parameters—an Object and an Exception.
► The Object parameter will be converted to a String when the message is
logged using its toString method. Object is used rather than String to allow
the expense of string conversion to be delayed until absolutely necessary.
► The Exception parameter, if supplied, causes the stack trace from an
exception to be included in the log output. This is a relatively expensive
operation, so our policy encourages it be used carefully.
Some messages may require significant overhead before they are submitted to
the log wrapper, for example, if a developer wants to log the contents of a
data-only object. The wrapper provides methods that allow developers to check
whether logging is enabled for the debug and information message types, the two
types that will generate the most input and are also likely to be disabled in a
production system.
package itso.was4ad.helpers;

public class LogHelper {
    /**
     * Creates a new LogHelper instance for a component
     */
    public LogHelper(Class component) {}
    /**
     * Logs a debug message
     */
    public void debug(Object o) {}
    /**
     * Logs a debug message including stack trace from an exception
     */
    public void debug(Object o, Exception e) {}
    /**
     * Logs an error message
     */
    public void error(Object o) {}
    /**
     * Logs an error message including stack trace from an exception
     */
    public void error(Object o, Exception e) {}
    /**
     * Logs an informational message
     */
    public void info(Object o) {}
    /**
     * Logs an informational message including stack trace from an exception
     */
    public void info(Object o, Exception e) {}
    /**
     * Logs a warning message
     */
    public void warn(Object o) {}
    /**
     * Logs a warning message including stack trace from an exception
     */
    public void warn(Object o, Exception e) {}
    /**
     * Returns true if debug level logging is enabled for this component
     */
    public boolean isDebugEnabled() {}
    /**
     * Returns true if info level logging is enabled for this component
     */
    public boolean isInfoEnabled() {}
}
We recommend that you choose your logging API and policy before you develop
any other code—even if you just start with an empty or trivial implementation, it is
much easier if you log consistently from the start.
We then create the wrapper instance and save it in a static member. The
following example is taken from the AccountBean class that implements the
PiggyBank Account EJB:
private static final LogHelper LOG = new LogHelper(AccountBean.class);
LOG can be declared as final since it will not be altered once set.
Logging messages
Application code uses the static LOG object to log messages. Figure 13-4 shows
how the code for the transfer method of the PiggyBank AccountManager EJB
makes use of the log wrapper to log various messages.
The very first debug message in the method builds a message that reports the
message signature, complete with parameters. Because this involves string
concatenation, which is a relatively expensive operation, we check to see if
debug messages are enabled before building the message.
The other debug messages are simple strings that will be optimized to constants
by the compiler, so we can rely on the code within the logging framework to
check the logging level for us.
if (LOG.isDebugEnabled()) {
    LOG.debug("transfer(" + debitID + ", " + creditID + ", " + amount + ")");
}
try {
    // Locate the accounts
    LOG.debug("Looking up home");
    AccountHome home =
        (AccountHome) HomeHelper.getHome(ACCOUNT_HOME, AccountHome.class);
    LOG.debug("Locating debit account");
    AccountKey key = new AccountKey(debitID);
    Account debitAccount = home.findByPrimaryKey(key);
    LOG.debug("Locating credit account");
    key = new AccountKey(creditID);
    Account creditAccount = home.findByPrimaryKey(key);
If you are using VisualAge for Java, you can speed up the process of
inserting logging code by defining macros. VisualAge for Java macros allow you
to define arbitrary text that can be inserted into source code using the code assist
feature that is activated by pressing Ctrl-Space.
To create a new macro in VisualAge select Window -> Options, then in the
Options window expand Coding and select Macros. Figure 13-5 shows the
VisualAge for Java Options window being used to edit a macro called init that
inserts code that initializes logging for a component.
To insert this macro into your code, type init then press Ctrl-Space to open the
code completion window (Figure 13-6).
Figure 13-6 Using code completion to insert a macro in VisualAge for Java
When you select the macro from the top of the list the macro code is inserted and
the cursor is moved to the appropriate location to enter the new class name
(Figure 13-7).
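Figure 13-5 is not reproduced here, but based on the LOG declaration shown earlier, the body of the init macro is presumably a single line, with <|> marking where the cursor lands so the class name can be typed:
private static final LogHelper LOG = new LogHelper(<|>.class);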
In addition to the init macro, we also created macros for each of the message
types, and an ifdebug macro that inserts the code to log a debug message only if
debug messages are enabled. The code for the ifdebug macro looks like this:
if (LOG.isDebugEnabled()) {
    LOG.debug("<|>");
}
Other frameworks you may want to investigate include the Logging Toolkit for
Java (also known as JLog) from IBM, and Trace.Java, another open source
package.
The Logging Toolkit for Java is available from the IBM AlphaWorks web site:
http://alphaworks.ibm.com/
You should also be aware that there are plans to introduce logging functionality
into the core Java 2 API. The proposed API is described in Java Specification
Request (JSR) 47. At the time of writing this specification has completed the
public review stage and looks likely to be included in Version 1.4 of the Java 2
SDK. The API will be implemented in the java.util.logging package. For up to
date information on the status of this specification request check the Java
Community Process Web site at:
http://jcp.org/
If you really do find that none of the frameworks meets your requirements as-is, it
will probably be easier to implement your needs as an extension to an existing
framework such as Log4J. If you feel inclined to do so you can then submit your
extension to the community and have other developers help maintain it with you.
The JRas facility is powerful and flexible, and uses a ring buffer that can be
dumped on command to store the most recently recorded events. The level of
tracing and the components to be traced can be specified either when a process
starts, or dynamically using the command line or GUI tools shipped with
WebSphere. JRas also provides internationalization support for logged
messages—the static text for messages are stored in text files that can be
translated into different languages and the correct language version chosen at
runtime.
To locate the JRas documentation navigate to the section “Using the JRas
Message Logging and Trace Facility” in the master table of contents. The
InfoCenter also includes JRas documentation in PDF format, and API
documentation generated by the javadoc tool.
The decisive advantage that would lead you to use WebSphere trace to log
application events is that messages from the application are fully integrated with
messages from the application server. You use the same tools to control trace
information from both sources, and the messages are collated in the same place.
This integration can also assist with debugging, particularly if you are concerned
about how your application interacts with the application server, because
messages from your application and WebSphere are interleaved in the same
location, in the correct sequence and using the same message format.
The first package contains the classes and interfaces that our logging code will
use to log messages. The second package contains the singleton class
com.ibm.websphere.ras.Manager. We use this class to create instances of the
WebSphere-specific JRas implementation classes.
JRas provides a logging class for each of these two categories. Our debug
messages fall into the JRas trace category, and our other message types into the
message category, so our log wrapper has to manage an instance of the JRas
trace logger for each of our components as well as an instance of the message
logger class.
JRas supports a large range of different trace message types defined in the
RASITraceEvent interface, whereas our simple log wrapper has only one. We
map all our debug messages to the JRas trace type TYPE_MISC_DATA.
The JRas message logger on the other hand defines three types of messages in
the RASIMessageEvent interface—TYPE_INFO, TYPE_WARN and TYPE_ERR. These
types can be mapped directly to the info, warning and error types provided by our
log wrapper.
import com.ibm.ras.*;

/**
 * This class provides a log and trace facility to the PiggyBank
 * application. It is implemented as a wrapper around the
 * WebSphere JRas facility, to enable the underlying logging
 * framework to be changed without rewriting application code.
 */
public class LogHelper {
    // Instance variables
    private RASMessageLogger ml = null;
    private RASTraceLogger tl = null;
    private String className = null;
    private String packageName = null;
    // Statics
    private static boolean initialized = false;
    private static com.ibm.websphere.ras.Manager manager = null;
    // Constants
    private static final String ORGANIZATION = "PiggyBank Corporation";
    private static final String PRODUCT = "PiggyBank Application";
    private static final String DEFAULT_PACKAGE = "Default package";
}
The class defines two static variables; the first is a flag to indicate whether the
logging system has been initialized, the second is a reference to the WebSphere
singleton Manager class that we use to create JRas message and trace loggers.
Finally, we declare some string constants that are used when we register
components with JRas.
First of all we make sure that we have initialized the log wrapper correctly. We
then use the WebSphere JRas manager singleton to create a message and a
trace logger for our component. We use constants to define the organization and
product names for the loggers, and set the loggers’ component name, specified
in the third parameter, to the package name of the class. The complete code for
the constructor is shown in Figure 13-9.
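Figure 13-9 is not shown here; the following is only a sketch of what such a constructor might do, assuming the Manager singleton's createRASMessageLogger and createRASTraceLogger factory methods take the organization, product, component, and class names in that order:
public LogHelper(Class component) {
    super();
    // Make sure the JRas manager singleton has been obtained
    if (!initialized) {
        init();
    }
    // Derive the class and package names used to register the loggers
    className = component.getName();
    int dot = className.lastIndexOf('.');
    packageName = (dot > 0) ? className.substring(0, dot) : DEFAULT_PACKAGE;
    // The third parameter is the component name, set to the package name
    ml = manager.createRASMessageLogger(ORGANIZATION, PRODUCT, packageName, className);
    tl = manager.createRASTraceLogger(ORGANIZATION, PRODUCT, packageName, className);
}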
Initializing JRas
Our log wrapper code performs initialization steps in the init method
(Figure 13-10).
/**
 * Initialize the JRas logging system
 */
private static synchronized void init() {
    // Safeguard against race condition
    if (!initialized) {
        // Get a reference to the manager singleton
        manager = com.ibm.websphere.ras.Manager.getManager();
        initialized = true;
    }
}
The JRas version of the wrapper simply obtains and saves a reference to the
WebSphere JRas Manager class.
/**
 * Logs a debug message
 * @param o java.lang.Object The message to be written to the log
 */
public void debug(Object o) {
    if (isDebugEnabled()) {
        tl.trace(RASITraceEvent.TYPE_MISC_DATA, className, getCallingMethod(),
                 o.toString());
    }
}
There is an alternative method that allows the object logging the message to be
specified rather than the class name—this would be useful during debugging in
order to identify on which instance of a class a method is being invoked. Due to
the architecture of our log wrapper we do not have this information available,
however—if we had designed our log wrapper with JRas in mind, we may have
chosen to include this information as a parameter.
This method parses the stack trace generated by creating a new instance of the
Throwable class in order to determine the calling method’s name. This is rather
clumsy and inefficient, so we wrap the entire trace call in an if statement that
determines whether debug messages are enabled.
Figure 13-12 Determining the name of the method logging the message
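The code in Figure 13-12 is not reproduced here; a rough, purely illustrative sketch of such a method, using only standard Java available at the 1.3 level, might look like this:
private String getCallingMethod() {
    // Capture the current call stack by printing a new Throwable to a string
    java.io.StringWriter sw = new java.io.StringWriter();
    new Throwable().printStackTrace(new java.io.PrintWriter(sw, true));
    // Walk the stack frames and skip those belonging to the log wrapper itself
    java.util.StringTokenizer lines =
        new java.util.StringTokenizer(sw.toString(), "\r\n");
    lines.nextToken(); // skip the "java.lang.Throwable" header line
    String frame = "";
    while (lines.hasMoreTokens()) {
        frame = lines.nextToken().trim();
        if (frame.indexOf("LogHelper") == -1) {
            break; // first frame outside the wrapper is the caller
        }
    }
    // A frame looks like: "at itso.was4ad.ejb.AccountBean.transfer(AccountBean.java:42)"
    int start = frame.indexOf("at ");
    int end = frame.indexOf('(');
    if (start == -1 || end == -1) {
        return "unknown";
    }
    String fullName = frame.substring(start + 3, end);
    // Return just the method name after the last '.'
    return fullName.substring(fullName.lastIndexOf('.') + 1);
}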
The code for the other three simple logging methods is basically the same. Each
method invokes a textMessage method on the JRas RASMessageLogger managed
by the log wrapper, specifying the message type according to the mappings we
described earlier.
The code for the info method is shown in Figure 13-13; the warn and error
methods follow the same pattern, except that they do not check to see if the
message type is enabled.
/**
 * Logs an informational message
 * @param o java.lang.Object The message to be written to the log
 */
public void info(Object o) {
    if (isInfoEnabled()) {
        ml.textMessage(RASIMessageEvent.TYPE_INFO, className, getCallingMethod(),
                       o.toString());
    }
}

/**
 * Logs an informational message including stack trace from an exception
 * @param o java.lang.Object The message to be written to the log
 * @param e java.lang.Exception The exception
 */
public void info(Object o, Exception e) {
    if (isInfoEnabled()) {
        ml.textMessage(RASIMessageEvent.TYPE_INFO, className, getCallingMethod(),
                       o.toString());
        ml.exception(RASIMessageEvent.TYPE_INFO, className, getCallingMethod(), e);
    }
}
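For illustration, and following the same pattern with the mappings described earlier, the error method that accepts an exception presumably looks like this (per our policy it performs no enabled check):
/**
 * Logs an error message including stack trace from an exception
 * @param o java.lang.Object The message to be written to the log
 * @param e java.lang.Exception The exception
 */
public void error(Object o, Exception e) {
    ml.textMessage(RASIMessageEvent.TYPE_ERR, className, getCallingMethod(),
                   o.toString());
    ml.exception(RASIMessageEvent.TYPE_ERR, className, getCallingMethod(), e);
}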
/**
 * Returns true if debug level logging is enabled for this component
 * @return boolean
 */
public boolean isDebugEnabled() {
    return tl.isLoggable(RASITraceEvent.TYPE_MISC_DATA);
}

/**
 * Returns true if info level logging is enabled for this component
 * @return boolean
 */
public boolean isInfoEnabled() {
    return ml.isLoggable(RASIMessageEvent.TYPE_INFO);
}
If you are developing code using VisualAge for Java you will find that the JRas
code is already present in your workspace if you add the EJB development
environment feature, so no further effort is required to build your code. If you are
building your code outside of VisualAge, using the Java SDK or another IDE, you
have to make sure that ras.jar is in the class path when you compile the log
wrapper code.
Because the trace facility is used by the WebSphere runtime the relevant classes
are already available during deployment and at runtime, so you do not have to
include the ras.jar file in any other class path.
Figure 13-16 shows part of the console window displaying messages logged
using the info method of our class by a PiggyBank servlet on initialization.
To modify the trace settings of a running application server, open the Trace dialog by selecting the appropriate application server in
the console’s tree view and selecting Trace from the pop-up menu
(Figure 13-17).
Figure 13-17 Opening the trace dialog for a running application server
The Trace dialog lists the application components alongside the WebSphere
components. You can select an individual component or part or all of the
component hierarchy and modify the message types that will be logged
(Figure 13-18).
Color coded boxes in the dialog indicate components for which message types
have been activated. Once you have chosen the components and the message
types you want them to log, click the OK button to enable the new settings.
The Trace dialog also allows you to manage the application server’s ring buffer,
which maintains a circular log of the most recently recorded messages in
memory:
► To modify the size of the ring buffer enter the new size and click OK
► To dump the current contents of the ring buffer to a file, enter the file name
and click Dump
Figure 13-19 shows an extract from a dumped ring buffer that shows debug
messages logged by the PiggyBank code when the transfer method of the
AccountManager EJB is invoked with an invalid account number.
Figure 13-19 Application debug messages from the WebSphere ring buffer
The warning and audit messages from this trace are also logged to the
administrator’s console window (Figure 13-20).
To configure the trace settings that apply when an application server starts, open the Trace Service dialog (Figure 13-21). Select the application
server in the console’s tree view, then select Properties from the pop-up menu. In
the application server Properties dialog select the Services tab, then select the
Trace Service, and click the Edit Properties button.
Figure 13-21 shows the Trace Service dialog being used to edit the initial trace
settings for the PiggyBank application server—debug messages from the entire
application are to be sent to the file D:\temp\debug.txt. The format of the trace
specification is described in full in the WebSphere documentation. To confirm the
changes click the OK button.
Use the WebSphere DrAdmin command to administer WebSphere trace from the
command line. This command communicates directly with a thread that runs in
each WebSphere process that is dedicated to servicing the trace facility. The
various command options are shown in Figure 13-22.
Options:
-help [Show this help message]
-serverHost <Server host name>
-server <Server name>
-defaultConfiguration [Use default configuration file]
-configurationFile <configuration file>
-serverPort <Server port number>
-testConnection [Test Connection]
-testVersions [Test Connection and Versions]
-retrieveTrace [Retrieve the trace specification]
-retrieveComponents [Retrieve the trace components]
-setTrace <Trace specification>
-setRingBufferSize <Number of ring buffer entries, in K>
-dumpRingBuffer <Dump file> [default]
-dumpState <Dump string>
-dumpThreads
-dumpConfig (all | server)
-stopServer
-stopNode
The DrAdmin command has to know the number of the TCP/IP port that the
server thread is listening on. This port number is written to the standard output of
each WebSphere application server process soon after it starts. An example of
this message is shown below:
[01.06.05 14:05:26:925 PDT] 3609a403 DrAdminServer I WSVR0053I: DrAdmin
available on port 1225
You may either specify the port number to DrAdmin using the -serverPort option,
or, if you are using the single server edition of WebSphere, you can specify the
configuration file which defines the port number for the application server
process to use. The -defaultConfiguration flag tells the command to use the
default configuration file.
If, for example, we want to enable all messages for our PiggyBank application in
the server which logged the port number in the previous example, we issue the
following command:
DrAdmin -serverPort 1225 -setTrace itso.*=all=enabled
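To dump the ring buffer of the same server to a file named ring.txt, we can then issue, for example:
DrAdmin -serverPort 1225 -dumpRingBuffer ring.txt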
The ring buffer is dumped into the ring.txt file in the working directory of the
application server process.
Using Log4J
Log4J is a subproject of the Apache Jakarta project, the same organization
responsible for the Ant tool described in “Using Ant to build a WebSphere
application” on page 197. Like Ant, Log4J is an open source tool that is
maintained by a community of developers from many separate organizations.
In this section we describe only a very small subset of the features available in
Log4J. For more information you should consult the Log4J Web site:
http://jakarta.apache.org/log4j/
The Log4J framework has been designed with performance and flexibility in
mind. It comes with many standard extensions, and is easily extensible through
the use of user written extensions which can be plugged in to the base
framework.
Installing Log4J
We downloaded Version 1.1.1 of the Log4J distribution from the Log4J Web site.
The download page is located at:
http://jakarta.apache.org/log4j/docs/download.html
We extracted the contents of the archive file onto our D: drive, creating a new
directory, D:\jakarta-log4j-1.1.1.
Because our simplified log wrapper supports only four message types, we map
PiggyBank debug, information, warning, and error messages to the Log4J DEBUG,
INFO, WARN, and ERROR priorities, and disregard the Log4J FATAL priority.
public class LogHelper {
    // Statics
    private static boolean initialized = false;
    // Instance variables
    private Category category = null;
}
The constructor first checks to see if Log4J has been initialized, and initializes it if
necessary. The code then creates a new Log4J Category object that manages
logging for the component. The code for the constructor is shown in
Figure 13-24.
/**
 * Creates a new LogHelper instance for a component
 */
public LogHelper(Class component) {
    super();
    if (!initialized) {
        init(); // initialize Log4J on first use
    }
    category = Category.getInstance(component.getName()); // per-component category
}
/**
 * Initialize the underlying logging system that this class wraps
 */
private static synchronized void init() {
    // Safeguard against possible race condition
    if (!initialized) {
        // Use a Log4J PropertyConfigurator to load logging information from
        // a properties file. configureAndWatch() will start a thread to
        // check the properties file every 60 seconds to see if it has changed
        // and reload the configuration if necessary.
        PropertyConfigurator.configureAndWatch("log4j.properties", 60000);
        initialized = true;
    }
}
First we check to see if another wrapper class already initialized Log4J while we
were waiting to enter the synchronized init method. If not, we use the Log4J
PropertyConfigurator class to configure Log4J based on information in the file
log4j.properties.
The configureAndWatch method searches for the file on the class path. It then
checks every 60 seconds to see if the file has been modified, and reloads the
configuration if necessary. This rather crude mechanism allows the logging
configuration to be altered dynamically in a running server.
Note: The PropertyConfigurator class is not the only way to manage the
runtime configuration of Log4J. More sophisticated methods are described in
the Log4J documentation.
The message object passed to the wrapper method is passed directly to the
wrapped Log4J component. The Log4J framework only invokes toString on the
message if logging is enabled for the message type.
/**
 * Logs an informational message including stack trace from an exception
 * @param o java.lang.Object The message to be written to the log
 * @param e java.lang.Exception The exception
 */
public void info(Object o, Exception e) {
    category.info(o, e);
}
The Log4J Category object that we store in our wrapper class already provides
this functionality, so these methods simply delegate the request to the wrapped
object. The code for the two methods is shown in Figure 13-28.
/**
 * Returns true if debug level logging is enabled for this component
 * @return boolean
 */
public boolean isDebugEnabled() {
    return category.isDebugEnabled();
}
/**
 * Returns true if info level logging is enabled for this component
 * @return boolean
 */
public boolean isInfoEnabled() {
    return category.isInfoEnabled();
}
If you are developing code using VisualAge for Java you have to import at least
the core API into your workspace in order to compile the wrapper class. You may
find it convenient to import the Log4J source from the Log4J src\java directory
into your workspace—this will enable you to step through the Log4J code in the
VisualAge for Java debugger.
If you are building your code outside of VisualAge for Java, using the Java SDK
or another IDE, you have to make sure that the log4j.jar file is in the class path
when you compile the log wrapper code.
For deployment and runtime, we found it easiest to package the Log4J archive in
the J2EE enterprise archive (EAR) file along with our application components.
We then included log4j.jar in the class path entry of the manifest of the JAR file
containing the log wrapper class.
Note: Although there can be only one instance of a Log4J Category class for a
given component, multiple class loaders in the same process may load
separate instances of the class. Because WebSphere can use different class
loaders to load different components, there are circumstances where the
logging of a component may behave unexpectedly. For more information on
the WebSphere class loaders consult the WebSphere documentation in the
InfoCenter.
#
# Set root category priority to WARN and its only appender to FILE.
#
log4j.rootCategory=WARN,FILE
#
# Set the redbook EJB code priority to DEBUG
#
log4j.category.itso.was4ad.ejb=DEBUG
#
# FILE is a FileAppender that appends to D:\temp\trace.log
#
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=D:/temp/trace.log
#
# FILE uses a PatternLayout
#
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d [%t] %-5p %c{1} - %m%n
The second configuration item enables DEBUG messages and higher—in other
words all messages—for components in the hierarchy under itso.was4ad.ejb.
This is all our EJB code. We could have chosen to specify one or more
destinations for these messages—these would have been in addition to the
destination inherited from the root category.
The final part of the configuration file specifies the format of the output message
written to the file. It uses a standard Log4J layout—a PatternLayout—again you
can implement your own layouts. The PatternLayout formats the log message
by parsing a format string similar to that used by the printf function in C.
Our format string specifies that each log message includes the following items:
► A date and time stamp
► The ID of the thread that logs the message
► The message priority name
► The last part of the name of the component that logs the message
► The message itself
► A new-line character
An example of this message format can be seen in Figure 13-30, which shows
the log statements that are written to the log file when the transfer method of
the AccountManager EJB is invoked, passing in an invalid account ID.
Figure 13-30 Debug messages written using the Log4J log wrapper
Logging conclusions
In summary, we hope the discussions presented in this section have highlighted
the need to include message logging as a fundamental component of a
WebSphere application development project. We believe the benefits of
implementing such a component at the earliest stages of development easily
outweigh the effort involved, and ultimately lead to easier application deployment,
and improved manageability in production. In real terms, for developers, this
means fewer late nights and weekends getting systems live and fixing bugs in
production.
Although there are many things you can do to tune an application once it is deployed
into WebSphere, experience tells us that the most dramatic performance
improvements—and performance problems—are driven by design and
implementation of an application. We recommend you consider performance and
scalability issues from the very beginning of your project, and set down
development standards that encourage good practice.
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
...
You have to declare the local JNDI reference to the JDBC resource
jdbc/MyDataSource in the deployment descriptor of the application that uses this
code. When the application is installed into WebSphere the reference is bound to
a DataSource defined in the global JNDI namespace using the WebSphere
administration tools.
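The lookup itself is elided above; a minimal sketch of the typical pattern, using the local reference jdbc/MyDataSource, looks like this:
InitialContext context = new InitialContext();
DataSource ds = (DataSource) context.lookup("java:comp/env/jdbc/MyDataSource");
Connection conn = ds.getConnection();
try {
    // ... perform JDBC work with the connection ...
} finally {
    // Close promptly so the connection returns to the WebSphere pool
    conn.close();
}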
Using System.out
Invoking System.out.println can seriously degrade application throughput
because the write to the standard output stream is synchronous. In the
WebSphere environment standard error and output are redirected to disk files.
The println method will not return until the information has been written to the
file system. This can cause bottlenecks, because disk storage is relatively slow.
String concatenation
Manipulating String objects in Java is expensive, due to the fact that each string
is represented by an immutable Java object. When you use the + or += operators
to concatenate strings, temporary String objects are created and discarded. If
you are going to perform repeated string concatenation, a java.lang.StringBuffer
will perform better. For example, the code:
String[] names = request.getParameterValues("name");
String msg = "Names:";
for (int i = 0; i < names.length; i++) {
    msg += " ";
    msg += names[i];
}
return msg;
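creates temporary String objects on every pass through the loop. Rewritten with a StringBuffer, only one buffer is created and appended to:
String[] names = request.getParameterValues("name");
StringBuffer msg = new StringBuffer("Names:");
for (int i = 0; i < names.length; i++) {
    msg.append(" ");
    msg.append(names[i]);
}
return msg.toString();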
If a JSP does not use information stored in an HTTP session, prevent it from
obtaining a session reference using the following JSP page directive:
<%@ page session="false" %>
You should also consider eliminating the getter and setter methods from entity
EJBs that are generated using VisualAge for Java. Although these methods are
convenient, they can encourage the proliferation of RMI calls as clients of the EJB
use each method in turn. Replace getters and setters with business methods that
enforce correct behavior according to your business logic, rather than allowing
clients to manipulate each field individually.
Figure 13-32 EJB Method Extensions item in the assembly tool tree view
Next, in the right hand pane, select the method you want to change, and check
the Access intent box (Figure 13-33). Select Read from the drop-down box
labeled Intent Type and click Apply to confirm the change.
The most strict isolation level is Serializable. An EJB that specifies this isolation
level is guaranteed to get consistent results from the database for the duration of
each transaction.
To achieve this behavior, every row that satisfies an SQL SELECT issued by the
EJB or the underlying persistence layer is locked for the duration of the
transaction. In development, where individual developers may have separate
databases or separate schemas in a single database, this may not cause any
problems. In a production system with multiple concurrent clients this strict
locking can cause significant bottlenecks.
The isolation level Read committed is adequate for many applications. Although
a more strict isolation level may appear to be a safer choice, you must consider
the impact of such a change on the performance of the application in production.
Check the Isolation level attributes box and select the appropriate isolation level
from the list (Figure 13-35). Once you have made your selection click Apply to
save the changes.
Figure 13-35 Modifying the isolation level for all methods in the remote interface
There are still valid scenarios, however, where you want to be able to host
multiple applications in the same instance or cluster of the advanced edition of
WebSphere. The single server edition does not provide any workload
management or clustering capabilities, for example. If your developers have
Windows desktops but your deployment environment is UNIX it is a good idea to
test your application on the target platform from early in the development
cycle—in some cases it may be absolutely necessary, perhaps because of other
software components that are only available in the deployment environment.
There are four areas where two instances of the same application may conflict
when running in the same WebSphere cluster:
► The application name
► The Web application in the URI namespace
► The EJBs in the JNDI namespace
► Access to database and other resources
We cover the case where the two instances have to connect to different
databases. The scenario where the two versions access the same database but
require different database schemas is beyond the scope of this discussion.
Tip: If you follow these instructions you will only be able to use the virtual
hosts from the local machine where they are defined. To use them from other
machines, the host aliases must be defined globally.
To define a new virtual host, click the Virtual Hosts folder in the administrator’s
console tree view, and select New from the pop-up menu. This opens the Create
Virtual Host dialog (Figure 13-37).
In the Create Virtual Host dialog, enter a name for the new virtual host in the
Name field. This is the name that will appear in the console. To add a host alias
for this virtual host, click the Add button, enter the host name in the dialog that
pops up, then click OK.
Remember to include the port number if the Web server listens on a port other
than port 80; if the web server is using SSL the port is usually set to port number
443.
Figure 13-38 Specifying the virtual host using the deployment wizard
In the dialog that pops-up (Figure 13-39), select the appropriate virtual host from
the drop-down list, and click OK.
Figure 13-40 Specifying the virtual host name using the assembly tool
When you deploy the module into WebSphere, the virtual host you set using AAT
is automatically selected.
Figure 13-41 Changing the Web application context root using the assembly tool
We can define the global JNDI name to use for each EJB using AAT or the
WebSphere application installation wizard. Figure 13-42 shows how AAT can be
used to specify the JNDI name for an EJB.
The global JNDI name can also be specified when the application is installed into
the WebSphere environment. Figure 13-43 shows how the application
installation wizard invoked from the administrator’s console can be used to
specify or override the binding of an EJB to a JNDI name.
Figure 13-43 Specifying an EJB JNDI name in the application installation wizard
Our application’s EJB client code uses EJB references to locate EJB home
objects. All of our applications that use EJBs look for them in the EJB client’s
local JNDI namespace, under java:comp/env, for example:
java:comp/env/ejb/AccountManager
These names are coded as constants in the classes that are EJB clients. When
we use the assembly tool to build our application we declare the references used
by each application client, Web application and EJB component. Different
components may use different local names to refer to the same EJB, or the same
local name to refer to different EJBs. When a component looks up an EJB the
WebSphere runtime maps the local EJB reference to the correct global JNDI
name for the EJB that particular component needs.
When we install client code that uses the EJB, we simply specify the correct
binding for the version we require, without any need to modify our own code
(Figure 13-44).
► Binding for AccountManager version 1, global JNDI name:
version1/itso/was4ad/ejb/account/AccountManager
► Binding for AccountManager version 2, global JNDI name:
version2/itso/was4ad/ejb/account/AccountManager
We use the WebSphere AAT tool to manage EJB references (Figure 13-45).
We can also define the binding that maps each EJB reference to a JNDI name
using AAT, using the Binding tab (Figure 13-46).
Alternatively we can specify or modify the binding when we install the application
using the application installation wizard (Figure 13-47).
The local JNDI name used must be defined in the component’s deployment
descriptor, which can be edited using AAT (Figure 13-48).
When you install a new application version define the resources that are unique
to the new version with unique global JNDI names in the WebSphere
environment, using the administrator’s console or other appropriate WebSphere
tools such as wscp. You can then specify the bindings for the new application
version using AAT, or during installation using the application installation wizard.
One way to do this is to use the Ant built-in replace task. This task copies a
source file to a new location, replacing occurrences of named tags with a
specified value. Tags are specified in the source file using the syntax:
@TAG_NAME@
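One way to apply this (the file names and the @DB_NAME@ tag below are invented for illustration) is to copy the tagged template and then substitute a version-specific value in the copy:
<copy file="resources.template.xml" tofile="${build.dir}/resources.xml"/>
<replace file="${build.dir}/resources.xml" token="@DB_NAME@" value="PIGGYBANK_V2"/>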
While the starting point of customer involvement with SCM varies, no customer
can afford to ignore this area. In fact, after implementing SCM processes, the
resulting improvements in IT reaction times to meet business demands could
well prove to be a key factor in being successful with e-business.
SCM is one of the key areas that has to be addressed when developing and
maintaining applications. This is not only true for managing the software
configuration within your development environment, but also applies to the
software configuration within the production environment.
Pressure to deliver faster and more complex applications makes it more urgent
to implement SCM. At the same time, businesses that are developing and
deploying applications in the e-business space may find themselves open to
exposure when SCM problems occur.
This calls for an end-to-end (E2E) approach for SCM throughout the complete
application life cycle. However, addressing all aspects of SCM would be a book
in itself.
Reference
When writing this redbook we ran out of time to investigate and test a complete
SCM approach.
First we describe how to use the WebSphere Application Assembly Tool (AAT) to
assemble the application into J2EE modules that can be deployed into a
WebSphere environment for testing. We then describe the deployment process
itself, explaining how to deploy into both the full and single-server versions of
WebSphere Application Server.
After that we discuss application debugging, and describe how we can use the
facilities provided by VisualAge for Java and the IBM Online Trace (OLT) and
Distributed Debugger products to debug WebSphere applications.
Finally we introduce JUnit, an open source framework for unit testing Java
applications. We describe how to use JUnit to create test cases and discuss how
to use the tool to unit test EJB components running in WebSphere.
We also discuss how the ejbdeploy tool can be used to help validate and migrate
version 1.0 EJBs to the new 1.1 specification level, and describe the new
XML-based CMP persistence mapping supported by the tool.
This is a useful shortcut for administrators who do not want to specify the
global JNDI name for every EJB every time they install a new version of an
application. Binding information is supplemental to the J2EE deployment
information, however, and is not portable to other application server vendors’
products.
If we start the tool from a command window, this window remains in the
background, and it must not be closed, or the assembly tool is closed as well.
This window displays tracing information for changes in the properties of the
module’s elements.
It is also possible to start the AdminServer from the command line, typing:
net start "IBM WS AdminServer"
To start the Application Assembly Tool from the Administrative Console, select
Tools -> Application Assembly Tool.
It is possible to perform these creation tasks either using the property dialogs or
the corresponding wizard. Wizards require minimum information to complete the
process. They ask for the required information and fill in the rest with the default
values.
The property pane displays the properties for the element selected in the
navigation pane. It is possible to hide this pane by selecting View -> Show
Property Pane. Required fields are marked with a red asterisk.
In addition to the menu bar, the toolbar provides access to the main functions
related to module creation and administration (Figure 15-4).
In Figure 15-5 we can see the structure of the navigation pane for a Web module.
(Figure 15-5 highlights, among other items, the references to EJBs and their JNDI names, and the references to external resources such as DataSources and messaging systems.)
It is possible to drag-and-drop files from other Web modules (opening them in the
AAT), so that both the configuration and the file are copied.
If we use the Import option, only the configuration is copied, and we have to add
the file manually to the Files folder. The same applies to the New option, though
in this case we have to type in the configuration for the new file.
When expanding the WAR file, the files are placed as shown in Figure 15-6.
To launch the wizard, select File -> Wizards -> Create Web Module Wizard, or
click the last button on the right (the Wizards button) and then select
Create Web Module Wizard.
The first window lets the user specify the basic properties of the Web module
(Figure 15-7).
After completing this information, the next step is to add files to the module
(Figure 15-8).
To add the files, we open the folder that contains them in the browser window
and add them.
To add the static resources files (JSPs, HTML, images), we select Add Resource
Files and select D:\PiggyBank\src\web, the folder containing the static Web
content.
Skipping through the Icons screen, the next step we have to take care of is Adding
Web Components. Here, we register the servlets and JSPs in our
Web module.
Select New to register new components or Import to get components from other
Web modules or enterprise applications (Figure 15-9).
For every component added, we have to type at least the information regarding
the Component name (as it appears in the Web module navigation tree if no
display name is introduced) and the appropriate file.
Skipping the security roles definition screen, we show next the Specifying Servlet
Mappings screen. Here, for example, we add the mappings for the PiggyBank
Controller (Figure 15-11).
By clicking on Add, a new panel is displayed and we select the servlet name and
type the URL mapping.
The next two panels allow the assembler to specify references to external
resources (for example, databases or messaging systems) and context
parameters for the servlets running in the Web application.
The next steps specify default Error pages and MIME mappings.
To configure Tag libraries in the next step we specify both the name of the file
and its location within the WAR file (Figure 15-12).
In the next panel, we can specify the welcome file for our application, index.html
(Figure 15-13).
In the last panel of the wizard, we specify the EJB references used by elements
of the Web module.
The Web module creation is finished when we click on the Finish button, and we
can see the module structure in the standard AAT interface (Figure 15-14).
To save, select File -> Save As and navigate to the desired saving location.
We now describe how to set up a Web module with its basic features, using the
PiggyBank application again.
In the welcome screen (shown in Figure 15-1 on page 391), select Web Module
or select File -> New -> Web Module. The skeleton of the module is shown in the
navigation pane (Figure 15-15).
It is possible to build WAR files from existing Web modules, by importing the
required files to the new archive.
Continuing with the second scenario, to import the Web components into the new
module we have to:
► Add the files (class, JAR, etc.) from the existing module
► Import the configuration into the Web Components folder
Add files to the module by selecting Files and then Class Files in the navigation
pane, and Add Files from the context menu. The process is the same as the one
performed using the wizard:
► Open the JAR (or ZIP) file where the compiled code for the servlet is
► Select the appropriate class file, navigating within the folders if necessary
► Click Add and then OK
The added files are placed under the \WEB-INF\classes directory of the web
module.
The resource files (utility classes used by the other elements of the module),
added as a JAR file, are placed under \WEB-INF\lib.
We add the JSP and static files (HTML, images) to the Resource Files section
too. They are placed under the directory \WEB-INF.
Import the configuration details by selecting Web Components and Import from
the context menu.
In the case of adding new files (not previously configured in other modules), we
select the New option. Importing is useful when we have set up “complicated”
configurations for the components (with long lists of initialization parameters that
are tedious to rewrite), and it acts like a copy-paste mechanism between
modules.
Again, the screens shown are the same as the screens displayed for the wizard
section (see Figure 15-10 on page 398). The JSP files are added in the same
way.
After all the required files have been added to the module, the next step is to create a
mapping for the servlets: in the navigation pane, right-click Assembly Properties ->
Servlet Mapping and select New. A window is displayed where we must enter the
URL pattern name and the servlet associated with it (see Figure 15-11 on page 399).
After clicking OK, the new servlet mapping is displayed in the Servlet Mappings
list (Figure 15-16).
For the PiggyBank application, the addition of the other features (tag libraries,
error and welcome pages) is done through the node in the navigation pane, in a
similar way as how it is done by the wizard.
With VisualAge for Java 4.0 we can export the EJBs to a 1.1 undeployed JAR file
(see Chapter 11, “Development using VisualAge for Java” on page 259), and
then generate the deployment code either with the AAT or with the command line
EJB deployment tool (see “EJB deployment tool” on page 418), but it may be
necessary to fix some details of the code (the aspects that have to do with the
differences between the 1.0 and 1.1 specifications).
The structure of the navigation tree for an EJB module is shown in Figure 15-17.
(The tree separates EJB-specific configuration from the general assembly properties.)
Figure 15-18 shows schematically how the deployment descriptor information fits
in the AAT tree structure.
(This information includes transaction attributes, security settings, and method permissions.)
Security roles are also configurable through the Security Roles panel in the
enterprise application (EAR) file.
Class path information added through the module main property panel is written
to the manifest.mf file. Class path entries must be separated by spaces
(Figure 15-19).
The Bindings pane is the place to detail the JNDI name of the bean, as well as
the default DataSource to be used in case of an Entity bean.
CMP for entity EJBs can be set up in the CMP Fields pane under each EJB.
Transaction data (such as isolation levels) has to be set up under the Container
Transactions node (Figure 15-21).
The Bindings tab lets us specify the default DataSource for the module, but if we
specify a DataSource for each bean, the default configuration is overridden
(Figure 15-22).
We generate the deployed code once we have set up the deployment descriptor
information.
Let’s pick up the EJB 1.0 undeployed file from the PiggyBank application,
PiggyBankEJBs10.jar, developed in VisualAge for Java, and open it in the AAT.
We click OK on this window (our EJBs do not require dependent class path
specification), and the EJB file is opened in the AAT interface.
At this point, we can edit the deployment descriptor to configure the module
properties. By selecting the JAR file in the navigation pane, we can see its
properties in the property pane (setting this pane to be visible as described in
“Using the interface” on page 391), and we can also specify properties for every
EJB in the JAR file.
To perform this task in the Application Assembly Tool, select File -> Generate
Code for Deployment (Figure 15-24).
Figure 15-24 Generating deployment code for EJB 1.1 JAR files
Errors appearing during the code generation process are shown in the text area.
If any errors or exceptions appear while executing RMIC, the deployment
process is aborted and the deployed code is not generated.
We have to generate the deployment code only when the EJBs have been fixed
to be 1.1 compliant.
Once the code generation process is completed, we have an EJB 1.1 JAR file
prepared to be added to an enterprise application.
It is also possible to deploy EJBs using the command line tool EJBDeploy directly
(see “EJB deployment tool” on page 418).
In the Adding Files screen, we select utility files used by the EJBs (libraries).
Next (skipping the icons section), we add the EJBs to the module (Adding
Enterprise Beans). Click on New to add new beans to the module or on Import to
import existing EJBs from another module.
If, for example, we are combining EJBs from two modules into one, we can just
drag-and-drop the beans from the modules, so that both the configuration and
the files are copied, and the process is easier. However, it is not possible to do
this from the wizard screen.
After finishing, we can modify any of the properties of the JAR file (that are
written to the deployment descriptor), by selecting them from the navigation pane
and editing them on the property pane.
When we create a new application client in the AAT, several steps have to be
followed to set up the module (Figure 15-27):
► Add the client classes to Files: for PiggyBank, we have only one class,
itso.was4ad.client.StandaloneClient for the command line client.
For the Swing client, we add all the classes in the itso.was4ad.client.swing
package.
► Specify the executable class (the one that contains the main method). This
class appears in the property pane in the Main Class field.
► Set up the class path: we add any JAR files that contain classes used by the
client.
For PiggyBank, we are assembling all the modules in a single EAR file, so we do
not add the common code and EJB client JAR files to the client application
module; instead, we include references to them in the class path.
Note: For more information about using JNDI see “Using JNDI” on page 326.
(The tree shows the component modules with their assembly properties, plus the deployment descriptors and utility classes.)
Figure 15-29 Enterprise application navigation tree
In the first panel (Figure 15-30) we have to enter the application name and file
name (in a similar way as to the other modules).
The next step is to add supplementary files that the enterprise application uses.
These can be icon libraries or other utilities. The icons used to represent the file
are selected in the next panel.
The Web modules and application clients are added in the next two steps in a
similar way (we add the modules that we have created in previous sections).
The last step is the specification of security roles for the whole enterprise
application.
After completing the wizard tasks, and prior to the installation in the application
server, we have to configure some details such as the binding information
(resolving the JNDI names for the EJBs).
If we have not yet generated the deployed code for our EJBs, we do so prior to installing the enterprise application in WAS (though the application server also provides the option of deploying the code during installation).
We can also assemble the EAR file by dragging and dropping the previously created modules (so that we copy both the configuration and the files) from another window of the AAT, speeding up the creation process.
The deployed code includes the Remote Method Invocation (RMI) stub code
generated by the rmic compiler, as well as persistence mapping code for
container managed persistent (CMP) EJBs. We describe the generation of
persistence mapping code in more detail shortly.
The AAT also invokes ejbdeploy when you choose the File -> Generate Code for
Deployment menu option, as described in “Generating deployed code” on
page 410.
These files are created by VisualAge for Java Version 4.0 when we use the
Export -> EJB 1.1 JAR menu option to create an EJB JAR file. VisualAge for
Java creates these files using the database schema and mapping information
stored in the repository and managed using the schema and map browser tools
that enable us to perform meet-in-the-middle mapping between CMP EJBs and
the database tables.
If the schema and map files do not exist in the EJB JAR file, ejbdeploy creates a
new schema and map based upon its default mapping rules. We can see this if
we run ejbdeploy against a version of our PiggyBank EJB JAR that does not
contain a schema or map (Figure 15-33).
The tool also generates a Table.ddl file—this contains DDL that we can use to create our database tables (Figure 15-35).
If we run the same command against a JAR that does contain a schema and map, no default map has to be generated, and the message highlighted in Figure 15-33 is not reported, as seen in Figure 15-36.
If the default mapping is not suitable for our target database we can extract the
schema and map files from the deployed JAR file, edit them to suit our needs,
then create a new JAR file which we can then run through ejbdeploy a second
time to generate new persistence mapping code for us, based on the modified
files.
Tip: If you decide you want to edit the generated schema and map files
yourself, you may find it helpful to use an XML-aware editor.
Schema file
An extract from the PiggyBank schema file for DB2 showing the CUSTOMER table is
shown in Figure 15-37.
If we want to modify the database schema we must modify this file. Changing the
table and column names is fairly straightforward—we simply locate the names
and alter them as required. Changing the type of columns is a little more
involved—we must alter the href to refer to the correct SQL primitive for the
column type we want.
Here DeployTool is the location where you installed the standalone tool downloaded from VADD. In the AE and updated AEs products we expect, but cannot confirm, that this directory will be located in the WebSphere install directory.
Figure 15-38 shows an extract from the document that defines the primitives for
Oracle.
<RDBSchema:SQLPrimitives xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI"
xmlns:RDBSchema="RDBSchema.xmi"
xmi:id="SQLPrimitives_1" domain="ORACLE_V8">
<types xmi:type="RDBSchema:SQLBinaryLargeObject"
xmi:id="SQLBinaryLargeObject_1" externalName="BINARY LARGE OBJECT"
name="BLOB" jdbcEnumType="2004" domain="ORACLE_V8"
requiredUniqueInstance="false" renderedString="BLOB"
typeEnum="BINARYLARGEOBJECT"
formatterClassName=
"com.ibm.etools.rdbschemagen.formatter.oracle.SimpleTextFormatter"
length="4" multiplier="G"/>
<types xmi:type="RDBSchema:SQLCharacterStringType"
xmi:id="SQLCharacterStringType_1" externalName="CHARACTER"
name="CHAR" jdbcEnumType="1" domain="ORACLE_V8"
requiredUniqueInstance="true" renderedString="CHAR"
typeEnum="CHARACTER"
formatterClassName=
"com.ibm.etools.rdbschemagen.formatter.oracle.CharacterTextFormatter"
characterSet="800" length="1"/>
<types xmi:type="RDBSchema:SQLCharacterStringType"
xmi:id="SQLCharacterStringType_2" externalName="CHARACTER VARYING"
name="VARCHAR2" jdbcEnumType="12" domain="ORACLE_V8"
requiredUniqueInstance="true" renderedString="VARCHAR2"
typeEnum="CHARACTERVARYING"
formatterClassName=
"com.ibm.etools.rdbschemagen.formatter.oracle.CharacterTextFormatter"
characterSet="800" length="1"/>
<types xmi:type="RDBSchema:SQLNumeric" xmi:id="SQLNumeric_6" externalName="DECIMAL"
name="NUMBER" jdbcEnumType="3" domain="ORACLE_V8"
requiredUniqueInstance="true" renderedString="NUMBER" typeEnum="DECIMAL"
formatterClassName=
"com.ibm.etools.rdbschemagen.formatter.oracle.NumericTextFormatter"
precision="5" scale="0"/>
...
<ejbrdbmapping:EjbRdbDocumentRoot xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI"
xmlns:ejbrdbmapping="ejbrdbmapping.xmi" xmlns:ejb="ejb.xmi"
xmlns:RDBSchema="RDBSchema.xmi" xmlns:Mapping="Mapping.xmi"
xmi:id="EjbRdbDocumentRoot_1" outputReadOnly="false"
topToBottom="true">
<helper xmi:type="ejbrdbmapping:RdbSchemaProperies"
xmi:id="RdbSchemaProperies_1" primitivesDocument="DB2UDBNT_V71">
<vendorConfiguration
href="RdbVendorConfigurations.xmi#DB2UDBNT_V71_Config"/>
</helper>
<inputs xmi:type="ejb:EJBJar" href="META-INF/ejb-jar.xml#ejb-jar_ID"/>
<outputs xmi:type="RDBSchema:RDBDatabase"
href="META-INF/Schema/Schema.dbxmi#RDBDatabase_1"/>
<nested xmi:type="ejbrdbmapping:RDBEjbMapper" xmi:id="RDBEjbMapper_1">
<helper xmi:type="ejbrdbmapping:PrimaryTableStrategy"
xmi:id="PrimaryTableStrategy_1">
<table href="META-INF/Schema/Schema.dbxmi#CUSTOMER"/>
</helper>
<inputs xmi:type="ejb:ContainerManagedEntity"
href="META-INF/ejb-jar.xml#Customer"/>
<outputs xmi:type="RDBSchema:RDBTable"
href="META-INF/Schema/Schema.dbxmi#CUSTOMER"/>
<nested xmi:id="Customer_id---CUSTOMER_ID">
<inputs xmi:type="ejb:CMPAttribute"
href="META-INF/ejb-jar.xml#Customer_id"/>
<outputs xmi:type="RDBSchema:RDBColumn"
href="META-INF/Schema/Schema.dbxmi#RDBColumn_1"/>
<typeMapping href="JavatoDB2UDBNT_V71TypeMaps.xmi#int-INTEGER"/>
</nested>
<nested xmi:id="Customer_name---CUSTOMER_NAME">
<inputs xmi:type="ejb:CMPAttribute"
href="META-INF/ejb-jar.xml#Customer_name"/>
<outputs xmi:type="RDBSchema:RDBColumn"
href="META-INF/Schema/Schema.dbxmi#RDBColumn_2"/>
<typeMapping href="JavatoDB2UDBNT_V71TypeMaps.xmi#String-VARCHAR"/>
</nested>
</nested>
[ ACCOUNT TABLE NOT SHOWN .............]
</ejbrdbmapping:EjbRdbDocumentRoot>
The inputs and outputs tags are paired—in our example there is a pair that
maps from the JAR file to the database in the schema, another that maps from
the EJB in the deployment descriptor to the table in the schema document, and
two further pairs that map each of the two CMP fields for the customer EJB to the
appropriate columns in the database schema.
We discuss each in turn, using the standard sample EJBs that are delivered with
VisualAge for Java in our examples.
In this case we only found warnings, so we can run the EJBs in WebSphere 4.0
unmodified if we desire. If we want to make our beans truly J2EE compliant,
however, we must make the changes suggested by the tool.
If we are using Version 4.0 of VisualAge for Java, we can easily create these
XML deployment descriptors using the Export -> EJB 1.1 JAR menu option on an
EJB group. If we do not have the latest version of VisualAge for Java, however,
that option is not available to us.
We can use the EJB deployment tool to save us the effort of manually creating
the XML descriptors, either by hand in an editor, or by entering the information
into AAT.
We start with a simple undeployed EJB 1.0 JAR, sample10.jar, and run it
through ejbdeploy (Figure 15-41).
Then we extract the XML deployment descriptors from the deployed JAR file
(Figure 15-42).
If we did not use VisualAge for Java to create our 1.0 EJBs, however, but we did
deploy them into Version 3.5 of WAS, we would have had no choice but to accept
the default top-down CMP persistence mapping in that version of the product.
Fortunately, the EJB deployment tool provides an option that can help solve this
problem. We can specify -35 as an option on the ejbdeploy command line.
When we use this option to generate the schema and map files for the EJB 1.0
entity bean in the VisualAge for Java samples, we get a schema file that contains
the following extract:
<RDBSchema:RDBTable xmi:id="INCREMENTBEANTbl" name="INCREMENTBEANTbl"
primaryKey="SQLReference_1" schema="RDBSchema_1" database="RDBDatabase_1">
D:\WebSphereSSE\AppServer\bin>earexpander
-ear ..\installableApps\PiggyBank.ear -expandDir ..\temp\PiggyBank
-operation expand
IBM WebSphere Application Server Standard Edition, Release 4.0
J2EE Application Expansion Tool, Version 1.0
Copyright IBM Corp., 1997-2001
D:\WebSphereSSE\AppServer\bin>
D:\WebSphereSSE\AppServer\bin>
This tool can be useful when we want to expand an application for viewing and/or updating (although we can view the application by using the Application Assembly Tool, we cannot edit individual files, such as JSPs, with it).
For example, when we want to change a class file, we can just expand the
module file and change the class (in installed applications we can do this directly
on the directory structure under $WASPATH$\installedApps, but we have to restart
the server afterwards for the changes to take effect).
Using this tool to install the applications is equivalent to doing the installation from the browser-based administrative console, because the console calls the SEAppInstall tool to perform this task. What might make this tool more useful for experienced users is that it allows several options for each command, whereas the console uses the default options, which might not be suitable on all occasions.
When the option -ejbDeploy true is selected, the SEAppInstall tool calls the
EJBDeploy tool to perform the deployment of all the EJBs in the module. For
usage of the EJBDeploy refer to “EJB deployment tool” on page 418.
In general, when we are installing a module previously assembled with the AAT (or another tool such as Ant, as we showed in “Using Ant to build a WebSphere application” on page 197), we should not have to redeploy the EJBs, and all the necessary binding information should already be included in the module deployment descriptor, but the SEAppInstall tool provides options for performing both of these tasks if necessary.
The SEAppInstall tool updates the server configuration file with the changes resulting from the operation performed (it adds data for the newly installed applications or removes data related to the uninstalled ones). The default value for the server configuration file is server-cfg.xml (regardless of the configuration file we have started the server with).
java com.ibm.websphere.install.se.SEApplicationInstaller
-uninstall <application name>
[-delete <true | false>]
[-configFile <server configuration file>]
[-nodeName <name of node>]
[-serverName <name of server>]
java com.ibm.websphere.install.se.SEApplicationInstaller
-export <application name>
[-configFile <server configuration file>]
-outputFile <name of the ear file to create>
java com.ibm.websphere.install.se.SEApplicationInstaller
-list <apps | wars | ejbjars | all>
[-configFile <server configuration file>]
java com.ibm.websphere.install.se.SEApplicationInstaller
-extractDDL <application name>
[-DDLPrefix <Prefix to apply to front of all DDL file names>]
[-configFile <server configuration file>]
java com.ibm.websphere.install.se.SEApplicationInstaller
-validate <app | server | both | NONE>
[-ear <ear file>]
[-configFile <server configuration file>]
If you specify "-validate app" or "-validate both", you must
include the "-ear" option. If you specify "-validate server"
or "-validate both", the "-configFile" option is optional.
D:\WebSphereSSE\AppServer\bin>seappinstall -install
..\installableapps\SampleApp.ear -configFile ..\config\server-cfg.xml
-expandDir ..\installedapps\SampleApp.ear -nodeName 23bk55y -ejbDeploy false
-precompileJsp false -validate both -interactive false
IBM WebSphere Application Server Release 4, AEs
J2EE Application Installation Tool, Version 1.0
Copyright IBM Corp., 1997-2001
The option to list applications is useful when we have several configuration files with different installed applications. We can list the enterprise applications, the Web modules and the EJB modules (or all of them). Figure 16-7 shows an example of using this option.
Installed Applications
--------------------------------------------------------------------
1) another
2) Server Administration Application
3) sampleApp
4) PiggyBank Application
D:\WebSphereSSE\AppServer\bin>startstd ?help
IBM WebSphere Technology For Developers, Release 1.0
Copyright IBM Corp., 1997-2001
Usage:
java com.ibm.ws.runtime.StandardServer
[-configFile <server configuration file>]
[-nodeName <name of node>]
[-serverName <name of server>]
[-traceString <package name>]
[-traceFile <file name>]
In the Single Server Edition, it is possible to have only one application server installed on a single machine, as the edition is intended for unit testing (each developer can have it installed on his or her machine).
The default file to use when loading the server is server-cfg.xml, though we can have other configuration files with different installed applications, trace level settings, and so forth (this acts like having different servers, though we can start only one at a time, unlike in the Advanced Edition). We will only be able to launch the applications that are installed in the currently loaded configuration file. The administrative application is included in a separate configuration file, admin-server-cfg.xml.
To generate new configuration files, we can edit them by hand or modify the
configuration on the console and then save it to a new file. The console provides
the option of creating a new configuration file using the current one as a
template; this option is useful for users not familiar with the syntax of the
configuration files.
The messages displayed in the command window when starting the server are essentially in the same format as the ones displayed in the WAS Advanced Edition standalone Admin Console (a sample output is shown in Figure 16-9).
The left frame allows us to navigate within the structure of the application server,
while the right frame displays the configuration settings and options (similarly to
the AE edition).
Note that the current server configuration file is displayed on the screen. It is
possible to open other configuration files by clicking on Configuration (top of the
screen) and selecting the file we want to open. We have to refresh the
configuration tree (left pane) manually. This feature lets us switch configurations
for editing without having to stop and restart the server to load another
configuration file.
Installing applications
To install a new application, select Nodes -> nodename -> Enterprise
Applications. The right frame shows a list of the installed applications. By clicking
on Install, a new page is displayed where we enter the application file name
(Figure 16-12).
The next steps guide us through the Application Installation Wizard, allowing us
to select security roles, JNDI bindings for EJBs, virtual hosts and other
parameters. This is equivalent to using the SEAppInstall tool with the option
-interactive true (the default option).
Interactive mode means that we are able to change the JNDI names for EJBs, EJB resource references or other resources, the virtual host, and so forth. The application installer extracts the information from the deployment descriptors and displays it on the screen so that we can make changes.
After installing an application, the console displays a warning in the main frame
indicating that the Web server plugin has to be regenerated and the configuration
has to be saved (Figure 16-13).
The new application is installed under \installedApps, in a folder with the same
name as the EAR file: ..\installedApps\application-name.ear.
The options for the server configuration are listed under Default Server
(Figure 16-14).
When setting up the environment for unit testing, we are interested in configuring
these features:
► OLT settings—activate OLT to debug the application using the Distributed
Debugger (see more about this topic in Chapter 17, “Debugging the
application” on page 467).
► Web container settings—customize the properties for the Web container,
such as allowing persistent sessions or cookies.
Setting up resources
The resources are all structured in the same fashion: first, we create a resource provider (for example, a JavaMail provider or a JDBC driver). Then we can create a resource factory related to the provider, for example, a JavaMail session or a JDBC DataSource. These resource factories are available to the enterprise applications installed in the server.
The resources used by the applications are listed under the Resources folder
(Figure 16-15).
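As an illustration of how an installed application reaches such a factory, the following sketch shows a typical DataSource lookup and connection. The resource reference name jdbc/PiggyBankDS is hypothetical.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceSketch {
   public void useDataSource() throws Exception {
      InitialContext ctx = new InitialContext();
      // jdbc/PiggyBankDS is a hypothetical resource reference; it is
      // resolved against the DataSource configured under the Resources
      // folder of the administrative console
      DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/PiggyBankDS");
      Connection con = ds.getConnection();
      try {
         // ... use the connection
      } finally {
         con.close();
      }
   }
}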
Uninstalling applications
Applications are uninstalled from the Enterprise Applications panel. After
uninstallation, it is necessary to regenerate the plugin configuration (the
appropriate warning appears in the console screen).
Uninstallation through the console does not delete the files from the uninstalled
application (like executing SEAppInstall with the default option -delete false),
so we have to delete them manually.
If we have just uninstalled an application and we want to install another one with the same name (for example, a newer version of the same application), we have to stop the server, delete the files, and start the server again before we are able to make the new installation. If we attempt to delete the files without stopping the server, the console does not allow it and displays a screen with instructions.
D:\WebSphereSSE\AppServer\bin>stopserver ?help
stopServer
Syntax: stopServer [-configFile "<server configuration file path>"]
You can also use the Start menu and select Start -> Programs -> IBM
WebSphere -> Application Server v4.0 -> Start Admin Server.
The console in Version 4.0 is quite similar to previous versions, though several
options have changed to adapt the server to the J2EE model of packaging.
Figure 16-19 shows the user interface.
The structure is similar to the Web-based console in the Single Server Edition, but the standalone console includes some additional options, such as the J2C Resource Adapters (for a review of the differences between AE and AEs, see “Differences between the AE and AEs versions” on page 66).
The Console Messages panel displays the tracing messages of the server and of
the applications that use the WebSphere Tracing facility (see details about this
feature in “Using the WebSphere JRas facility” on page 341).
The Options button lets us select the event level (Fatal, Warning or Audit) as well
as properties for the log file. If we want to examine a trace, we use the Details
button, which displays a window with the complete information for the event. An
example is shown in Figure 16-20.
Let’s consider the following scenario: we have a standard client for EJBs
deployed in the application server and we want to unit test the EJBs. In this case
we can deploy the EJB module into the server and use the client to test it, and it
is not necessary to assemble both modules in an EAR file.
Applications are installed using the Install Enterprise Application wizard. This
wizard lets us specify binding information for EJBs, security roles and other
properties such as the virtual host and the server to which the application will be
installed. The last option is the deployment of the module. In general, modules
should have been deployed previously, in the assembly phase (see Chapter 15,
“Assembling the application” on page 389), but we can redo the process if
necessary.
The console offers the possibility to export the configuration of the installed application (as an EAR file with updated binding information), so that, if we later decide to reinstall it, we can choose this exported file, which contains the most recent binding configuration.
Note: In the Single Server Edition, the equivalent task is performed through
the SEAppInstall tool, with the -export option.
Setting up resources
Resource configuration can be done either before or after the application
installation. As in the case of the Single Server Edition, the resources are
structured as providers and factories.
The Admin Console provides the Create DataSource Wizard to perform this task. It allows us to create a DataSource based on an existing JDBC driver, or to install a new driver for the new DataSource.
If we do not want to use the wizard, we can create the DataSource and/or JDBC driver directly under the Resources folder.
When installing WebSphere Application Server Version 4.0, you are prompted to
specify which plugin to use, so that it will be configured in the Web server
configuration files. Figure 16-23 shows the entries added to the httpd.conf file
for the IBM HTTP Server (IHS).
<?xml version="1.0"?>
<Config>
<Log LogLevel="Inform" Name="D:\Websphere\Appserver\logs\native.log"/>
<VirtualHostGroup Name="default_host">
<VirtualHost Name="*:80"/>
<VirtualHost Name="*:9080"/>
</VirtualHostGroup>
<ServerGroup Name="Default Server">
<Server Name="Default Server" SessionID="991347369197">
<Transport Hostname="*" Port="9080" Protocol="http"/>
</Server>
</ServerGroup>
<UriGroup Name="PiggyBank webapp_URIs">
<Uri Name="/"/>
</UriGroup>
<Route ServerGroup="Default Server" UriGroup="PiggyBank webapp_URIs"
VirtualHostGroup="default_host"/>
</Config>
This link leads us to the Web Server Plug-in configuration page, where we can
regenerate the configuration file by simply clicking on the button.
Advanced Edition
In the Advanced Edition, manual regeneration is also possible through the
administrative console.
D:\WebSphereSSE\AppServer\bin>genplugincfg -help
IBM WebSphere Application Server Standard Edition, Release 4.0
Plugin Configuration Generator, Version 1.0
Copyright IBM Corp., 1997-2001
java com.ibm.websphere.plugincfg.tool.SEGeneratePluginCfg
-configFile <server configuration file>
[-outputFile <directory to write the config file to>]
[-nodeName <name of node>]
[-serverName <name of server>]
Because only one node can exist in the Single Server Edition, it has to match the
administration node (so this option is redundant).
The key option in the AEs plugin regenerator is the configuration file of the
server. Because no administrative database is used, the configuration is stored
in XML files in the ..\config directory.
D:\WebSphere\AppServer\bin>genplugincfg -help
Usage: java com.ibm.websphere.plugincfg.tool.AEGeneratePluginCfg
-serverRoot <Product Install Directory>
-adminNodeName <Administration Server Node Name>
-nodeName <Local Node Name>
[-nameServicePort <Administration Server Name Service Port>]
[[-traceString <trace spec>]
[-traceFile <file name>]
[-inMemoryTrace <number of entries>]] |
[-help | /help | -? | /? ]
The configuration data is stored in the EAR client application file, and is used by WebSphere to resolve resource bindings at runtime.
The data for this resource configuration is stored in the EAR file under
\META-INF\client-resource.xmi.
Figure 16-29 shows an example of the client configuration file for the PiggyBank application, setting up the DataSource associated with a DB2 JDBC driver.
XMLConfig
The XMLConfig tool allows us to export and import configuration files from and to the WAS repository database in the Advanced Edition. It can be used to set up the configuration manually, so that a single configuration file can be imported into WebSphere and we avoid having to perform a long list of tasks in the console.
D:\WebSphere\AppServer\bin>xmlconfig -?
Illegal command line:Odd number of arguments specified.
java com.ibm.websphere.xmlconfig.XMLConfig
{ [( -import <xml data file> ) ||
( -export <xml output file> [-partial <xml data file>] )]
-adminNodeName <primary node name>
[ -nameServiceHost <host name> [ -nameServicePort <port number> ]]
[-traceString <trace spec> [-traceFile <file name>]]
[-substitute <"key1=value1[;key2=value2[...]]">]}
In input xml file, the key(s) should appear as $key$ for substitution.
XMLConfig is also accessible as an option from the Admin Console, in Files ->
Import from XML/Export to XML.
Some of the actions (start, stop) only apply to some resources (for example, you
can start an application server, but not a virtual host), and others apply to specific
operations (for example, export applies only to the partial export operation).
WSCP
WSCP stands for WebSphere Control Program, and it is a command-line and
scripting interface for administering resources in WebSphere Application Server
Advanced Edition.
All console tasks can be performed through WSCP, using commands or scripts.
For example, to emulate a wizard, there is no specific command, but a script can
be written to perform the same task.
In a unit testing environment, the administrative tasks are likely to be few, so we could use only the console, but WSCP provides ways to automate these administrative tasks. For example, if we have to install and uninstall our application often to incorporate fixes or new features, it might be appropriate to write a WSCP script that automates the task, so that we do not need to run through the application installation wizard every time.
D:\WebSphereSSE\AppServer\bin>launchClient
IBM WebSphere Application Server, Release 4.0
J2EE Application Client Tool, Version 1.0
Copyright IBM Corp., 1997-2001
where the -CC properties are for use by the Client Container:
-CCverbose = <true|false> Use this option to display additional
informational messages.
-CCjar = The path/name of the jar file within the ear
file that contains the application you wish to
launch. This argument is only necessary when
you have multiple client application jar files
in your ear file.
-CCBootstrapHost = The name of the host server you wish to connect to
initially. Format: your.server.ofchoice.com
-CCBootstrapPort = The server port number to use.
-CCtrace = <true|false> Use this option to have WebSphere
write debug trace information to a file. You may
need this information when reporting a problem to
IBM Service.
-CCtracefile = The name of the file to write trace information
to.
-CCpropfile = Name of a Properties file containing launchClient
specific properties.
-CCinitonly = <true|false> This option is intended for ActiveX
applications to initialize the Application Client
runtime without launching the client application.
where "app args" are for use by the client application and are ignored by
WebSphere.
D:\WebSphereSSE\AppServer\bin>launchClient ..\installableapps\piggybank.ear
IBM WebSphere Application Server, Release 4.0
J2EE Application Client Tool, Version 1.0
Copyright IBM Corp., 1997-2001
Select an option:
Choice (1-6,0)
Version 9.1 of the Distributed Debugger and OLT software is required to work
with WebSphere Version 4.0—this is the first version that supports the Version
1.3 Java virtual machine (JVM) used by this release of WebSphere.
First, let’s take a look at the user interface of the debugger (Figure 17-1).
The main pane, Debug, shows several frames with different information:
► Threads—a list of all the currently running threads is shown. Threads are
grouped by application.
► Variable—this panel shows a list of the variables visible at the current
debugging stage.
► Value—displays the values of the variables. From this panel, we can change
these values to alter the flow of execution.
The Breakpoints pane shows a list of the breakpoints and displays the source
code where the breakpoint has been set.
The Exceptions pane shows a list of exceptions recognized by the VisualAge for Java runtime. The debugger stops when the selected exception is thrown, whether it is an uncaught exception or one handled later in a catch or finally block.
With the debugger it is possible to fix code errors while debugging, without
having to restart the application. Changes to a method mean that only that
method is recompiled (incremental compilation).
This and other settings can be configured through the context menu Modify
(focusing on the breakpoint) for an existing breakpoint, or by selecting
Breakpoint from the context menu of any source line for a new breakpoint.
Here we can configure a conditional breakpoint (that is, one that only opens the debugger if a certain condition evaluates to the boolean value true), using the On expression option. For conditions involving looping parameters, we can use the On iteration option, and set up the iteration number where we want the debugger to be triggered.
To make the debugger open when hitting a breakpoint, we use the value true as the expression. If we use the value false, any message we include as output is displayed in the console, but the debugger does not halt at the breakpoint. Instead of using these literal values directly, we can use conditional expressions that, depending on whether they evaluate to true or false, cause the debugger to halt at the breakpoint or not.
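For example, in a hypothetical loop such as the following sketch, a breakpoint set on the accumulation line with the On expression condition i == 50 (or with On iteration set to 50) opens the debugger only on the fiftieth pass:

public class BreakpointSketch {
   public static void main(String[] args) {
      int total = 0;
      for (int i = 0; i < 100; i++) {
         // Set a breakpoint on the next line with the condition i == 50
         // (On expression), or use On iteration with the value 50
         total += i;
      }
      System.out.println(total);
   }
}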
The Modify window also allows us to set breakpoints in specific threads. If we are
running several threads of the same program, we might not want to have the
same breakpoint set in all of them. This window lets us select the thread for
which the breakpoint will be enabled.
When the debugger hits a breakpoint, we have several options to continue with
the program execution (toolbar buttons or Selected menu):
Step into—the debugger steps into the next executable statement and
halts at its first line.
Step over—the debugger steps to the next statement.
Run to return—skip to the end of the method and go back to the point
where it was called (or stepped into).
Resume—the program continues to be executed until it hits the next
breakpoint (or until the end if there are none).
Suspend—halt a running thread (this option is not available when a
thread is stopped at a breakpoint).
Terminate—the execution of the thread is terminated (there is the
option to stop all the threads for the current program).
Run to cursor—the debugger runs the program up to the point where
the cursor has been placed. After setting the cursor, select Selected ->
Run to Cursor.
Figure 17-4 shows the Exceptions pane when sorting the exceptions by
hierarchy.
We set breakpoints in external class files through the Breakpoints pane of the
debugger, selecting Methods -> External .class file breakpoints.
We can select the file from a directory or from a JAR/ZIP file. We can set
breakpoints when entering the methods of the selected class, but if we want to
set breakpoints anywhere in the code, we place the corresponding source file in
the same location as the class file (in the same directory or JAR/ZIP file) or we
specify the location of the source in the debug class path (Figure 17-5).
Inspecting data
We can inspect the values of variables through the Variables and Values panes
of the debugger. The values shown are at the current execution point, so if we
step through the code, we can see the values change. We can also change the
values through the Value pane.
The option Inspect acts in the same way, though it opens a new window
containing all the variable’s data.
To run a piece of code in the scrapbook, we simply copy it into a new page, select the fragment of code we want to run and hit the Run button. If we also want to debug it, we click on the Debug button.
It is not possible to set breakpoints in the scrapbook, but we can debug the code fragment step by step by adding the following statement at the beginning of the page:
com.ibm.uvm.tools.DebugSupport.halt();
The execution of the halt method causes the debugger to be opened at the first statement of the page. Then we can debug the code fragment as we would any other Java code. The thread associated with the snippet we are executing is shown in the debugger with the name of the scrapbook page that contains it (Figure 17-7).
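A scrapbook page prepared for step-by-step debugging might therefore look like the following sketch; the code after the halt call is just an arbitrary example:

// Opens the debugger at the first statement of the page
com.ibm.uvm.tools.DebugSupport.halt();

// Arbitrary example code to step through
java.util.Vector names = new java.util.Vector();
names.addElement("PiggyBank");
names.addElement("ITSO");
System.out.println(names);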
The debugger has another feature that allows us to debug code fragments: the Evaluation Area. It lets us run any Java code against the currently selected object (as if we wrote the code in the Source pane, or in the Value pane if we have selected the variable this). We can use this feature, for example, to set or change parameters related to the current object (such as the look and feel for a Swing object, the class path, or environment variables).
The way to execute code in the Evaluation Area is similar to the scrapbook: we
select the piece of code and run it (but this time it is related to an object in the
debugger window, for example, this). If we want to trigger the debugger before
executing the code snippet, we include a call to DebugSupport.halt(). Again, the
thread corresponding to the snippet is displayed in the debugger’s Threads
window (Figure 17-8).
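For example, assuming the currently selected object (this) is a Swing component such as the PiggyBank client frame, a fragment like the following sketch could be run in the Evaluation Area to switch its look and feel; the choice of look and feel class is arbitrary:

// Run against the currently selected object (this), here assumed to be
// a Swing component such as the PiggyBank client frame
try {
   javax.swing.UIManager.setLookAndFeel(
      "javax.swing.plaf.metal.MetalLookAndFeel");
   javax.swing.SwingUtilities.updateComponentTreeUI((java.awt.Component) this);
} catch (Exception e) {
   e.printStackTrace();
}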
Tip: Select Window -> Flip Orientation to get this look of the debugger.
The Distributed Debugger and OLT installation code is included on the product
CD with WebSphere Application Server, VisualAge for Java and WebSphere
Studio. In the examples in this chapter we installed the Distributed Debugger and
OLT code in D:\IBMDebug.
To use OLT in combination with the Distributed Debugger, we select either the
Debug only or the Trace and Debug modes.
Both OLT and the Distributed Debugger allow us to trace and debug local or
remote applications—the remote machine does not have to be running the same
operating system as the client. We now describe how to set up the debugger to
connect to a running Java Virtual Machine.
The first task has to be performed prior to the code deployment and installation in the server, but the other two are specified as administration options. Because JSPs are compiled in the application server, specifying the debug option for the JVM ensures that the compiled files contain the debug information needed.
Attention: These extended JVM options are specific to the virtual machine
implementation—in this case we describe the IBM virtual machine that runs on
Windows. You may find that options available on other platforms differ.
WebSphere on Windows sets these options when you enable the debugger.
The Java 2 Platform incorporates new debugging support with the Java Platform Debugger Architecture (JPDA). JDWP, the Java Debug Wire Protocol, is the protocol used for communication between the debugger’s JVM and the application’s JVM.
The options for -Xrunjdwp that we use in this example are the following:
transport Name of the transport used to connect to the debugger’s JVM
(dt_socket in our case).
server The default value n indicates that the server will attach to the
debug engine at the specified address (or at the automatically
generated address if this parameter is not specified). Specifying a
value of y means that the server will listen for a debug engine to
attach at the address specified.
suspend The value of y (default) indicates that the JVM is suspended before
the main class is loaded. The user then sets a deferred breakpoint
where to stop, and runs the application to that breakpoint. The
value of n indicates that the JVM can proceed with the execution of
the program before the debug engine is attached.
address The port number where the debug engine will attach to the server.
More information about JPDA and its associated interfaces is available at Sun’s Web site:
http://java.sun.com/products/jpda/readme.html
Figure 17-9 Navigating to the Default Server application server in the console
In the Object Level Trace Service dialog check the box to enable OLT. Enter the
host name of the machine where the OLT server is running in the OLT server
host field—typically this is the machine where you collect the trace and run the
OLT and debugger GUI. The default port number of 2102 should normally not be
changed, unless the port is already being used for another purpose.
► After starting the application server with these options, you have to start the
debugger and attach to the JVM (see “Attaching the debugger to the JVM” on
page 496).
For users of the IBM Distributed Debugger, we recommend following the steps in “Enabling the Distributed Debugger” on page 479. We did not test debugging with another JPDA debugger.
Figure 17-13 shows the command line options available when issuing the AEs
startServer command. The options related to OLT and debugging are
highlighted.
Usage:
startServer
[ -configFile <config file name> ]
[ -nodeName <node name> ]
[ -serverName <server name> ]
[ ( -oltEnable | -oltenable ) ]
[ ( -oltPort | -port ) <OLT port number> ]
[ ( -oltHost | -host ) <OLT hostname> ]
[ ( -debugEnable | -debug ) ]
[ -jdwpPort <JDWP port number> ]
[ ( -debugSource | -debug_cp ) <JDWP souce path> ]
[ ( -serverTrace | -traceString ) <server trace string> ]
[ ( -serverTraceFile | -traceFile ) <server trace file name> ]
[ -script [<script file name>]
[ -targetOS <operating system name> ] ]
[ -noExecute ]
[ -usage ]
[ -help ]
[ -verbose ]
This would be the typical usage of this command, accepting the defaults for both
OLT and debugging, with the OLT server code running on the same machine as
AEs—likely to be the case if you are a developer debugging code on your own
machine. Specify the -host option if you run the OLT and debugger user
interface on another machine.
The OLT consists of a GUI client and a trace server. When we configure
WebSphere to use OLT, we specify the hostname and port for the OLT server.
WebSphere connects to the OLT server and sends it application trace events.
More than one application server can connect to the same OLT server at the
same time, enabling the OLT server to collate trace from multiple servers.
The OLT GUI also connects to the server—it takes the collected trace events and
presents them in a graphical display for analysis. We start the OLT GUI
(Figure 17-14) and the local OLT server using this command:
D:\IBMDebug\bin\olt.exe
The debug modes operate OLT in conjunction with the Distributed Debugger, so
that breakpoints set in OLT trigger the debugger, and we can see at the same
time the flow of the application and the possible code errors.
Figure 17-15 shows the OLT client settings (File -> Preferences) when the client is installed on the same machine as the OLT server. It is possible to have distributed configurations where the application server, the application client, the OLT server and the OLT client are running on different machines, though it is more common to have the OLT server and client on the same machine.
Each node represents a method call, and the arrows indicate the flow of the
program.
A trace line is a horizontal line connecting events running under the same execution thread. Each trace line represents one component (servlet, JSP, client application) running in the application server. Events can be method calls, returns from method calls, or the start or end of a process.
The main elements of the trace window are shown in Figure 17-17. For more
information about the graphical representation, check the product
documentation.
The status lines at the bottom of the window provide information identifying the
location of the selected event and the current event:
► The selected event (highlighted green by default) is the last event clicked with
the left mouse button.
► The current event is the event that the mouse pointer is currently positioned
over. As you move your pointer, the current event changes.
The display properties can be changed through File -> Preferences -> OLT -> Display. This window also allows us to enable Performance Analysis.
Performance Analysis lets us monitor the time between any two calls. By setting up a maximum time, we can check whether a call takes longer than the established maximum, so that we can detect possible bottlenecks or slow functions. Figure 17-20 shows how to set the time intervals.
Figure 17-21 Viewing the performance analysis data in the traces window
For remote applications, we specify the fully qualified network name of the
machine where the OLT server is installed.
See details about setting up the application server for OLT and debugging in
section “Enabling debugging support in WebSphere Application Server” on
page 477.
The Breakpoints menu also provides the option to set breakpoints in any object
method that is part of the trace. Figure 17-22 shows an example of how to set a
method breakpoint using this feature.
The option List Method Breakpoints allows us to enable, disable or delete any of
the breakpoints we have set.
With OLT, method breakpoints are the only type we can use, but more options are available when using the Distributed Debugger (which is more appropriate for in-depth debugging).
Several options are available through a command line startup (Figure 17-23) and
we provide more details on some of the options in the following sections.
Source pane
The Source pane displays the source code for the program being debugged.
When we are debugging an application, the debugger searches for the source
code and prompts us to specify the location if it cannot find the source code
(Figure 17-25).
No source code is displayed if we have not compiled our classes with debugging information.
It is possible to specify to the Distributed Debugger where to look for the source
code (Figure 17-26).
You can set the source search path either before you launch the program from
the Load Program dialog by clicking on the Advanced button, or while you are
debugging the program from the Source menu.
Tip: You can also specify the debugger source path using the DER_DBG_PATH
environment variable. If you include this variable in your environment you can
avoid having to set the source path each time you restart the debugger. You
must set the variable before starting the debugger.
Breakpoints pane
The Breakpoints pane displays a view of the breakpoints we have set in the
application we are debugging. In this pane we can add new breakpoints or
modify the properties of the existing ones.
We can also see the properties of each breakpoint: if it is enabled, or for which
thread it is enabled (Figure 17-27).
Packages pane
The Packages pane displays a list of the packages used by the application. The
<default> package includes the JSPs.
Stacks pane
The Stacks pane provides a view of the stack in each thread of the program we
are debugging.
Monitors pane
The Monitors pane (Figure 17-29) shows a list of the variables and expressions
that we have selected to monitor. We can enable, disable or delete the monitored
elements through this pane (right button menu). Options concerning all the
monitored elements are available through the Monitors menu.
This pane is useful when we want to monitor global variables throughout the debugging process. To see the current value of a variable, we suggest activating the Tool Tip Evaluation for variables, either in the Source menu or through File -> Preferences -> Debug (Figure 17-30).
Figure 17-31 Using the tool tip evaluation on the source pane
► The -a option indicates the attachment, and the 0 indicates that it is made to a
JVM.
► If we are attaching to a local JVM (the application server and the debugger’s daemon and client are on the same machine), we use localhost as the host name.
► If we are attaching to a remote JVM but the debugger’s daemon and client are on the same machine, we use the same command, but specifying the name of the machine where the application is running as the host name.
At this point you can run the application. See “Working with breakpoints” on
page 499 on how to set breakpoints to stop the application. You can also set
deferred breakpoints in classes that have not been loaded yet.
debugger options:
[-help] [-multi] [-qquiet] [-qfilter=<filter file>] [-lang=<lang>]
[-qport=<service port>] [-jvmargs=<args>]
where:
-help help for command
-multi allows connections from multiple front ends
-qquiet suppresses irmtdbgj output
<filter file> file containing the list of packages not to be debugged
<lang> is the console locale (eg. en_US, jp_JP)
<service port> is the port on which the engine will listen (default 8000)
<args> are the arguments passed to the JVM which will run the
application to be debugged
After starting the daemon we start the interface connecting to the daemon
(Figure 17-36). The default port for connecting to the daemon is 8000.
You would also use this command to attach to the WebSphere Application Server
when using the Distributed Debugger without OLT. The qport would be the
address specified in the -Xrunjdwp options (Figure 17-12 on page 481).
Attention: When using the Distributed Debugger, if you click the Terminate button (or select Debug -> Terminate), you do not stop just the current thread, but the whole JVM—in the case of a WebSphere application this means the WebSphere Application Server itself is stopped.
To stop debugging and continue with the normal flow of the application, we use
the option Detach Program. We can always reattach to the JVM at any time by
activating a breakpoint or the Step-by-step debug mode in OLT.
It is possible to set line breakpoints by double clicking on the line number in the
Source pane. The breakpoint information is automatically added to the
Breakpoints pane.
When creating a breakpoint in the Line Breakpoint dialog (Breakpoints -> Set
Line or Set Method), you can create a breakpoint in a class that is not yet loaded
by selecting the Defer Breakpoint check box. The breakpoint is enabled when
that DLL or package is loaded, and then it behaves as a normal breakpoint.
When stopped at a breakpoint, the options available for continuing with a step-by-step execution are the following:
Step over—to skip over the current statement
Step return or step out—to exit the current method and go back to
the calling method
Only the Step debug option is different from the debug options available with
VisualAge for Java.
If we want to add other classes to the base class list (so that the debugger
doesn’t step into them during the execution), we can add an option to the
DEBUG_OPTIONS when starting the application server (that would be in the
startServer.bat file for AEs or in the JVM settings for the corresponding
application server in AE’s console):
-qfilter=%WAS_HOME%\bin\debug.lst
Where debug.lst is a plain text file containing the base class list. An example is
shown in Figure 17-38.
java.*
javax.*
sun.*
com.sun.*
org.omg.*
org.xml.*
org.w3c.*
com.ibm.som.*
com.ibm.CORBA.*
com.ibm.debug.*
com.ibm.IExtendedNaming.*
com.ibm.IExtendedLifeCycle.*
com.ibm.CBCUtil.*
com.ibm.IManagedClient.*
com.ibm.IManagedCollections.*
com.ibm.ISessions.*
com.ibm.IQueryManagedClient.*
com.ibm.IExtendedQuery.*
com.ibm.ICollectionBase.*
Exceptions
The Distributed Debugger allows us to select from a list of recognized exceptions
that stop the execution of the program if thrown (they can be uncaught
exceptions or exceptions handled in a catch or finally block).
We select the exceptions to be monitored in File -> Preferences -> Debug ->
ProcessName -> Exception Filter Preferences Settings (Figure 17-39).
When the debugger encounters an exception, it displays the exception name and points to the line of code where it was thrown in the Source pane, if the source code is available.
For example, if we query a customer number that is not in the database, the
program throws a NullPointerException when trying to access retrieved data
(Figure 17-40).
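The failing code corresponds roughly to the following sketch, a hypothetical simplification of the PiggyBank query logic rather than the actual application code:

public class QuerySketch {
   // Hypothetical simplification: lookupName() returns null when the
   // customer number is not found in the database
   private static String lookupName(int customerNumber) {
      return null; // simulates a customer that is not in the database
   }

   public static void main(String[] args) {
      String name = lookupName(9999);
      // Throws a NullPointerException because the retrieved data is used
      // without first checking for null
      System.out.println(name.toUpperCase());
   }
}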
With Step exception, the debugger stops at the catch block handling the
exception (if any), or at the method that started the JVM thread if it is an
uncaught exception.
Run exception continues the execution of the program, stopping at any catch or
finally blocks (as Step exception), if any. For an uncaught exception, the
debugger itself handles it (writing the data to a command window).
Inspecting data
If we want to keep track of a variable value, the Distributed Debugger gives us
the possibility of adding the variable to the Monitors pane (see Figure 17-29 on
page 495), so that we can keep control of its value during the execution process.
We can add variables to the Monitors pane using the Monitors menu, or by double clicking on the selected variable declaration (we have to enable this option in the Preferences window: Debug -> Add to program monitor on double click). We can change the values of these monitored variables in the Monitors pane (in a similar way to other non-monitored variables in the Locals pane).
The tools.jar file contains the JPDA classes and is passed using the -Xbootclasspath option.
Other JVM options for the program, such as class path entries, can be included along with these options. When executing the program, we get this output in the command line:
Listening for transport dt_socket at address: 2121
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=portnumber
Once we have this port number, we can connect to the running JVM from the
debugger’s interface or from the command line. An example of connecting
through the interface is shown in Figure 17-44.
To perform the attachment through the command line we use the command
shown in Figure 17-45.
For remote attachment, we start the debugger daemon listening on the JVM port,
as we have described previously (Figure 17-46).
The hostname is the TCP/IP name or address of the machine where the debugger daemon is running (if all three hosts are the same, we are performing local debugging, so we use the commands described before for this purpose).
If we are using WAS 4.0, we must use the launchClient command line program to launch our J2EE client application, instead of launching it directly with the Java runtime. We can find this utility program in
D:\WebSphere\AppServer\bin\launchclient.bat
@echo off
REM Usage: launchClient [<ear-file> | -help | -?]
setlocal
call "%~dp0setupCmdLine.bat"
set NAMING_FACTORY=com.ibm.websphere.naming.WsnInitialContextFactory
endlocal
Figure 17-48 Editing the launchClient.bat file to include JVM port information
Then, when we launch the application, we get the same message about the port
number in the command line (Figure 17-49).
D:\WebSphereSSE\AppServer\bin>launchClient
..\installedapps\piggybank-swing.ear -CCjar=pb-swingclient.jar
Listening for transport dt_socket at address: 2363
IBM WebSphere Application Server, Release 4.0
J2EE Application Client Tool, Version 1.0
Copyright IBM Corp., 1997-2001
We take the port information and attach to the JVM locally or remotely in the
same way as before. Then we are ready for debugging.
In the case of WAS 4.0, it is a requirement that we assemble the Web module
before publishing to the server (see Chapter 10, “Development using WebSphere
Studio” on page 237 for details about publishing Web archive files, and
Chapter 15, “Assembling the application” on page 389 for general details about
assembling the application modules).
The first thing we have to do in Studio prior to compiling and publishing the Java files is to set the debug server for each publishing stage (or for the stage where the debuggable code resides). Figure 17-51 shows an example of setting up a debug server for the Test stage.
The Non-debug Publish option compiles and publishes the selected files without debug information. Query Server Status is used to verify that the server is running. We can start the debug server directly with OLT and the debug mode enabled, if we have selected these options when setting up the debug server for the current stage.
When using WAS 4.0 as the application server, we can test the just-published
files only if we assemble them in a Web archive file, or if we are using this Studio
feature to replace old files already deployed in the server.
For example, consider the case where we have already assembled and
deployed the PiggyBank application in WAS. Then, by performing some unit
tests, we discover that several JSPs have to be modified. We can do this in
Studio and then publish them directly on the debug server (though in this case it
is not necessary to compile the JSPs, because the application server compiles a
JSP the next time the JSP is invoked).
We can set up different debug servers depending on the stage the application is in (using WAS 4.0 Advanced Edition, which allows us to have several servers, but not the Single Server Edition, where we can start only one server per machine).
We now illustrate the different methods and their advantages and disadvantages.
We then launch the WebSphere Test Environment (WTE). There are a number of
options related to JSPs and we can perform the debugging tasks in several ways
combining these options.
Later, if desired, we can import the code to the workbench to debug it. When we are performing exhaustive debugging of the JSPs, changing the code frequently to fix errors or to improve details, importing the generated servlet each time grows the VisualAge for Java repository unnecessarily.
The code shown in the debugger window is the generated servlet code
(Figure 17-53). We can set breakpoints and execute step by step as we would do
with any Java class, but the Java code is stored outside of the repository.
The generated servlet code for the JSP can be found in the Workbench in the JSP Page Compile Generated Code project (Figure 17-54).
The generated servlet reads the HTML content of the JSP from a file stored
under the generated code directory
...\IBM WebSphere Test Environment\temp\JSP1_1\default_app\etc\
and sends it back to the browser combined with the appropriate dynamic data.
However, debugging the JSP using the generated code means that we cannot
see the HTML output (which might be interesting in some cases), and the
generated servlet code is difficult to read, except for very simple JSPs.
We can fix small errors in the generated Java servlet for debugging purposes,
but the errors must really be fixed in the JSP source code, and then recompiled.
If we want to check the syntax errors directly in the JSP code, we can use the
JSP Execution Monitor.
In the WTE window, the option Enable monitoring JSP execution launches the
tool when a JSP file is called in the browser. The option Retrieve syntax error
information highlights syntax errors in the JSP source code in the monitoring tool
window.
The monitor displays the JSP, the generated servlet code, and the HTML output
(Figure 17-55). It is possible to change the display options in the View menu.
To set breakpoints in external files (for example, if we want to debug Java code
that is not in the repository: from third parties or externally generated servlets):
► In the breakpoints tab of the debugger window, select Methods -> External
.class files breakpoints, select the class file (from a directory or a JAR file),
and then either Set breakpoints in source or Break on method enter.
► In the case of externally generated servlets, it is more useful to set the
breakpoints directly in the JSP source code (or using the JSP Execution
Monitor) if we do not want to import the code.
With the Distributed Debugger we can debug JSPs as well as any other Java
code included in our Studio project. The steps to set up the debug server have
been described before in “Debugging WebSphere Studio code” on page 508, so
we do not repeat them here.
Let’s suppose we have already published the JSPs to the server, assembled the application and deployed it. We can perform a first test on the JSPs by compiling them in the server at deployment time or in Studio, before publishing. To include debug information in the compiled classes, we have to set up the Java Virtual Machine in the application server to be launched with the debug option (see “Enabling debugging support in WebSphere Application Server” on page 477).
After starting the server, we launch OLT (locally or remotely) and set breakpoints
to begin the debug task.
To control the execution, the two options available are Step Debug (which does the same as Step Over for JSPs) and Run. The JSP variables are not visible in the Locals pane.
When we attempt to step into a JSP tag, the debugger displays whatever debug information was generated by the JSP processor. Variables declared in Java code blocks are accessible and can be added to the Monitors pane.
Because the Java servlet is generated for a JSP, debugging is not quite as easy
as for hand-written servlets. It takes some insight to understand the generated
methods and Java code, and how that code relates to the original HTML code
and the JSP tags. The HTML code is output as text constants that are created
from the JSP source code.
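As a schematic illustration only—this is not the code WebSphere actually
generates—the pattern looks roughly like this: the template HTML becomes string
constants, written around the dynamic values produced by the JSP expressions and
tags.

import java.io.PrintWriter;

public class GeneratedJspSketch {
    // Schematic only: template HTML is emitted as text constants around
    // the dynamic values produced by JSP expressions and tags
    public void writePage(PrintWriter out, String accountNumber) {
        out.print("<html><body><p>Account: ");   // template text from the JSP
        out.print(accountNumber);                // dynamic value from a JSP tag
        out.print("</p></body></html>");         // remaining template text
    }
}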
We then introduce JUnit, an open source framework for creating and running unit
tests, and describe how it can be incorporated into the development process
using examples from our PiggyBank application.
Unit tests are informal tests that are generally executed by the
developers of the application code. They are often quite low-level in nature, and
test the behavior of individual software components such as individual Java
classes, servlets or EJBs.
Because unit tests are usually written and performed by the application
developer, they are often “white-box” in nature, that is to say they are written
using knowledge about the implementation in mind, to test specific code paths,
for example. This is not to say all unit tests have to be written this way—one
common practice is to write the unit tests for a component based on the
component specification before developing the component itself. Both
approaches are valid—when defining your own unit testing policy you may want
to make use of both.
So why do so many developers not write unit tests? In general the simple answer is
because it is too hard, and because nobody forces them to. Writing an effective set
of unit tests for a component is not a trivial undertaking. Given the pressure to
deliver that many developers find themselves subjected to, the temptation to
postpone the creation and execution of unit tests in favor of delivering code fixes
or new functionality is often overwhelming.
We recommend that you take the time to define a unit testing strategy for your
own development projects. A simple set of guidelines and a framework that
makes it easy to develop and execute tests will pay for itself surprisingly quickly.
Tests are easier to write, because a lot of the infrastructure code that you require
to support every test is already available. A testing framework also provides a
facility that makes it easier to run and re-run tests, perhaps via a GUI. The more
often a developer runs tests, the quicker problems can be located and fixed,
because the delta between the code that last passed a unit test and the code that
fails the test is smaller.
JUnit
JUnit is an open source testing framework that is used to develop and execute
unit tests in Java. It was written by Erich Gamma, one of the “Gang of Four” who
wrote the classic book Design Patterns, and Kent Beck, who has also written
extensively about object development and first described the eXtreme
Programming (XP) software development process.
A good starting point for finding information about JUnit on the Web is the JUnit
Web site:
http://www.junit.org/
This site contains documentation and links, as well as a free download that
includes both the JUnit source and compiled code.
The rest of this chapter describes how we used JUnit to create and run unit tests
for components of our example PiggyBank application, including a discussion
about how to test Enterprise JavaBean (EJB) components. We also demonstrate
how we can use Ant (described in “Using Ant to build a WebSphere application”
on page 197) to automate the execution of test cases in a WebSphere
environment.
While we briefly explain the JUnit features we take advantage of, we do not
attempt to provide a comprehensive description of all of JUnit’s features—we
recommend you consult the documentation included in the JUnit distribution.
The examples we describe in this section are included in the junit subdirectory
of the additional Web material for this redbook. Appendix A, “Additional material”
on page 557 describes how to obtain the Web material.
The compiled JUnit code is located in the archive junit.jar, in the base
directory. The source code is also available in the archive src.jar in the same
location. Documentation is included in the package in the doc and javadoc
directories.
We decided to import the JUnit source code into our workspace in order to
examine the code and allow us to step through it in the debugger if necessary.
We created a new project named JUnit, and imported the source code into the
new project from the JAR src.jar using the VisualAge File > Import menu
option. We then versioned the project with the version name 3.7 to reflect the
JUnit version number.
There is a more detailed discussion about how to integrate JUnit into VisualAge
for Java on the JUnit Web site:
http://www.junit.org/junit/doc/vaj/vaj.htm
When we developed this chapter the latest version of VisualAge for Java
described on this page was Version 3.5—the information is also applicable to
Version 4.0 however.
We create one test case class for each component we want to test, that is for
each Java class or EJB. We place the test cases in a separate package from the
tested components, created by appending the name tests to the package name:
itso.was4ad.webapp.view <=== component
itso.was4ad.webapp.view.tests <=== test cases for component
Test suites
Collections of test cases can be organized into test suites, managed by the
junit.framework.TestSuite class. JUnit provides tools that allow every test in a
suite to be run in turn and report on the results.
We create an additional AllTests test case class in each package containing test
cases. This class defines a suite method that creates and returns a TestSuite
comprising all of the test cases in the package.
We can use this hierarchy to select the tests we want to run, whether a single
test case, or all of the test cases for a package, a module, or the entire
PiggyBank application.
Expected behavior
This class is intended for use in JSPs—it manages an array of AccountData
objects, typically obtained as a result of a call into the use case layer of the
PiggyBank application.
The class is intended to allow a JSP to use the bean to iterate through each
account in the list using the WebSphere tsx:repeat tag, extracting the
information about the accounts in the list in turn without needing to code any
explicit Java in the page. The page iterates through the elements in the array by
invoking the getNext method, which can be called using the standard
jsp:getProperty tag. When the end of the list is reached, the class throws an
ArrayIndexOutOfBoundsException, which signals to the tsx:repeat code that the
loop should be terminated.
The current item in the list can be reset to the beginning using the reset method.
This is useful where the same data may need to be included in a page twice, for
example while building selection boxes in forms.
In addition to supporting iteration, the bean also performs formatting of the data
for the Web channel, adding a currency symbol to the account balance and
converting the boolean value indicating whether the account is a checking
account into a string representing the account type, checking or savings.
The second two tests in the list are common error situations that we foresee. We
could also choose to test the formatting of data returned by the bean. We
decided not to, however, because we know that the class uses the AccountView
bean to format the data—we implement the formatting tests in the test case for
that class.
At this stage we believe that this set of tests is adequate—of course there is
nothing preventing us from adding more tests later if we decide we need them.
package itso.was4ad.webapp.view.tests;
import itso.was4ad.data.*;
import itso.was4ad.webapp.view.*;
import junit.framework.*;
/**
* JUnit tests for the AccountListView class
*/
public class AccountListViewTests extends TestCase {
/**
* AccountListViewTests constructor
* @param name java.lang.String
*/
public AccountListViewTests(String name) {
super(name);
}
}
The setUp method creates an array in an instance variable data—we add the
variable to the class as follows:
AccountData[] data = null; // Data used by the tests
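A setUp method along these lines creates the test data. This is a sketch only—the
AccountData setter names (setNumber, setBalance, setChecking) are our assumptions
and may differ from the actual PiggyBank code:

/**
 * Set up an array of ten test accounts, numbered from 1000
 * (sketch only -- the AccountData setters are assumed names)
 */
public void setUp() {
    data = new AccountData[10];
    for (int i = 0; i < 10; i++) {
        AccountData account = new AccountData();
        account.setNumber(1000 + i);            // account numbers start at 1000
        account.setBalance(100.00f * (i + 1));  // give each account a balance
        account.setChecking(i % 2 == 0);        // alternate checking and savings
        data[i] = account;
    }
}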
Testing iteration
Our first test is designed to exercise the iteration behavior—this is, after all, the
primary purpose of the class. We create a new method named testIteration,
shown in Figure 18-6.
First of all we create a new instance of the AccountListView class, using the data
prepared by the setUp method. There are ten accounts in this array—we
therefore expect to iterate through the accounts ten times, which we do in a for
loop. We expect the accounts to be returned in the same order that they are
specified in the original array, so we check them using the assertEquals method.
Note: Every get method in our view class returns a String formatted for
insertion into a JSP. We must convert the account number we expect into a
String in order to perform the comparison with the value from the view bean.
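A testIteration method along the lines described might look like the following
sketch; the final check that a further getNext call is rejected at the end of the
list is our addition:

/**
 * Test iteration through the whole account list
 */
public void testIteration() {
    AccountListView list = new AccountListView(data);
    // Ten accounts were prepared by setUp, so we expect ten iterations
    for (int i = 0; i < 10; i++) {
        list.getNext();
        // Every get method returns a String formatted for a JSP
        assertEquals("Iteration incorrect",
                     String.valueOf(1000 + i), list.getNumber());
    }
    // Beyond the end of the list the bean should signal tsx:repeat to stop
    try {
        list.getNext();
        fail("Expected ArrayIndexOutOfBoundsException at end of list");
    } catch (ArrayIndexOutOfBoundsException e) {
        // expected
    }
}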
The assertEquals and fail methods are provided by the JUnit framework. JUnit
provides a number of methods that can be used to assert conditions and fail a
test if the condition is not met. These methods are inherited from the class
junit.framework.Assert, via TestCase, and are summarized in Table 18-1.
All of these methods include an optional String parameter that allows the writer
of a test to provide a brief explanation of why the test failed—this message is
reported along with the failure when the test is executed.
assertSame Assert that two objects refer to the same object. Compares
using ==.
Testing reset
The next test tests the reset behavior. It also uses the array created by the setUp
method, using the data instance variable to create an instance of our
AccountListView class. It then iterates part-way through the list, checking the
account numbers on the way, just to make sure we aren’t stuck on the first item.
We then invoke the reset method, and assert that the next account number
returned by the bean is the number of the first account in the list. If this is the
case the test is complete and we exit the test method normally.
/**
 * Test the reset method
 */
public void testReset() {
    AccountListView list = new AccountListView(data);
    // Iterate part-way through the list, checking the account numbers,
    // to make sure we are not still positioned on the first item
    for (int i = 0; i < 5; i++) {
        list.getNext();
        assertEquals("Iteration incorrect",
                     String.valueOf(1000 + i), list.getNumber());
    }
    // Now reset the view and make sure we're back at the beginning
    list.reset();
    list.getNext();
    assertEquals("Reset incorrect", "1000", list.getNumber());
}
Testing the default constructor
The next test checks the behavior of the bean when it is created using the default
constructor, that is, without any account data. This could happen if a user
attempts to access a JSP page directly, instead of via the appropriate servlet, for
example. Under these circumstances we would like the page to behave gracefully,
rather than fail with a NullPointerException. The code for the
testDefaultConstructor method is shown in Figure 18-8.
/**
* Test the behavior with the default constructor
*/
public void testDefaultConstructor() {
AccountListView list = new AccountListView();
try {
list.getNext();
list.getCustomerID();
fail("Expected ArrayIndexOutOfBoundsException");
} catch (ArrayIndexOutOfBoundsException e) {
// expected
}
}
We expect the class to allow an instance of the bean to be created, but to throw
an ArrayIndexOutOfBounds exception when an attempt is made to use it—this
immediately terminates any enclosing tsx:repeat loop.
Our test code creates a new instance of the class using the default constructor,
then attempts to use it. If we catch the expected exception, all is well. If we do not
catch the exception, we cause the test to fail by invoking the JUnit fail method.
Testing use without an initial getNext
The final test checks what happens if a get method is called before getNext has
been invoked. The behavior under these circumstances is undefined by our design—we
did not consider this sequence of events until we started thinking about how to
break our class. This usefully illustrates one of the benefits of our unit testing
strategy—sooner or later a JSP developer is likely to make this mistake, but at
least now we know we can cope with it.
/**
* Test the behavior without an initial getNext()
*/
public void testNoGetNext() {
AccountListView list = new AccountListView(data);
try {
list.getCustomerID();
fail("Expected ArrayIndexOutOfBoundsException");
} catch (ArrayIndexOutOfBoundsException e) {
// expected
}
}
If you are using Ant to build the application as described in “Using Ant to build a
WebSphere application” on page 197, for example, you need to add the JUnit
JAR file to the class path specified in the Web application build.xml build file, as
Figure 18-10 illustrates.
<path id="webapp.classpath">
<pathelement location="${global.was.dir}/lib/j2ee.jar"/>
<pathelement location="${global.junit.jar}"/>
<pathelement path="${global.build.dir}/common"/>
<pathelement path="${global.build.dir}/usecase"/>
</path>
Figure 18-10 Updating the Web application build file to build the test case class
global.junit.dir=D:/junit3.7
global.junit.jarfile=junit.jar
global.junit.jar=${global.junit.dir}/${global.junit.jarfile}
Figure 18-11 Updating the Ant global.properties file to specify JUnit file locations
If you are developing using VisualAge for Java and imported the JUnit code into
the workspace as described in “Installing JUnit in VisualAge for Java” on
page 521, you do not have to make any further changes in order to compile the
code, because VisualAge for Java locates the JUnit classes in the workspace.
Before we can run either tool, we must first make sure that all the classes we
need are on our class path. In this case, in order to run the tests in the class
AccountListViewTests we need the JUnit code plus the PiggyBank Web
application and common code in our class path.
To run the tools from VisualAge for Java, we must add the projects containing the
code we want to test to the class path for each tool—we can do this by locating
the tool runner class in the VisualAge GUI and selecting Properties from the
context menu of the class. We then select the Class Path tab in the properties
dialog. We then click the Edit button and select the appropriate project
(Figure 18-12).
D:\itso4ad\dev\src>java junit.textui.TestRunner
itso.was4ad.webapp.view.tests.AccountListViewTests
....
Time: 0.03
OK (4 tests)
Figure 18-13 Running the text-based test runner from the command line
Each dot (.) output by the tool represents the start of a test. We have four tests in
our test case, so there are four dots. Once all the tests are complete the test
runner tells us how long they took, and summarizes the results—in this case all
of our tests were successful.
When we execute the Swing test runner, the GUI is displayed and the tests in our
test case are executed.
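For example, the Swing runner can be started for our test case with a command along
these lines:
java junit.swingui.TestRunner itso.was4ad.webapp.view.tests.AccountListViewTests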
In Figure 18-14 we can see the results of our tests on the AccountListView class.
The progress bar is green, which indicates that all of the tests ran
successfully—any failures would have been indicated by a red bar. This
observation is confirmed by the result summary—it reports that of the four tests
run, none failed, and none resulted in an error.
Note: The JUnit test runner is also able to reload classes—it achieves this
using a specialized class loader. While this works well for simple tests, as we
will see later on, we are unable to use this facility when testing code running in
a WebSphere container, because WebSphere uses its own specialized class
loaders.
Failed tests
So far we have not seen any failed tests, although we can assure you this was
not the case when we were initially developing this chapter. At this point we
introduce an intentional defect into our code in order to demonstrate this
scenario.
/**
* AccountListView default constructor
*/
public AccountListView() {
//this(new AccountData[0]);
}
We recompile this class—in VisualAge this simply involves saving the modified
method—and re-run our tests.
The output from the text-based test runner is shown in Figure 18-16. It still shows
four dots, because four tests were attempted. After the last dot, however, we see
an E. This character represents an error—if the test had failed in one of the
assertion methods inherited from Assert, we would have seen an F instead.
After the tests complete we get a summary of the failures—in this case there is
just one—it tells us there was a NullPointerException when we tested the
default constructor.
FAILURES!!!
Tests run: 4, Failures: 0, Errors: 1
Figure 18-17 shows the same failure in the Swing-based GUI—the progress bar
is red (this can be seen more clearly in the PDF version of this book), and the
summary reports a single error. The description panel at the bottom of the
window shows the NullPointerException relating to the failure highlighted in the
center panel.
Figure 18-18 Displaying the test hierarchy in the Swing-based test runner
package itso.was4ad.webapp.view.tests;
import junit.framework.*;
/**
* Runs all of the tests in this package
*/
public class AllTests extends TestCase {
/**
* AllTests constructor
* @param name java.lang.String
*/
public AllTests(String name) {
super(name);
}
}
The next step is to define a suite method that returns a TestSuite object
containing all of the tests in the suite. We create a new instance of TestSuite,
providing a descriptive name for the suite in the constructor (Figure 18-20).
/**
* Returns a test suite containing all tests in this package
* @return junit.framework.Test
*/
public static Test suite() {
TestSuite suite = new TestSuite("All web application view tests");
We then add tests to the suite using the addTestSuite method. This method
takes a Class object as a parameter and adds all the methods in the class with
names that begin with test to the suite—this is the reason why we followed this
convention when we created the methods.
We also have tests that exercise the CustomerView and AccountView classes in
the same package. The tests in all three test case classes are added to the test
suite, which is then returned as the result of the suite method. Any new test
cases we create in this package are added to the suite by adding a
corresponding addTestSuite call to the suite method.
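Putting these pieces together, the complete suite method looks something like the
sketch below. The CustomerViewTests and AccountViewTests class names follow the
naming convention described earlier, but are our assumptions:

/**
 * Returns a test suite containing all tests in this package
 * @return junit.framework.Test
 */
public static Test suite() {
    TestSuite suite = new TestSuite("All web application view tests");
    suite.addTestSuite(AccountListViewTests.class);
    suite.addTestSuite(CustomerViewTests.class);   // assumed class name
    suite.addTestSuite(AccountViewTests.class);    // assumed class name
    return suite;
}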
For convenience we also create a main method in the AllTests class. This is a
simple shortcut that allows us to execute the tests by invoking the class directly,
rather than passing the class name to a test runner (Figure 18-21).
/**
* Run this test suite
* @param args java.lang.String[]
*/
public static void main(String[] args) {
junit.textui.TestRunner.run(suite());
}
D:\itso4ad\dev\src>java itso.was4ad.webapp.view.tests.AllTests
........
Time: 0.03
OK (8 tests)
Figure 18-22 Using the main method to run the test suite
We can also run the tests in the Swing GUI by issuing the command:
java junit.swingui.TestRunner itso.was4ad.webapp.view.tests.AllTests
Figure 18-23 Running a test suite in the Swing GUI test runner
We create similar AllTests classes at the higher levels of the test hierarchy, one
for each module and one for the complete PiggyBank application. These classes are
almost identical; however, instead of adding tests to the suite using addTestSuite,
they use the addTest method (remember that TestSuite implements the Test
interface), passing the result of each package-level AllTests.suite method as a
parameter, for example:
suite.addTest(itso.was4ad.webapp.view.tests.AllTests.suite());
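A module-level suite method might therefore look like the following sketch; only
the view package is named, because it is the only package-level AllTests class
shown here:

/**
 * Aggregate the package-level suites for the Web application module
 */
public static Test suite() {
    TestSuite suite = new TestSuite("All PiggyBank Web application tests");
    suite.addTest(itso.was4ad.webapp.view.tests.AllTests.suite());
    // ...one addTest call for each package-level AllTests in the module
    return suite;
}

The next test case class exercises the PiggyBank Account EJB. It begins with the
usual constructor and a constant holding the JNDI name used to look up the account
EJB home: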
package itso.was4ad.ejb.account.tests;
import junit.framework.*;
import itso.was4ad.helpers.HomeHelper;
import itso.was4ad.ejb.account.*;
import itso.was4ad.data.*;
import itso.was4ad.exception.*;
import javax.naming.*;
import javax.ejb.*;
import javax.rmi.*;
import java.rmi.*;
import java.util.*;
/**
* JUnit tests for the Account EJB
*/
public class AccountTests extends TestCase {
private static final String ACCOUNT_HOME = "java:comp/env/ejb/Account";
/**
* AccountTests constructor
* @param name java.lang.String
*/
public AccountTests(String name) {
super(name);
}
}
Our tests exercise the Account EJB from the client's perspective, through its home
and remote interfaces. This is not the only choice, however—we could, for example,
test the class from the container's perspective, where the test creates an instance
of the EJB implementation class directly and invokes EJB life-cycle methods such as
ejbActivate directly.
This requires a little more effort but may pay dividends. With CMP beans you
could eliminate the need for a persistent store completely, setting instance
variables yourself using the Java reflection API. With bean-managed persistence
(BMP) beans, on the other hand, you may want to exercise the persistence code
more thoroughly, running tests that check the data written to the persistent store
directly instead of through the EJB.
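As an illustration of this reflection-based approach, a helper like the one below
can set a container-managed field on a bean instance directly. This is a generic
sketch; the FieldHelper class and the example field name are ours, not part of the
PiggyBank code:

import java.lang.reflect.Field;

public class FieldHelper {
    /**
     * Set a (possibly non-public) container-managed field on an EJB
     * implementation instance directly, bypassing the persistent store.
     * For example: FieldHelper.setField(accountBean, "balance", new Float(100));
     */
    public static void setField(Object bean, String fieldName, Object value)
            throws NoSuchFieldException, IllegalAccessException {
        Field field = bean.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);   // allow access to non-public fields
        field.set(bean, value);
    }
}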
Because all our tests have to locate the account EJB home, we created a simple
helper method in our test case class that test methods can use to easily obtain
the home interface. The code for the getAccountHome method is shown in
Figure 18-25.
/**
* Helper method that locates the Account EJB's home interface
* @return itso.was4ad.ejb.account.AccountHome
* @exception javax.naming.NamingException
*/
private static AccountHome getAccountHome() throws NamingException {
InitialContext context = new InitialContext();
return (AccountHome) PortableRemoteObject.narrow(
context.lookup(ACCOUNT_HOME),
AccountHome.class);
}
/**
* Set up some data used by the tests
*/
public void setUp() throws
NamingException, CreateException, RemoteException, InvalidOperation {
We need to take extra care with the tear-down method—it needs to be robust
enough to cope with anything the tests we write might do to the data. We must
take care to ensure that the test data is always removed—if for some reason an
account is not removed the next test will fail as our set-up method, which is
admittedly not particularly robust, will throw a DuplicateKeyException before we
even get to run the test.
Our solution (Figure 18-27), is to attempt to remove any account with a number
in the range from 1000 to 1009. This includes the accounts we create in the
set-up method, but also allows for tests to create their own accounts. If an
account does not exist, we simply ignore it and carry on, because tests may also
remove accounts.
Finally, we must make sure the accounts are empty before we remove them,
because the application enforces a business rule that does not allow us to
remove an account that has a balance.
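A tear-down method along these lines would satisfy these requirements. The
following is a sketch only—the getBalance accessor, the debit call used to empty
the account, and the broad throws clause are our assumptions rather than the code
shown in Figure 18-27:

/**
 * Remove any test accounts in the range 1000 to 1009 (sketch only)
 */
public void tearDown() throws Exception {
    AccountHome home = getAccountHome();
    for (int number = 1000; number <= 1009; number++) {
        try {
            Account account = home.findByPrimaryKey(new AccountKey(number));
            // The application does not allow removal of an account with a
            // balance, so empty the account first (assumed accessor)
            float balance = account.getBalance();
            if (balance > 0) {
                account.debit(balance);
            }
            account.remove();
        } catch (FinderException e) {
            // The account does not exist -- a test may already have removed it
        }
    }
}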
Because we are testing an entity EJB we must test both the home and remote
interfaces. The tests we perform on the home interface are listed in Table 18-2.
Table 18-2 Methods testing the Account EJB home interface
Test method Description
testDebitOverdraw Test a debit that takes more money from the account than is
available—our simple application does not provide an
overdraft facility
The individual test methods do not introduce any new JUnit features, so we only
include a single example here, testDebitOverdraw (Figure 18-28).
/**
* Test debit of more money than is in the account
*/
public void testDebitOverdraw() throws NamingException, FinderException,
RemoteException, BusinessException {
// Locate one of the previously set up accounts
AccountKey key = new AccountKey(1004);
Account account = getAccountHome().findByPrimaryKey(key);
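    // The remainder of the method is a sketch: we assume the business rule
    // rejects the overdrawing debit with a BusinessException, and that debit
    // takes a float amount -- the actual PiggyBank signatures may differ
    try {
        account.debit(1000000.00f);   // far more than the account holds
        fail("Expected the overdrawing debit to be rejected");
    } catch (BusinessException e) {
        // expected -- the debit was refused
    }
}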
Because the JUnit test runners are standalone Java programs, the easiest way
to do this is to create a new J2EE client module and package the tests into it. We
can then run the tests using the WebSphere client container launchClient. We
chose to achieve this by updating the main method of our top-level AllTests class
to start the test runner for us (Figure 18-29).
/**
* Run all the PiggyBank tests
* Usage: itso.was4ad.tests.AllTests [-gui]
* @param args java.lang.String[]
*/
public static void main(String[] args) {
// Check the command line args
if (args.length > 0 && args[0].equals("-gui")) {
// Run the GUI tool - tell it to use the system classloader
junit.swingui.TestRunner runner = new junit.swingui.TestRunner();
runner.setLoading(false);
runner.start(new String[] {AllTests.class.getName()});
} else {
// Run the text version
junit.textui.TestRunner.run(suite());
}
}
We use AAT to create a new client module, specifying the top-level AllTests
class as the main class—this is the only class we want to include in the module,
because it does not belong to any one component. We also define the local EJB
JNDI reference used to locate the account EJB ,and bind it to the same global
JNDI name specified in the bindings for our EJB module.
For simplicity we add the test client module to our single application EAR file.
This allows us to use the class path defined in the test module’s JAR manifest to
include the common, use case, and EJB JAR files in the test client’s class path.
This approach does not work for the Web application, however, because the Web
application classes are located in WEB-INF/classes in the WAR file. We worked
around this by explicitly including all of the Web application classes in the test
client module JAR file. This is less than ideal because now we have two copies of
these classes in our EAR archive. It does not affect the execution of the actual
application code, however, so we decided to live with the situation.
The final EAR file and the code used to create it are included in the additional
material, described in Appendix A, “Additional material” on page 557.
Before we can run the tests, however, we must deploy the application into an
application server and start it. This is described in Chapter 16, “Deploying to the
test environment” on page 431.
Once the application is up and running in WebSphere, we are finally able to start
our tests. We invoke the test runner using the WebSphere launchClient
command. There are now two client modules in the EAR file—the standalone
PiggyBank Swing client and our test module.
If you simply run the standard launchClient command without specifying which
client you want to run, WebSphere appears to always execute the one that is
described first in the enterprise application’s deployment descriptor.
Running our test client module through launchClient, passing the -gui argument
recognized by our AllTests main method, runs all of the unit tests using the Swing
GUI (Figure 18-30).
Figure 18-30 Running all PiggyBank unit tests in the Swing GUI
As you can see, two of our tests failed—it seems our PiggyBank application is
quite happy to allow bank accounts to be credited and debited negative amounts.
Fortunately this is just an example application, so in time-honored fashion, we
leave the correction of our code as an exercise for the reader.
We can re-run all the tests by clicking the uppermost Run button. If we simply
want to rerun one of the failing tests, we select the test in the center panel and
click the lower Run button. This gives us the opportunity to diagnose the problem
using the configurable logging mechanism described in “Message logging” on
page 330 to dynamically turn on debug messages for the offending component.
In this section we describe how we can extend the Ant build scripts we
developed in “Using Ant to build a WebSphere application” on page 197 to
automatically install the application into WebSphere AEs, start the application
server, and run our unit tests. We chose AEs for these examples because AEs is
ideal for use as a unit test environment on a developer’s desktop machine—the
scripts we develop can be used during day-to-day development as well as by the
automatic daily build process.
All of the code we developed for this example is available in the additional
material—see Appendix A, “Additional material” on page 557.
First we update the local properties and path—the class path for the compile
must include all of our subprojects because the AllTests class refers to them all
in order to build a suite of all tests. The changes are shown in Figure 18-31.
Figure 18-31 Local properties and path from the test module build file
The only other significant change we have to make is to the package target. We
must create a JAR file that includes not just the AllTests class and client
meta-data, but also all the compiled classes from the Web application, as
described in “Packaging the tests” on page 545. The resulting package target is
shown in Figure 18-32.
Figure 18-32 Package target from the test module build file
We already have targets that create and install an EAR file, as described in
“Packaging the EAR file” on page 230 and “Installing the EAR file” on page 234.
The start target depends upon the stop, package and install targets—by
executing this target we first stop WebSphere if it is running, rebuild the
application EAR file if necessary, and install it into the application server. Only
then do we actually start the server. This sequence allows a developer to refresh
the unit test environment with the latest code by issuing a single Ant command.
The stop target is very similar, although it does not depend on any targets other
than the standard init target. It uses the WebSphere stopServer command to
stop the application server. An example of the output generated by this target is
shown in Figure 18-35.
stop:
[echo] Stopping WebSphere AEs
[exec] IBM WebSphere Application Server
[exec] Command Line Runtime Utility Program
[exec] Copyright (C) IBM Corporation, 2001
[exec]
[exec] Loading configuration from file.
[exec] Using the specified configuration file:
[exec] D:\WebSphere\AppServer\config\server-cfg.xml
[exec] The diagnostic host name was read as "localhost".
[exec] The diagnostic port was read as "7000".
[exec] Issuing command to stop server.
[exec] The stop server command completed successfully.
[exec] Examine the server log files to verify that the server has stopped.
[echo] WebSphere AEs stopped
Note that we did not make either of the targets that execute the tests depend
upon the start target. There are two primary reasons for this:
► A developer using the JUnit tests to reproduce a problem may not necessarily
want to restart WebSphere every time to re-run the tests.
► The startServer command is asynchronous—when the command completes
successfully, the start of the application server is not complete. If we attempt
to run tests immediately after the start target completes the tests may fail
since the application may not have completed starting up.
The output from the test target is shown in Figure 18-37. As you can see we still
have to fix the debit and credit problem with negative amounts.
init:
[echo] Build of itso4ad started at 2134 on July 19 2001
test:
[echo] Running JUnit tests
[exec] IBM WebSphere Application Server, Release 4.0
[exec] J2EE Application Client Tool, Version 1.0
[exec] Copyright IBM Corp., 1997-2001
[exec]
[exec] WSCL0012I: Processing command line arguments.
[exec] WSCL0013I: Initializing the J2EE Application Client Environment.
[exec] WSCL0035I: Initialization of the J2EE Application Client Environment
has completed.
[exec] WSCL0014I: Invoking the Application Client class itso.was4ad.tests.AllTests
[exec] ...........F.F......
[exec] Time: 12.237
[exec] There were 2 failures:
[exec] 1) testNegativeDebit(itso.was4ad.ejb.account.tests.AccountTests)
junit.framework.AssertionFailedError: Shouldn't allow negative debit
[exec] at itso.was4ad.ejb.account.tests.AccountTests.testNegativeDebit(
AccountTests.java:196)
[exec] at itso.was4ad.tests.AllTests.main(AllTests.java:29)
[exec] at com.ibm.websphere.client.applicationclient.launchClient.
createContainerAndLaunchApp(launchClient.java:430)
[exec] at com.ibm.websphere.client.applicationclient.launchClient.main(
launchClient.java:288)
[exec] at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:63)
[exec] 2) testNegativeCredit(itso.was4ad.ejb.account.tests.AccountTests)
junit.framework.AssertionFailedError: Shouldn't allow negative credit
[exec] at itso.was4ad.ejb.account.tests.AccountTests.testNegativeCredit
(AccountTests.java:180)
[exec] at itso.was4ad.tests.AllTests.main(AllTests.java:29)
[exec] at com.ibm.websphere.client.applicationclient.launchClient.
createContainerAndLaunchApp(launchClient.java:430)
[exec] at com.ibm.websphere.client.applicationclient.launchClient.main(
launchClient.java:288)
[exec] at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:63)
[exec]
[exec] FAILURES!!!
[exec] Tests run: 18, Failures: 2, Errors: 0
[exec]
[echo] JUnit test complete
BUILD SUCCESSFUL
Part 5 Appendixes
Select the Additional materials and open the directory that corresponds with
the redbook form number, SG246134.
The ant subdirectory contains Ant build files and the source code to build the
basic PiggyBank application, as described in Chapter 9, “Development using the
Java 2 Software Development Kit” on page 183. It also contains generated
javadoc documentation and modules for the application.
The log4j subdirectory contains a single source file implementing the Log4J
version of the PiggyBank log wrapper class discussed in “Using Log4J” on
page 354.
The junit subdirectory contains Ant build files and the source code to build the
JUnit version of the application described in Chapter 18, “Automating unit testing
using JUnit” on page 517.
The struts subdirectory contains Ant build files and the source code to build the
Struts version of the PiggyBank application, as described in “Jakarta Struts” on
page 284.
The wsbcc subdirectory contains the source code for the PiggyBank WSBCC
example discussed in “WebSphere Business Components Composer” on
page 303.
The wte subdirectory contains the Web application HTML and JSP files for the
basic, Struts, and WSBCC versions of the PiggyBank for testing in VisualAge for
Java. This includes .webapp files and a default.servlet_engine file for
configuration of the servlet engine with multiple Web applications. Note that
some files may be out of date as compared to the latest Java source code in the
Struts and WSBCC samples.
Tip: Some chapters refer to an ITSO4AD directory for the sample code. You
have to copy relevant portions of the sample code to such a directory to match
the description in the chapters.
The EAR file should install into either WebSphere AE or AEs. Once the
PiggyBank is installed and started, you can access the application as follows:
► Access the Web application by opening a Web browser on the virtual host
name you used to install the application, for example:
http://localhost:9080/
► Start the standalone Swing client by issuing the command:
launchclient <path to piggybank.ear>\piggybank.ear
You can use the Swing client to enter data for the application to use, or enter
sample data by hand using SQL commands.
In the dialog select all four versions of the PiggyBank project (Figure 18-38) and
click OK. Finally uncheck Add most recent edition to workspace and click Finish
to import the example code into your workspace.
To work with one of the PiggyBank versions, add the appropriate project version
to your workspace. Each project version has a comment describing the contents
of the project and listing dependencies on third party software not supplied with
the additional material.
The WSBCC sample code is also included in the repository—import the package
into your repository in the same way, selecting Packages instead of Projects.
Attention: The PiggyBank example EJBs use J2EE features not supported by
VisualAge for Java. As a result, you cannot test the PiggyBank EJB code
inside VisualAge without modifying it. This can be achieved without affecting
any other components, however. See “Developing EJBs in VisualAge for Java”
on page 266 for information on developing EJBs to version 1.1 of the EJB
specification in VisualAge for Java.
If you want to build the JUnit, Struts, or Log4J samples, you must also obtain the
appropriate third party code, as described in the relevant sections.
You must also install WebSphere Application Server, either Advanced Edition
(AE) or Advanced Edition, Single Server (AEs).
Once you have downloaded and installed the software you need, you must edit
the global.properties file (located in the src directory) to specify the locations
of the components Ant requires to build the application.
To build the code, open a command window and change to the src directory. To
rebuild the entire application issue the command:
ant clean package
If you have WebSphere AEs installed, you can build, install and start the
PiggyBank application using the command:
ant start
For more information refer to the appropriate chapter for the component you are
working with.
Note: The Ant examples include database schema and map files for DB2
UDB version 7. If you want to use another database you must create new
schema and map files—see “EJB deployment tool” on page 418.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks”
on page 567.
► Programming J2EE APIs with WebSphere Advanced, SG24-6124
► Enterprise JavaBeans for z/OS and OS/390 CICS Transaction Server V2.1,
SG24-6284
► EJB Development with VisualAge for Java for WebSphere Application Server,
SG24-6144
► Design and Implement Servlets, JSPs, and EJBs for IBM WebSphere
Application Server, SG24-5754
► Programming with VisualAge for Java Version 3.5, SG24-5264
► WebSphere V3.5 Handbook, SG24-6161
► Version 3.5 Self Study Guide: VisualAge for Java and WebSphere Studio,
SG24-6136
► How about Version 3.5? VisualAge for Java and WebSphere Studio Provide
Great New Function, SG24-6131
► Servlet and JSP Programming with IBM WebSphere Studio and VisualAge for
Java, SG24-5755
► Revealed! Architecting Web Access to CICS, SG24-5466
► IMS Version 7 and Java Application Programming, SG24-6123
► Migrating WebLogic Applications to WebSphere Advanced Edition,
SG24-5956
► WebSphere Personalization Solutions Guide, SG24-6214
► User-to-Business Patterns Using WebSphere Advanced and MQSI: Patterns
for e-business Series, SG24-6160
► WebSphere Scalability: WLM and Clustering Using WebSphere Application
Server Advanced Edition, SG24-6153
Other resources
These publications are also relevant as further information sources:
► Enterprise Java Programming with IBM WebSphere. Kyle Brown, et al.
Addison-Wesley Professional, May 2001. ISBN: 0201616173
► Design Patterns: Elements of Reusable Object-Oriented Software. Erich
Gamma, et al. Addison-Wesley Publishing Company, January 1995.
ISBN: 0201633612
► Patterns In Java, Volume 1. Mark Grand. John Wiley & Sons, September
1998. ISBN: 0471258393
► The Rational Unified Process, An Introduction. Philippe Kruchten. 2nd ed.
Addison-Wesley Publishing Company, March 2000. ISBN: 0201707101
Redpieces are Redbooks in progress; not all Redbooks become Redpieces and
sometimes just a few chapters will be published this way. The intent is to get the
information out much quicker than the formal publishing process allows.
Information in this book was developed in conjunction with use of the equipment
specified, and is limited in application to those specific hardware and software
products and levels.
IBM may have patents or pending patent applications covering subject matter in
this document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director of
Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact IBM Corporation, Dept.
600A, Mail Drop 1329, Somers, NY 10589 USA.
The information contained in this document has not been submitted to any formal
IBM test and is distributed AS IS. The use of this information or the
implementation of any of these techniques is a customer responsibility and
depends on the customer's ability to evaluate and integrate them into the
customer's operational environment. While each item may have been reviewed
by IBM for accuracy in a specific situation, there is no guarantee that the same or
similar results will be obtained elsewhere. Customers attempting to adapt these
techniques to their own environments do so at their own risk.
Any pointers in this publication to external Web sites are provided for
convenience only and do not in any manner serve as an endorsement of these
Web sites.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States and/or other
countries.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States and/or other countries.
UNIX is a registered trademark in the United States and other countries licensed
exclusively through The Open Group.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks
owned by SET Secure Electronic Transaction LLC.
Back cover

WebSphere Version 4 Application Development Handbook

Complete guide for WebSphere application development
How to make the best use of available tools
Product experts reveal their secrets

This IBM Redbook provides detailed information on how to develop Web applications
for IBM WebSphere Application Server Version 4 using a variety of application
development tools.

The target audience for this book includes team leaders and developers, who are
setting up a new J2EE development project using WebSphere Application Server and
related tools. It also includes developers with experience of earlier versions of
the WebSphere products, who are looking to migrate to the Version 4 environment.

This book is split into four parts, starting with an introduction, which is
followed by parts presenting topics relating to the high-level development
activities of analysis and design, code, and unit test. A common theme running
through all parts of the book is the use of tooling and automation to improve
productivity and streamline the development process.

In Part 1 we introduce the WebSphere programming model, the application development
tools, and the example application we use in our discussions.

In Part 2 we cover the analysis and design process, from requirements modeling
through object modeling and code generation to the usage of frameworks.

In Part 3 we cover coding and building an application using the Java 2 Software
Development Kit, WebSphere Studio Version 4, and VisualAge for Java Version 4. We
touch on Software Configuration Management using Rational ClearCase and provide
coding guidelines for WebSphere applications. We also cover coding using
frameworks, such as Jakarta Struts and WebSphere Business Components.

In Part 4 we cover application testing from simple unit testing through application
assembly and deployment to debugging and tracing. We also investigate how unit
testing can be automated using JUnit.

In our examples we often refer to the PiggyBank application. This is a very simple
J2EE application we created to help illustrate the use of the tools, concepts and
principles we describe throughout the book.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely
technical information based on realistic scenarios. Specific recommendations are
provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks