SC34-7237-01
Note
Before using this information and the product it supports, read the information in “Notices” on page 661.
This edition applies to version 7, release 5, modification 1 of WebSphere Enterprise Service Bus (product number
5724-I82) and to all subsequent releases and modifications until otherwise indicated in new editions.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright IBM Corporation 2007, 2011.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
PDF books and the information center
PDF books are provided as a convenience for printing and offline reading. For the
latest information, see the online information center.
As a set, the PDF books contain the same content as the information center.
The PDF documentation is available within a quarter after a major release of the
information center, such as Version 6.0 or Version 6.1.
The PDF documentation is updated less frequently than the information center, but
more frequently than the Redbooks®. In general, PDF books are updated when
enough changes have accumulated for the book.
Links to topics outside a PDF book go to the information center on the Web. Links
to targets outside a PDF book are marked by icons that indicate whether the target
is a PDF book or a Web page.
Table 1. Icons that prefix links to topics outside this book
Icon Description
A link to a Web page, including a page in the information center.
Links to the information center go through an indirection routing service, so that they continue to work
even if the target topic is moved to a new location.
If you want to find a linked page in a local information center, you can search for the link title.
Alternatively, you can search for the topic id. If the search results in several topics for different product
variants, you can use the search result Group by controls to identify the topic instance that you want to
view. For example:
1. Copy the link URL (for example, right-click the link and then select Copy link location):
http://www14.software.ibm.com/webapp/wsbroker/redirect?version=wbpm620&product=wesb-dist
&topic=tins_apply_service
2. Copy the topic id after &topic=. For example: tins_apply_service
3. In the search field of your local information center, paste the topic id. If you have the documentation
feature installed locally, the search result will list the topic. For example:
1 result(s) found for
Installing fix packs and refresh packs with the Update Installer
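As a sketch of step 2, the topic id can also be pulled out of a redirect URL programmatically. The class and method names below are illustrative, not part of the product:

```java
public class TopicId {
    // Pull the value of the "topic" query parameter out of an
    // information-center redirect URL.
    static String extractTopicId(String url) {
        int q = url.indexOf('?');
        if (q < 0) {
            return null;
        }
        for (String param : url.substring(q + 1).split("&")) {
            String[] kv = param.split("=", 2);
            if (kv.length == 2 && kv[0].equals("topic")) {
                return kv[1];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String url = "http://www14.software.ibm.com/webapp/wsbroker/redirect"
                + "?version=wbpm620&product=wesb-dist&topic=tins_apply_service";
        System.out.println(extractTopicId(url)); // prints tins_apply_service
    }
}
```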
Contents

Chapter 1. Accessibility features for WebSphere ESB

Chapter 2. Programming information
    Generated API and SPI documentation
    Java to XML conversion
    Support for weak types
    XMS programming reference

Chapter 3. Mediation primitives
    Service message objects
        SMO structure
        SMO headers
        SMO context
        SMO body
        SMO attachments
        XML representation of SMO
    Promoted and dynamic properties
        Promoted properties table
        Dynamic properties and mediation policies
    Mediation subflows
    List of mediation primitives
        Business Object Map mediation primitive
        Custom Mediation primitive
        Data Handler mediation primitive
        Database Lookup mediation primitive
        Endpoint Lookup mediation primitive
        Event Emitter mediation primitive
        Fail mediation primitive
        Fan In mediation primitive
        Fan Out mediation primitive
        Flow Order mediation primitive
        Gateway Endpoint Lookup mediation primitive
        HTTP Header Setter mediation primitive
        JMS Header Setter mediation primitive
        Message Element Setter mediation primitive
        Message Filter mediation primitive
        Message Logger mediation primitive
        Message Validator mediation primitive
        MQ Header Setter mediation primitive
        Policy Resolution mediation primitive
        Service Invoke mediation primitive
        Set Message Type mediation primitive
        SLA Check mediation primitive
        SLA Endpoint Lookup mediation primitive
        SOAP Header Setter mediation primitive
        Stop mediation primitive
        Subflow mediation primitive

Chapter 4. Import and export bindings
    Messaging bindings
    Dynamic invocation
        Dynamic invocation by overriding a static endpoint
        Dynamic invocation with a target import
        Pure dynamic invocation
        Processing dynamic invocation details
    Overriding the JMSReplyTo destination in a JMS import
    JMS data bindings
        Summary of how to create a custom JMS data binding
        Example custom binding for a JMS MapMessage
    The WebSphere Transformation Extender data handler
        WebSphere Transformation Extender maps and the data handler
        Setting the data binding descriptor

Chapter 5. Commands and scripts
    Syntax diagram conventions
    Administrative console actions with command assistance
    Profile commands in a multi-profile environment
    Command-line utilities
        BPMCreateDatabaseUpgradeUtilities command-line utility
        BPMCreateRemoteMigrationUtilities command-line utility
        BPMCreateTargetProfile command-line utility
        BPMGenerateUpgradeSchemaScripts command-line utility
        BPMMigrate command-line utility
        BPMMigrateCluster command-line utility
        BPMMigrateProfile command-line utility
        BPMMigrationStatus command-line utility
        BPMQueryDeploymentConfiguration command-line utility
        BPMSnapshotSourceProfile command-line utility
        esAdmin command-line utility
        eventbucket command-line utility
        eventpurge command-line utility
        genMapper command-line utility
        installver_bpm command-line utility
        manageprofiles command-line utility
        migrateBSpaceData command-line utility
        serviceDeploy command-line utility
viii IBM WebSphere ESB: Reference
Figures

1. Overview of SMO structure
2. Mediation flow path split
3. Aggregating data using Fan Out, XSLT, Service Invoke and Fan In
4. Aggregating data using Fan Out, XSLT, Service Invoke and Fan In
5. Overview of a proxy gateway request
6. Mutually exclusive gate conditions
7. Distributing module properties to avoid conflicts
8. Example registry conditions
9. Message propagation in default mode
10. Message propagation in Message Enrichment mode
11. SMO propagation in default mode
12. SMO propagation in Message Enrichment mode with XPath only configured
13. SMO propagation in Message Enrichment mode with XPath, and request and response header propagation configured
14. SMO propagation in Message Enrichment mode with XPath and request header propagation configured
15. SMO propagation in Message Enrichment mode with XPath and response header propagation configured
16. The Service Invoke mediation primitive acting as a proxy to an external service
17. The Service Invoke mediation primitive retrying alternate services
18. The Service Invoke mediation primitive retrying an alternate service
19. The Service Invoke mediation primitive augmenting an input message
20. The Service Invoke mediation primitive aggregating service responses
21. Using the Service Invoke mediation primitive for parallel processing
22. How Service Invoke terminals map to callout terminals
23. Dynamic invocation by overriding a static endpoint
24. Dynamic invocation by overriding a static endpoint using SMO
25. Illustration of endpoint override by dynamic invocation, with wired import
26. Illustration of endpoint override by dynamic invocation, with unwired import
27. Illustration of endpoint override by dynamic invocation, with wired import
28. Illustration of endpoint override by dynamic invocation, with unwired import
29. Illustration of endpoint override by dynamic invocation, with wired import
30. Illustration of endpoint override by dynamic invocation, with unwired import
31. Illustration of endpoint override by dynamic invocation, with wired import
32. Illustration of endpoint override by dynamic invocation, with unwired import
33. Illustration of endpoint override by dynamic invocation, with wired import
34. Illustration of endpoint override by dynamic invocation, with unwired import
35. Illustration of endpoint override by dynamic invocation using SMO, with wired import
36. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
37. Illustration of endpoint override by dynamic invocation using SMO, with wired import
38. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
39. Illustration of endpoint override by dynamic invocation using SMO, with wired import
40. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
41. Illustration of endpoint override by dynamic invocation using SMO, with wired import
42. Illustration of endpoint override by dynamic invocation using SMO, with unwired import
43. Illustration of endpoint override by dynamic invocation using SMO, with unwired import
44. Illustration of endpoint override by dynamic invocation using SMO, with wired import
45. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
46. Illustration of response redirection using SMO
47. Dynamic invocation with a target import
48. An export configured to use the WTX data handler
49. An import configured to use the WTX data handler
50. Example showing one mediation module interacting with another mediation module
51. A guided activity
Accessibility features
WebSphere® ESB includes the following major accessibility features:
• Keyboard-only operation, except in Business Space powered by WebSphere.
• Interfaces that are commonly used by screen readers.
Operating system features that support accessibility are available when you are
using WebSphere ESB.
Keyboard navigation
(For information about supported Web browsers, see the WebSphere Enterprise
Service Bus System Requirements at http://www.ibm.com/software/integration/
wsesb/sysreqs/.)
Interface information
• Installation
  You can install WebSphere ESB in either graphical or silent form. The silent
  installation program is recommended for users with accessibility needs.
  For instructions, see Installing the product silently.
• Administration
  The administrative console is the primary interface for interacting with the
  product. This console is displayed within a standard Web browser. By using an
  accessible Web browser, such as Microsoft Internet Explorer, administrators can:
  – Use screen-reader software and a digital speech synthesizer to hear what is
    displayed on the screen
  – Use voice recognition software, such as IBM ViaVoice®, to enter data and to
    navigate the user interface
  – Operate features by using the keyboard instead of the mouse
You can configure and administer product features by using standard text editors
and scripted or command-line interfaces instead of the graphical interfaces that are
provided.
This product includes certain third-party software not covered under the IBM
license agreement. IBM makes no representation about the status of these products
regarding Section 508 of the U.S. Federal Rehabilitation Act. Contact the vendor for
information about the Section 508 status of its products. You can request a U.S.
Section 508 Voluntary Product Accessibility Template (VPAT) on the IBM Product
accessibility information Web page at www.ibm.com/able/product_accessibility.
See the IBM Accessibility Center for more information about the commitment that
IBM has to accessibility.
The generated API and SPI documentation for Process Server and WebSphere
Enterprise Service Bus is provided in the WebSphere Business Process
Management information center, under Reference > Generated API and SPI
documentation, at http://www14.software.ibm.com/webapp/wsbroker/
redirect?version=wbpm620&product=wesb-dist&topic=welc_ref_javadoc.
The table of contents for the API and SPI documentation presents a package list
that is organized by package name, such as com.ibm.websphere.sca.
Alternatively, you can use the search to find a package or class by its name, such
as com.ibm.websphere.sca or ServiceManager.
If you want to use the WebSphere MQ classes to create custom MQ data bindings,
see the WebSphere Integration Developer topic Example of custom MQ data bindings.
Details of the JMS and MQ data bindings are described in the WebSphere Integration
Developer topic JMS, MQ JMS and generic JMS data bindings.
If you want to create a user-defined JMS data binding, see the WebSphere
Integration Developer topic JMS data bindings.
If you want to use WSRR classes, see the API Reference in the WebSphere Service
Registry and Repository V6.2 Information Center.
For reference information about the supported APIs for WebSphere Application
Server, see the Reference section of the WebSphere Application Server information
center. For example, Reference > APIs - Application Programming Interfaces.
Table 2 shows how the system converts between Java types and XML. The system
places the XML in the types section of a Web Services Description Language
(WSDL) document.
Table 2. Conversion between Java and XML

char or java.lang.Character:
    <ati:schema
        targetNamespace="http://xml.apache.org/xml-soap"
        xmlns="http://www.w3.org/2001/XMLSchema"
        xmlns:ati="http://www.w3.org/2001/XMLSchema">
      <ati:simpleType name="char">
        <ati:restriction base="xsd:string">
          <ati:length value="1"/>
        </ati:restriction>
      </ati:simpleType>
    </ati:schema>

java.util.Map or java.util.HashMap (targetNamespace="http://xml.apache.org/xml-soap"):
    <complexType name="Item">
      <all>
        <element name="key" type="xsd:anyType"/>
        <element name="value" type="xsd:anyType"/>
      </all>
    </complexType>
    <complexType name="Map">
      <sequence>
        <element maxOccurs="unbounded" minOccurs="0"
                 name="item" type="tns2:Item"/>
      </sequence>
    </complexType>

java.util.Vector (targetNamespace="http://xml.apache.org/xml-soap"):
    <complexType name="Vector">
      <sequence>
        <element maxOccurs="unbounded" minOccurs="0"
                 name="item" type="xsd:anyType"/>
      </sequence>
    </complexType>

java.util types:
    xsd:anyType
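The Map entry above can be made concrete with a small sketch that renders a Java map in the item/key/value shape described by the generated Item and Map complex types. The class name and the <map> wrapper element are illustrative; the runtime performs this conversion itself:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapToXml {
    // Render a Java map in the item/key/value shape described by the
    // generated Item and Map complex types. The <map> wrapper element
    // and the class name are illustrative assumptions, not product API.
    static String toXml(Map<String, ?> map) {
        StringBuilder sb = new StringBuilder("<map>");
        for (Map.Entry<String, ?> e : map.entrySet()) {
            sb.append("<item><key>").append(e.getKey())
              .append("</key><value>").append(e.getValue())
              .append("</value></item>");
        }
        return sb.append("</map>").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("id", 42);
        // prints <map><item><key>id</key><value>42</value></item></map>
        System.out.println(toXml(m));
    }
}
```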
Table 3 shows how business object mapping supports weak types.
XML mapper
Introduction
Mediation primitives are the building blocks of mediation flows. You create
mediation flows in mediation flow components, and mediation flow components
can exist in either business modules or mediation modules.
Mediation flows operate on messages that are in-flight between service requesters
(service consumers) and service providers, and each mediation primitive lets you
do different things with a message. For example, you can route a message or
change its content. Mediation primitives process messages as service message objects
(SMOs) because SMOs allow different types of messages to be processed in a
common way.
All mediation primitives have an input terminal (called in) that can be wired to
accept a message. Most mediation primitives have one fail terminal (called fail)
and one or more output terminals. However, the Stop and Fail mediation
primitives have no fail terminal and no output terminals.
If an exception occurs during the processing of the input message, the fail terminal
propagates the original message, together with any exception information. If an
exception occurs before the input message is mediated (for example, while the
properties of a mediation primitive are being set), an IllegalArgumentException is
thrown and the original message is not propagated.
Mediation flows
A service requester uses a specific interface to invoke a mediation and, in its role
as an intermediary, the mediation uses another interface to invoke the service
provider. The interface used by the service requester is referred to as the source
interface, and the interface provided by the service provider is referred to as the
target interface. Depending on the service requester and the service provider, the
source and target interfaces might be the same or different.
For each source operation, the mediation can contain a request flow. For each
operation that can have a response, the mediation can also have a response flow.
Each operation can also have an error flow.
A request flow processes a request message from a service client. The flow begins
with a single input node for the source operation, followed by one or more
mediation primitives wired together, in sequence. Each target operation has a
callout node, and the request flow can be wired to the callout node to call a
particular operation. If a message is to be returned to the source directly after
processing, the request flow can be wired to an input response node in the request
flow. If fault messages are defined in the source operation, an input fault node is
also created in the request flow.
A response flow processes responses returned from the service provider. The flow
begins with a callout response node for each target operation, followed by one or
more mediation primitives wired together, in sequence. The response flow contains
a single input response node representing the source operation. Wiring the
response flow to the input response node causes a response to be sent to the
service invoker. If fault messages are defined in the target operation, a callout fault
node is also created in the response flow. The callout fault node allows fault
messages that are returned by the target operation to be processed. Errors that are
returned by the operation but are not defined as fault messages are propagated to
the fail terminal of the callout response node.
An error flow processes messages that are propagated to the fail terminal of a
mediation primitive in the request or response flow, or to the fail terminal of a
callout response node, when that terminal is not wired to another primitive or node. The flow
begins with a single input node, followed by one or more mediation primitives
wired together, in sequence. The error flow for a request-response operation
contains a single input response node and, if fault messages are defined in the
operation, the error flow contains an input fault node. An error flow can be used
to complete actions when an unexpected error occurs in the mediation flow; for
example, logging the message using the Message Logger mediation primitive. An
error flow can also be used to return a response message or modeled fault, rather
than the unmodeled fault that would be returned if the error flow were not
implemented. For more information on error handling, see the Error handling in
the mediation flow topic in the IBM Integration Designer Information Center.
Promoted properties
Mediation primitives have properties that can be used to customize their behavior.
Some of these properties can be made visible to the runtime administrator by
promoting them. Certain properties lend themselves to being administratively
configured. Other properties are not suitable for administrative configuration,
typically because modifying them affects the mediation flow in such a way that
you need to redeploy the mediation module. IBM Integration Designer lists the
properties that you can choose to promote under the promoted properties of a
mediation primitive.
Note: The callout node also has properties that can be promoted.
Promoted properties are given an alias name, and you can set the alias name so
that it is meaningful in the context of a particular mediation. The alias name is the
property name that is displayed on the runtime administrative console; multiple
promoted properties can be given the same alias name if they are of the same type.
Therefore, what appears as a single property in the administrative console can set
the same value in multiple mediation primitives. You can set the value of a
promoted property from IBM Integration Designer and from the runtime
administrative console.
Property groups
Dynamic properties
Any property that you promote is also a dynamic property, so long as the property
is in the top-level request, response, or fault flow. If your mediation flow contains
a Policy Resolution mediation primitive, then you can override a dynamic property
(in another mediation primitive) using a mediation policy file. Although you can
override promoted properties dynamically, you must always specify a valid default
value in either the mediation primitive or the callout node.
Any mediation primitive that promotes a property is allowing the property value
to be dynamically set, and the mediation primitive is said to be dynamically
configurable.
Mediation policies
If you want to use mediation policies to control your mediation flows, you must
include the Policy Resolution mediation primitive in your mediation flow
component. If you want to associate mediation policies with a target service, rather
than a module, you should add an Endpoint Lookup primitive before the Policy
Resolution primitive.
After exporting the EAR file containing your Policy Resolution primitive, you must
import it into WebSphere Service Registry and Repository (WSRR). This adds your
module, and default mediation policies, to the registry. If you want to associate
mediation policies with a target service, you must also load the WSDL for the
target service into WSRR. After loading your documents into WSRR, you must
attach a suitable mediation policy to either your module or to your target service,
or both.
At run time, the Policy Resolution mediation primitive queries your registry, and
uses any suitable mediation policy information to override dynamic properties that
come later in the flow.
XPath
Many mediation primitives have a property called Root that contains an XPath 1.0
expression. You can use this XPath expression to specify a subset of the message
for the mediation primitive to operate on. Depending on the mediation primitive,
you can specify: /, /body, /headers, or your own XPath expression. / refers to the
complete SMO, /body refers to the body section of the SMO, and /headers refers to
the headers of the SMO. If you specify your own XPath expression, the part of the
SMO you specify is processed.
IBM Integration Designer displays the structure of a message and allows you to
select locations within the message. In this way you can navigate the structure of a
message and create XPath expressions.
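The effect of the Root property can be sketched with standard Java XPath processing. The SMO stand-in below is an illustrative assumption; the real SMO schema is generated by IBM Integration Designer:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class RootProperty {
    // Evaluate an XPath against a tiny stand-in for an SMO and return the
    // name of the selected node. Element names here are illustrative only.
    static String select(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Node node = (Node) XPathFactory.newInstance().newXPath()
                .evaluate(xpath, doc, XPathConstants.NODE);
        return node == null ? null : node.getNodeName();
    }

    public static void main(String[] args) throws Exception {
        String smo = "<smo><headers><SMOHeader/></headers>"
                   + "<body><order/></body></smo>";
        // "/smo/body" plays the part of a Root value of /body: it narrows
        // processing to the message payload.
        System.out.println(select(smo, "/smo/body")); // prints body
    }
}
```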
You can route messages to an endpoint that is decided at run time. The endpoint
that the run time uses is located in the SMO header at /headers/SMOHeader/
Target/address. You can set this part of the SMO manually, using various
mediation primitives, but the Endpoint Lookup mediation primitive can set this
location automatically.
In order for the run time to implement dynamic routing on a request, you must set
the Use dynamic endpoint if set in the message header property in the callout
node or Service Invoke mediation primitive.
You can specify a default endpoint that the run time uses if it cannot find a
dynamic endpoint. You specify a default endpoint by wiring an import to a
reference.
Dynamic endpoint support does not require you to wire a reference to an import.
However, if you want to provide default configuration settings for the dynamic
endpoint, you can use a wired import. After a reference is wired to an import, the
configuration settings of the import apply to all dynamic endpoints using that
reference.
Attachments
You can receive and send SOAP messages that have attachments of various sorts,
such as images. You might want to receive SOAP messages with attachments and
let the attachments pass through unchanged, or you might want to create new
attachments, perhaps from information in the message.
Exceptions
Mediation primitives process messages as SMOs. SMOs are enhanced Service Data
Objects (SDOs), and the SMO model is a pattern for using SDO DataObjects to
represent messages. The SMO contains a representation of the following groups of
data:
• Header information associated with the message. For example, Java Message
  Service (JMS) headers if a message has been conveyed using the JMS API, or
  MQ headers if the message has come from WebSphere MQ.
• The body of the message: the message payload. The message payload is the
  application data exchanged between service endpoints.
• Message attachments.
• Context information (data other than the message payload).
SMO content
All SMOs have the same basic structure. The structure consists of a root data object
called a ServiceMessageObject, which contains other data objects representing the
header, body, attachments, and context data. The precise structure of the headers,
body, and context depends on how you define the mediation flow during integration
development. The mediation flow is used at runtime to mediate between services.
The SMO headers contain information that originates from a specific export or
import binding (a binding specifies the message format and protocol details).
Messages can come from a number of sources, so the SMO has to be able to carry
different kinds of message header. The kinds of message headers handled are:
• Web services message headers.
• Service Component Architecture (SCA) message headers.
• Java Message Service (JMS) message headers.
• WebSphere MQ message headers.
• WebSphere Adapters message headers.
Typically, the structure of the SMO body, which holds the application data, is
determined by the Web Services Description Language (WSDL) message that you
specify when you configure a mediation flow.
SMO context objects are either user-defined or system-defined. You can use
user-defined context objects to store a property that mediation primitives can use
later in the flow. You define the structure of a user-defined context object in a
business object, and use the business object in the input node of the request flow.
The correlation context, transient context and shared context are user-defined
context objects.
Figure 1. Overview of SMO structure. The context, headers, body and attachments of a ServiceMessageObject

[The figure shows a tree rooted at ServiceMessageObject with four branches: context (optional transient, shared, failInfo, primitiveContext, dynamicProperty and userContext children), headers (a mandatory SMOHeader plus optional JMSHeader, SOAPHeader, SOAPFaultInfo, properties, MQHeader, HTTPHeader and EISHeader children), body, and attachments (each attachment carrying contentID, contentType, data and bodyPath).]
The SMO provides an interface to access and modify message headers, message
payloads, message attachments, and message context.
The runtime operates on messages that are in flight between interaction endpoints.
The runtime creates SMO objects, which a mediation flow uses to process a
message.
When you create mediation flows, IBM Integration Designer specifies the type of
message body for each terminal (input, output or fail) and, optionally, the type of
context information. The runtime uses this information to convert messages into
SMO objects of the specified type.
SMO structure
The service message object (SMO) structure starts with a root data object called
ServiceMessageObject, which contains other data objects representing header, body,
attachments, and context data.
Introduction
A schema declaration specifies the overall structure of the SMO. The schema is
generated by IBM Integration Designer tools. The following schema declaration
shows you the SMO elements that can be generated by IBM Integration Designer.
Introduction
The HTTP schema specifies the overall structure of the HTTP header.
An SMO representing a request message has a body that corresponds with the
operation input, and an SMO representing a response message has a body that
corresponds with the operation output. The SMO body has child elements that
correspond with the parts belonging to the WSDL message. The name and type of
each element that becomes a child of the body element are determined as follows:
• The WSDL message has a single message part that is defined by an element:
  The SMO body has a single child element with the same name as the part
  element, and the same type as the part element. This applies to
  WSDL operations that follow the document literal wrapped style.
The SMO body for the following WSDL message example has one child element
named operation1, with a type the same as the type of the operation1 element
that is declared in or referred to from the WSDL:
<wsdl:message name="operation1RequestMsg">
<wsdl:part element="tns:operation1" name="operation1Parameters"/>
</wsdl:message>
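As a sketch, the body of the SMO corresponding to the WSDL message above would carry a single operation1 child; its content depends on how the tns:operation1 element is declared, which is not shown here:

```xml
<body>
  <operation1>
    <!-- content as declared by the tns:operation1 element -->
  </operation1>
</body>
```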
SMO headers
The service message object (SMO) headers carry header information for different
types of messages.
The SMO headers contain information that originates from a specific import or
export binding (a binding specifies the message format and protocol details).
Messages can come from a number of sources, so the SMO has to be able to carry
different kinds of message header. The kinds of message headers handled include:
• SOAP.
• HTTP.
• Java Message Service (JMS).
• WebSphere MQ.
Introduction
XPath examples
The following XPath accesses a custom JMS property in the SMO header:
/headers/properties[name=’MyProperty’]
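The predicate form of this XPath can be sketched with standard Java XPath processing against an illustrative stand-in for the SMO (the element layout below is a simplification, not the generated schema):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class JmsProperty {
    // Look up a named property entry, in the style of the documented
    // XPath /headers/properties[name='MyProperty']. The XML layout is
    // an illustrative assumption.
    static String valueOf(String xml, String name) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Node value = (Node) XPathFactory.newInstance().newXPath().evaluate(
                "/smo/headers/properties[name='" + name + "']/value",
                doc, XPathConstants.NODE);
        return value == null ? null : value.getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String smo = "<smo><headers>"
                   + "<properties><name>MyProperty</name><value>42</value></properties>"
                   + "</headers></smo>";
        System.out.println(valueOf(smo, "MyProperty")); // prints 42
    }
}
```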
Messages received from a Web services import or export might include SOAP
headers from the original SOAP message. The SOAP headers are placed in the
/headers/SOAPHeader element in the headers section of the service message object
(SMO).
A SOAPHeader element is a wrapper that contains the original SOAP header as its
value element. The element can have zero or more occurrences. The XML
namespace-qualified name of the SOAP header type appears in the name and
nameSpace elements of the SOAPHeader.
In a Custom Mediation primitive, you can use the SDO API to access SOAP header
content.
• To access all SOAP headers present in the SMO:
  import commonj.sdo.DataObject;
  import java.util.List;

  // smo is the input message DataObject; getList is part of the SDO
  // DataObject API (the variable name smo is illustrative).
  List soapHeaders = smo.getList("headers/SOAPHeader");
SOAP headers appear in the SMO in a request or response flow if all the following
conditions are true:
• The message has been received from a Web service import or export.
• The import or export has been configured to propagate protocol headers.
• The sending application included SOAP headers when creating the original
request.
Introduction
The md element contains all the fields from the MQMD definition (see the
WebSphere MQ documentation), except for certain control fields that carry no
useful data (such as StrucId and Version) and message format fields (Encoding,
CodedCharSetId and Format).
The control element carries the Encoding, CodedCharSetId and Format fields, which
describe the message body. If the WebSphere MQ message contains any message
headers (for example, MQRFH2), the Encoding, CodedCharSetId and Format fields
that describe the header are carried in the header element.
The header element contains Encoding, CodedCharSetId and Format fields, which
describe the header. The Format field, in particular, must be set correctly; for
example, MQHRF2 for an MQRFH2 header. In addition, the CodedCharSetId and
Encoding fields are important for opaque data. When rendered as a WebSphere MQ
message, this format information is written into the previous MQ header (or into
the MQMD if there is no previous header).
Precisely one of these four subelements must be set: it is an error to have more
than one of these set in any header element. The value subelement stores the
structure used by the user-supplied header data binding; the other three elements
(rfh, rfh2 and opaque) are described in the following sections.
If you use a mediation module to invoke an import with a native MQ binding, and
you create the MQ header by setting the MQ header fields in the SMO, you must
ensure that the format field of the header is set. Otherwise, a
NullPointerException is thrown at runtime. You can set the format field of the MQ
header by using one of the supplied mediation primitives; for example, the XSLT
primitive or the Message Element Setter primitive. The IBM WebSphere MQ
Information Center documents the values supported for the format field. For
example, MQHRF2 for an MQRFH2 header.
RFH headers
RFH2 headers
A WebSphere MQ RFH2 header contains zero or more named folders, each of which
contains a sequence of properties and groups. A property has a name, optional
type and value (all represented as string). A group has a name and itself contains
a sequence of properties and groups. The SMO representation of an RFH2 header
also contains a NameValueCCSID element, which determines the CCSID used to
encode the folders in the WebSphere MQ message.
MQCIH headers
The WebSphere MQ header fields are defined using the same set of types used by
WebSphere MQ itself. MQLONG fields are represented as int; MQBYTEnn fields as
hexBinary data limited to nn in length; and MQCHARnn fields as string data limited
to nn characters in length.
Introduction
You can access and update the content in the WebSphere MQ header structures
using mediation primitives. Most mediation primitives let you navigate messages
using XPath expressions. If you do this, you must make sure that the XPath
identifies the particular header that the primitive is interested in. If you implement
your own custom mediation primitive, you can access message information from
Java code.
XPath examples
The following XPath accesses a property in the <usr> folder of an RFH2 header:
/headers/MQHeader/header/rfh2/folder[name="usr"]/property[name="prop"]/value
The following XPath identifies the header by index rather than by format:
/headers/MQHeader/header[2]/value/mydata
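Both selection styles can be tried with the JDK XPath engine. The fragment below is a hypothetical, simplified rendering of the MQHeader section (namespaces and most fields omitted):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class MqHeaderXPath {
    // Hypothetical, simplified MQHeader fragment: an RFH2 header
    // followed by a second, user-defined header
    static final String XML = "<MQHeader>"
            + "<header><rfh2><folder><name>usr</name>"
            + "<property><name>prop</name><value>abc</value></property>"
            + "</folder></rfh2></header>"
            + "<header><value><mydata>xyz</mydata></value></header>"
            + "</MQHeader>";

    static String eval(String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(XML.getBytes("UTF-8")));
        return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
    }

    public static void main(String[] args) throws Exception {
        // Select by folder and property name
        System.out.println(eval(
            "/MQHeader/header/rfh2/folder[name='usr']/property[name='prop']/value"));
        // Select the second header by position
        System.out.println(eval("/MQHeader/header[2]/value/mydata"));
    }
}
```

Positional predicates such as header[2] are useful when several headers share the same structure and a name-based predicate cannot distinguish them.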
SMO context
The service message object (SMO) context lets mediation primitives pass data that
is not part of the message payload, between themselves.
SMO context objects are either user-defined or system-defined. You define the
structure of user-defined context objects, and IBM Integration Designer defines the
structure of system-defined context objects.
You can get or set the properties of user-defined context objects, from most
mediation primitives. For example, you could:
v Set a property value using a Database Lookup mediation primitive.
v Map between a context object and the message body, using an XSL
Transformation mediation primitive.
v Create a Custom Mediation primitive to get or set a property value.
You can use an XPath 1.0 expression to access the property of a user-defined
context object, from most mediation primitives. For example, to access the transient
property oldAddress, use this XPath expression:
/context/transient/oldAddress
Correlation context
The correlation context is used when mediation primitives want to pass values
from the request flow to the response flow. You can use the correlation context to
link a specific request message with its response.
Transient context
The transient context is used for passing values between mediation primitives in
the current flow: either the request flow or the response flow. The transient context
cannot link requests and responses.
Shared context
The shared context is a storage area you can use if you want to aggregate data:
there is only one shared context per thread, per flow. Generally, there is one thread
for the request flow and one thread for the response flow. Therefore, the request
flow and the response flow each have their own shared context.
If you use a Service Invoke mediation primitive outside of a Fan Out and Fan In
aggregation sequence, and you use an invocation style of asynchronous with
callback, then the shared context is empty after the service invocation.
After you have defined the shared context, you can use it to store data during
aggregation operations. You need to design the shared context business object
carefully, to make sure that it is suitable for all aggregation scenarios within a
specific flow. The content of the shared context business object does not persist
across a request and response flow through callout invocation: the shared context
content is only available within the scope of a single request or response flow.
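The thread scoping described above can be pictured with a plain Java ThreadLocal. This is an analogy only, not the product implementation: the request flow's thread sees its own map, while a different thread (as the response flow generally is) starts empty:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedContextAnalogy {
    // Analogy only: one "shared context" per thread, mirroring the rule of
    // one shared context per thread, per flow
    static final ThreadLocal<Map<String, Object>> SHARED =
            ThreadLocal.withInitial(HashMap::new);

    public static void main(String[] args) throws Exception {
        // The request flow's thread stores aggregation data
        SHARED.get().put("responses", 3);

        // The response flow runs on a different thread, so it sees an empty context
        Thread responseFlow = new Thread(() ->
                System.out.println(SHARED.get().containsKey("responses"))); // false
        responseFlow.start();
        responseFlow.join();

        System.out.println(SHARED.get().get("responses")); // 3
    }
}
```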
In summary:
v The shared context is a thread-based storage area. Generally, one request flow
has one thread, even if the path splits. Therefore, generally, one request flow
shares the same shared context.
v The content of the shared context business object does not persist across a
request and response flow, through callout invocation. Whatever data is in the
shared context of the request flow cannot be reused during the response flow.
v The shared context can be used to aggregate data when using the Fan Out and
Fan In mediation primitives: it is not intended for general data storage during a
flow. The correlation context and transient context are available for general data
storage.
v After a Service Invoke call, the shared context is empty under the following
conditions:
– The Service Invoke mediation primitive is configured for an asynchronous call
with callback.
– The Service Invoke mediation primitive is used outside of a Fan Out and Fan
In aggregation sequence.
userContext
Generally, you might use a userContext object if you had a module that contained
different types of SCA components, and you wanted to pass data between the
components.
For example, you could populate a userContext from a Java component, using the
context service API. You could then access the userContext data from a mediation
flow component, using the SMO.
You can control whether a userContext object is created by using a qualifier on the
export.
There can be multiple copies of an SMO within one mediation flow. Each instance
of an SMO has its own correlation context and transient context. Therefore, if there
are multiple copies of an SMO within one flow, there are multiple versions of the
correlation context and transient context.
However, there is only one shared context per thread, per flow. Therefore, you can
have multiple instances of an SMO, but they can all use the same shared context.
For example, a request flow path might split and rejoin, but the different paths
would all have access to the same shared context.
Generally, if a mediation flow splits, the different paths are all done under the
same thread and all have the same shared context. However, each path has its own
copy of the SMO, and each SMO copy has its own version of the correlation
context and transient context. For example, the following diagram shows the Fan
Out mediation primitive splitting the mediation flow path, and the Fan In
mediation primitive bringing the path together again.
SMO body
The body of a service message object (SMO) is defined by reference to a Web
Services Description Language (WSDL) message.
For each part defined by the WSDL message, there is one element under the SMO
body. The contents of each SMO element follow the structure of the corresponding
WSDL part definition. The element names depend upon the kind of WSDL message
that the SMO is defined from.
The document literal wrapped style of WSDL produces SMO bodies that contain a
single element. This WSDL style is commonly used by Web service designers, and
is generated by the interface editor of IBM Integration Designer.
If the WSDL message has a single part, typed in the WSDL definition by a global
element, then the SMO body contains a single element. The single element is
named after the WSDL global element.
Note: The single element is not qualified by a namespace. The namespace of the
global element is ignored.
This example WSDL could produce an SMO body similar to the following.
<body xsi:type="tns:operation1RequestMsg">
<operation1>
<name>Bob</name>
<age>35</age>
</operation1>
</body>
If the WSDL message has many parts, then the name of each element in the SMO
body is the same as the name of the corresponding WSDL part. If the WSDL
message has a single part described by an XSD type, then the element in the SMO
body is also named after the corresponding WSDL part.
<wsdl:message name="operation2RequestMsg">
<wsdl:part name="name" type="xsd:string"/>
<wsdl:part name="age" type="xsd:int"/>
</wsdl:message>
<wsdl:message name="operation2ResponseMsg">
<wsdl:part name="surname" type="xsd:string"/>
<wsdl:part name="height" type="xsd:float"/>
</wsdl:message>
<wsdl:portType name="NonDLWInterface">
<wsdl:operation name="operation2">
<wsdl:input message="tns:operation2RequestMsg" name="operation2Request"/>
<wsdl:output message="tns:operation2ResponseMsg" name="operation2Response"/>
</wsdl:operation>
</wsdl:portType>
</wsdl:definitions>
This example WSDL could produce an SMO body similar to the following.
<body xsi:type="tns:operation2RequestMsg">
<name>Bob</name>
<age>35</age>
</body>
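A quick way to see the part-to-element mapping is to list the child elements of the two example bodies with the JDK DOM parser; the fragments below omit the xsi:type attribute for brevity:

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class BodyPartNames {
    // List the names of the child elements of an SMO body fragment
    static List<String> childNames(String xml) throws Exception {
        Element body = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
        List<String> names = new ArrayList<>();
        for (Node n = body.getFirstChild(); n != null; n = n.getNextSibling()) {
            if (n.getNodeType() == Node.ELEMENT_NODE) {
                names.add(n.getNodeName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Multi-part (non-wrapped) message: one child element per WSDL part
        System.out.println(childNames(
                "<body><name>Bob</name><age>35</age></body>")); // [name, age]
        // Document literal wrapped message: a single child named after the element
        System.out.println(childNames(
                "<body><operation1><name>Bob</name><age>35</age></operation1></body>"));
    }
}
```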
If you create an interface using the interface editor of IBM Integration Designer,
then the resulting WSDL will be of the document literal wrapped style. However,
you can view and edit WSDL files that are not of the document literal wrapped
style, using the interface editor. In some cases, the interface editor might not
SMO attachments
The service message object (SMO) contains an attachments element for each
attachment associated with a SOAP message.
Introduction
The SMO attachments elements let you send and receive SOAP messages that have
attachments of various types.
You might want to send SOAP messages with attachments and let the attachments
pass through the mediation flow unchanged, or you might want to create new
attachments, perhaps from information in the message or from an external source.
Details
A SOAP/HTTP message with attachments consists of a MIME multipart message
in which the SOAP body is the first part and the attachments are subsequent parts
(as defined in the SOAP Messages with Attachments specification and in the SOAP
Message Transmission Optimization Mechanism (MTOM) specification).
When you are sending SOAP messages with attachments, the root element you
choose determines how attachments are propagated.
v If you use “/body” as the root of the XML map, all attachments are propagated
across the map by default.
v If you use “/” as the root of the map, you can control the propagation of
attachments.
Introduction
The Message Logger and XSLT mediation primitives use an XML serialization of
the SMO.
The Message Logger mediation primitive logs an XML serialization of the SMO.
The XSLT mediation primitive transforms messages using an XSLT 1.0
transformation. The transformation operates on an XML serialization of the SMO.
If you need to understand the data logged by the Message Logger, or write XSL
transformations that operate on an XML serialization, then you should know how
the SMO is represented as XML.
Typically, an SMO has header data, context information, and a body containing the
message payload. The message payload is the application data exchanged between
service endpoints. The header data is of a fixed structure; the structure of the
context data is partially fixed and partially defined by the flow designer. The
structure of the body is defined by reference to a WSDL-defined message.
The Message Logger and XSLT mediation primitives allow the root of the
serialization to be specified, and the root element of the XML document reflects
this root.
Typically, the root element is named after the selected root in the SMO structure
and is in the default namespace. However, if the entire SMO is serialized, by
choosing / as the root in the mediation primitive, then the root element is named
smo and is in the SMO namespace.
Body structure
For each part defined by the WSDL message there is one element underneath the
body. Each element is named as follows:
v If the WSDL message has a single part, typed in the WSDL definition by a
global element, then the body contains a single element named after the global
element. This case includes the document literal wrapped style of WSDL
definition. This style is commonly used by web service designers, and is
generated by the interface editor of IBM Integration Designer.
Note: The element is in the default namespace, and not in the namespace of the
global element.
v If the WSDL message has many parts, or if it has a single part described by an
XSD type, then the name of each element is the same as the corresponding
WSDL part.
Introduction
Any property that you promote from a primitive in the top level request, response,
or fault flow is also a dynamic property. A dynamic property can be overridden, at
run time, using a mediation policy. Although you can override promoted
properties dynamically, you must always specify a valid default value.
Alias names
Promoted properties have an alias name which, for a mediation primitive in the
top level request, response, or fault flow, is the name displayed on the runtime
administrative console.
Generally, you should choose a suitable alias name for your promoted properties
rather than accept the default name: choosing a suitable name helps you identify
properties at run time. For example, suppose that you have a single mediation
flow component containing one Service Invoke mediation primitive in the request
flow, and one Service Invoke mediation primitive in the response flow. If you
accept the default alias names, you cannot distinguish between the promoted
properties of the two Service Invoke primitives in the administrative console.
If you use a mediation policy with your mediation flow, the mediation policy must
refer to a property by its alias name.
Property groups
Promoted properties belong to a group, and are displayed in their group on the
runtime administrative console.
By default, the group name is the mediation flow component name, but you can
override the default group name.
At integration time, you can use property groups to create collections of properties.
For example, suppose you have two mediation flow components in one mediation
module. If each component has a Message Logger primitive, you could promote
both of the Enabled properties to the same group. If the two promoted properties
have the same alias name then, at run time, you could administer the Enabled
properties together.
Alternatively, two promoted properties can have the same alias name but be in
different groups. At run time, you could then set the property values separately.
The following table shows the properties that you can promote. Any promoted
property is also a dynamic property if the property is in the top-level request,
response, or fault flow.
Table 5. The promotable properties of mediation primitives
Mediation primitive Promotable properties
Custom Mediation Value (in User Properties)
Runtime considerations
Introduction
Promoted properties have a group name, an alias name, a type, and value: all of
which you can set from IBM Integration Designer. Multiple promoted properties
can be given the same alias name if they are of the same type. You can see the
promoted properties on the runtime administrative console, and you can set the
property values administratively.
When you use mediation policies, the property group name is used to construct
the mediation policy namespace, and the alias name must map to an assertion
name in the mediation policy. For example, suppose a property has an alias name
of beforeRequestTransform and a group of stockQuoteGroup. The mediation policy
refers to the property with the name beforeRequestTransform, and uses the group
stockQuoteGroup as the namespace of the mediation policy assertion.
SMO context
After mediation policy information has been retrieved from the registry, the run
time stores it in the dynamicProperty context of the service message object (SMO).
The information in the dynamicProperty context can be used to override the values
of promoted properties that come later in the flow.
If you want to override a property dynamically you must take the following steps:
1. From IBM Integration Designer:
a. Promote the property you want to override.
b. Set the elements of the property. Pay particular attention to the group and
the alias name.
c. Add a Policy Resolution mediation primitive to your mediation flow. Add it
before the mediation primitive whose property you want to override.
d. Optional: If you want to retrieve mediation policies for a target service, you
should add an Endpoint Lookup primitive to the mediation flow, before the
Policy Resolution primitive.
e. Export the EAR file containing your Service Component Architecture (SCA)
module.
2. From WSRR:
a. Load the EAR file containing your SCA module. The registry creates an
SCA module document and object, and loads any default mediation
policies.
b. Optional: If you want to attach mediation policies to target services, rather
than to SCA modules, load the WSDL document that represents your target
services. The registry creates objects for any service, port, binding, portType,
operation, and message elements described by the WSDL; you can attach
mediation policies to some of these objects.
Tip: The run time supports mediation policies attached to service, port,
binding, and portType objects. However, the run time does not support
mediation policies attached to operation or message objects.
c. Optional: If the default mediation policies do not meet your requirements,
create a suitable mediation policy.
d. Attach a mediation policy to your SCA module object or to an object
associated with your WSDL.
3. From the administrative console:
a. Import the EAR file containing the SCA module.
b. Ensure that the server is using the correct registry instance.
When available, the mediation policy values take precedence at run time. If a
mediation policy is not found, or is unsuitable, the run time uses the promoted
property values shown on the administrative console.
If the dynamic override fails for a particular mediation primitive, the fail terminal
of that mediation primitive is fired. A dynamic override can fail for a number of
reasons; for example, you might try to override the XSL stylesheet of the XSL
Transformation primitive with a stylesheet that cannot be found at run time.
Introduction
A mediation subflow is created with IBM Integration Designer and consists of a set
of mediation primitives wired together in the same way as for a request or
response flow. The mediation subflow is defined as integration logic inside a
mediation module. The logic encapsulated inside the subflow can be started as
part of any request or response flow inside the module and can be reused in the
module multiple times. Alternatively, a mediation subflow can be defined in a
library and reused in multiple mediation modules that are dependent on that
library.
Details
IBM Integration Designer is used to define a mediation subflow inside the
integration logic of either a mediation module or dependent library. Unlike a
request or response flow, the inputs and outputs of a subflow are not tied to
specific interfaces. Instead, a subflow defines a number of in and out nodes
representing the inputs and outputs of the flow. These nodes might or might not
have message types associated with them. A subflow is then started from a request
or response flow via a Subflow mediation primitive. Multiple Subflow mediation
primitives can be associated with the same subflow. The input and output
terminals of the Subflow mediation primitive correspond to the in and out nodes
inside the subflow. The Subflow mediation primitive also has a fail terminal which
is invoked should an error occur during the execution of the subflow.
A subflow can contain the same mediation primitives as a request or response flow
with the exception of the Policy Resolution mediation primitive which can only be
used inside a top-level request or response flow. In particular, a subflow can
contain Subflow mediation primitives allowing the nesting of one subflow inside
another. Properties on mediation primitives inside a subflow can be promoted in
which case they then appear as promotable properties on the associated Subflow
mediation primitives. If these properties need to be modified at runtime then they
must be promoted up the hierarchy of Subflow mediation primitives until they are
promoted inside the top-level request or response flow, at which point they will
appear as module properties.
Usage
Mediation subflows are used to encapsulate integration logic. The following list
gives some potential reasons for wanting to perform this encapsulation:
v To reduce complexity of the request or response flows when viewed in the
Mediation Flow Editor.
Properties
The names of any references contained inside the subflow are also surfaced on the
Subflow mediation primitive so that they can be mapped to references available
inside the mediation module where the subflow is being used.
Considerations
Correlation, transient and shared context are propagated from the parent flow to a
subflow and back to the parent but there is currently no mechanism to define the
type of a context on a subflow. Consequently, a subflow should use the Set
Message Type mediation primitive to type a context before use. Dynamic context is
not propagated from a parent flow to a subflow as the property names referred to
by the context are those at the module level, not those applicable to the subflow.
Any Service Invoke mediation primitives inside a subflow are required to wait for
a service response when the flow component is invoked asynchronously with
callback.
Introduction
In topics describing mediation primitive properties, the name and XML value of a
property are marked with the following icons:
v This is the XML value that is stored to represent the displayed name
Introduction
You can use the Business Object Map mediation primitive to define message
transformations using a business object map; a business object map is made up of
an ordered set of transformations. For example, you might create a business object
map to process the message body in the following way:
1. Copy data from the first field in the input message, to the second field in the
output message.
2. Copy part of the second field in the input message, to the first field in the
output message.
3. Assign a constant value to the third field in the output message.
The Business Object Map mediation primitive has one input terminal (in), one
output terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. The input
message triggers a transformation and if the transformation is successful, the out
terminal propagates the modified message. If an exception occurs during the
transformation, the fail terminal propagates the original message, together with
any exception information contained in the failInfo element.
Details
When you create a business object map, you specify the message root (an XPath
1.0 expression), which for mediation flows can refer to the following locations in
the service message object (SMO): /, /headers, /context or /body. The message
root specifies the root of the transformations, and applies to both input messages
and output messages. If the message root is /, the transformation applies to the
whole SMO.
A business object map can use values in an input business object to assign values
to an output business object. Business object values are stored in fields: the fields
of the input business object are called the source fields, and the fields of the output
message are called the target fields.
Note: The Business Object Map mediation primitive can only create one business
object map.
Transform types
The business object map editor supports the following transform types, which are
also called mapping types:
You can use custom transforms to provide your own transformation logic. Custom
transforms use Java code. If there is a source field and a target field, you use the
custom transform. If there is no source field, you can use a custom assign
transform, which is similar to an assign transform, except that you use Java code
to decide what value to assign. If there is no target field to set, you can use a
custom callout transform to call Java code. The Java code might initialize values
before other transforms run.
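The three variants can be sketched in plain Java. The helper names and logic here are hypothetical illustrations, not part of the product API:

```java
public class CustomTransformSketch {
    // Custom transform: a source field in, a target field out
    static String toDisplayName(String source) {
        String s = source.trim();
        if (s.isEmpty()) {
            return s;
        }
        return Character.toUpperCase(s.charAt(0)) + s.substring(1).toLowerCase();
    }

    // Custom assign: no source field; the Java code alone decides the value
    static String assignDefaultCountry() {
        return "GB"; // fixed value, for illustration only
    }

    // Custom callout: no target field; used for side effects such as
    // initializing values before other transforms run
    static void initializeCounters(java.util.Map<String, Integer> state) {
        state.put("processed", 0);
    }

    public static void main(String[] args) {
        System.out.println(toDisplayName("  bOB ")); // prints Bob
        System.out.println(assignDefaultCountry());
        java.util.Map<String, Integer> state = new java.util.HashMap<>();
        initializeCounters(state);
        System.out.println(state);
    }
}
```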
Usage
You can use the Business Object Map mediation primitive to do the following:
v Transform an input message type to a different output message type. For
example, if the mediation flow starts with one operation but ends with another
operation, and the second operation has a different argument type.
v Alter the content of a message, without changing the message type.
v Apply your own logic to message transformations, by using the custom
transforms.
v Reuse existing business object maps. You can reuse maps as top-level maps, or
as submaps.
v Apply mappings defined as relationships, to the content of messages. You can
create, and reuse, relationships.
The Business Object Map mediation primitive can be useful if you need to
manipulate data, before or after the Database Lookup mediation primitive is
invoked.
You can transform messages using either the XSLT mediation primitive or the
Business Object Map mediation primitive. The key difference is that the XSLT
primitive performs transformations in XML, using a style sheet, whereas the
Business Object Map primitive performs transformations on business objects, using
Service Data Objects (SDO). If you have existing XSL style sheets you might be
able to reuse them with the XSLT primitive; and if you have existing business
object maps you might be able to reuse them with the Business Object Map
primitive. Some kinds of transformation are easier to perform in XSL, and others
using a business object map.
Root root:
An XPath 1.0 expression that specifies the root of the transformation. This property
is used for both the input message and the transformed message.
Specifies the name of the business object map that the mediation primitive uses.
The business object map is used to transform data between the input and output
business objects.
containsRelationships:
Specifies whether the Business Object Map mediation primitive uses a Business
Object map that uses the relationship capability.
Considerations:
v If the Mapping File property is not valid it causes an exception at run time.
v The order of transformations can be important.
v The business object map associated with the Business Object Map mediation
primitive can be stored in the mediation module, or in a library project that you
declare as a dependency of the mediation module.
v If you use dynamic relationships in the Business Object Map mediation primitive
across request and response flows, the display name of the primitive in the
response flow must match the corresponding primitive name in the request flow.
Introduction
You can use the Custom Mediation primitive to implement your own mediation
logic in Java code.
By default, a Custom Mediation primitive has one input terminal (in), one output
terminal (out), and one fail terminal (fail). However, you can add more input and
output terminals. The input terminals are wired to accept a message; the output
and fail terminals are wired to propagate the message. The input message is
passed as the input parameter to the Java code. If the operation returns
successfully, the response from the Java code is propagated to an output terminal.
If the operation returns unsuccessfully, the fail terminal propagates the original
message, together with any exception information.
Details
You can define your own properties in a Custom Mediation primitive by going to
the User Properties tab of the Properties view. You can add, edit and remove user
properties.
You create your Java code by going to the Details tab of the Properties view, and
adding either Java snippets or visual snippets. Because mediation primitives
process messages as service message objects (SMOs), the visual snippets for
processing messages are in the SMO services folder, in the Visual Snippet view.
You specify the Java imports your code needs on the Java Imports tab of the
Properties view.
For most mediation primitives, the mediation flow editor detects the message types
and only allows you to wire primitives that have compatible message types.
However, for Custom Mediation primitives, the editor does not know the message
types. Therefore, before you wire your Custom Mediation primitive, specify its
message types in the Terminal tab of the Properties view.
Migration
Any deployed version 6.0.x Custom Mediation flows continue to work without
migration or redeployment.
Version 6.0.2 Custom Mediation primitives that contain embedded Java or visual
code, behave as follows:
v Continue to work without migration.
v Are marked as deprecated, in WebSphere Integration Developer.
v Cannot have more than one input terminal and one output terminal.
v The Root property is now read-only and is no longer available in WebSphere
Integration Developer.
v Keep the method signature of the implementation as commonj.sdo.DataObject
execute(commonj.sdo.DataObject input1). The implementation is modifiable
using the embedded Java or visual editors.
v Can be migrated to version 6.1.x using the fix in the Problems view.
Usage
You can use the Custom Mediation primitive to do processing that is not covered
by other mediation primitives.
You can create complex routing patterns. For example, you could route the input
SMO to each of your output terminals, in a specific order. The following Java code
would fire out1 first, then out2, and lastly out:
out1.fire(smo);
out2.fire(smo);
out.fire(smo);
You can create complex transformations. For example, you could extract different
parts of the input message to send to the different output terminals.
In the same way as you can make use of multiple output terminals, you can also
make use of multiple input terminals. For example, you could create a
fault-handling node by creating multiple input terminals each for a different
terminal type, and in your Java code create a new output message that contains
the /context/failInfo/failureString information.
javaCode:
Specify the Java code that will be run by the Custom mediation. This is the same
information that is specified within the details section of the Custom mediation
primitive panel. Either this property or the javaClass property must be specified.
Operation operation:
Reference serviceReferenceName:
javaClass:
Specify the Java class that will be run by the Custom mediation. Either this
property or the javaCode property must be specified.
version:
You can define User Properties for your Custom Mediation. A user property must
have a name, a type, and a value. After you have created a user property, you
can promote it so that the runtime administrator can change the value.
Name name
The name of your property. The valid type is String.
Type type
The data type of your property. The valid types are: String, Boolean, Integer, Float, and XPath.
Value value
The value of your property. Whether a value is valid depends on the data type you assign to the user property.
Required required
If a property is marked as Required, the run time checks whether the property is set. The valid type is Boolean, with a value of true or false. The default is false.
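The type checking implied by the property descriptions above could be sketched as follows; the validation rules here are illustrative assumptions, not the runtime's actual implementation.

```java
public class UserPropertyCheck {
    // Hedged sketch: validate a user-property value against its declared
    // type. The type names (String, Boolean, Integer, Float, XPath) come
    // from the documentation; the rules below are assumptions.
    public static boolean isValid(String type, String value) {
        if (value == null) return false;
        switch (type) {
            case "String":
            case "XPath":   // any non-null string accepted in this sketch
                return true;
            case "Boolean":
                return value.equals("true") || value.equals("false");
            case "Integer":
                try { Integer.parseInt(value); return true; }
                catch (NumberFormatException e) { return false; }
            case "Float":
                try { Float.parseFloat(value); return true; }
                catch (NumberFormatException e) { return false; }
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("Integer", "42"));  // true
        System.out.println(isValid("Boolean", "yes")); // false
    }
}
```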
javaImports:
Specify a semicolon-delimited list of fully qualified Java class names that need
to be imported to run the code specified for the javaCode property; for example:
javaImports="java.util.List;java.util.ArrayList".
Context
The following code sample shows you how to create a new ServiceMessageObject
object inside a Custom Mediation primitive. The Custom Mediation primitive is
used to replace the incoming DataObject object.
Requirements
You must know how to set up a mediation flow with a Custom Mediation
primitive.
Java imports
import javax.xml.namespace.QName;
import com.ibm.websphere.sibx.smobo.ServiceMessageObject;
import com.ibm.websphere.sibx.smobo.ServiceMessageObjectFactory;
Code sample
.
.
.
QName qName = new QName("http://Examples/Interface", "Operation1RequestMsg");
ServiceMessageObjectFactory smoFactory = ServiceMessageObjectFactory.eINSTANCE;
ServiceMessageObject smo = smoFactory.createServiceMessageObject(qName);
This code sample creates a new ServiceMessageObject object with empty headers
and a business object for the body. An exception occurs if the code sample fails.
Further information
To create the QName object, you must import javax.xml.namespace.QName. The
QName constructor takes two arguments, both of which can be found in the
interface WSDL file. The first is the namespace URI, which is the value of
xmlns:tns in the wsdl:definitions line. The second is the input or output
message name, which is the value of the message attribute in wsdl:input or
wsdl:output.
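Because javax.xml.namespace.QName is part of the Java standard library, the construction can be tried in isolation. The namespace URI and message name below mirror the sample values above; in your own flow they come from your interface WSDL.

```java
import javax.xml.namespace.QName;

public class QNameExample {
    public static void main(String[] args) {
        // First argument: the xmlns:tns value from the wsdl:definitions line.
        // Second argument: the message name from wsdl:input or wsdl:output.
        QName qName = new QName("http://Examples/Interface", "Operation1RequestMsg");
        System.out.println(qName.getNamespaceURI()); // http://Examples/Interface
        System.out.println(qName.getLocalPart());    // Operation1RequestMsg
    }
}
```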
Context
The following code sample shows you how to create a new JMS header for the
ServiceMessageObject object inside a Custom Mediation primitive. The Custom
Mediation primitive is used to replace the JMS header inside an incoming
ServiceMessageObject object.
Requirements
You must know how to set up a mediation flow with a Custom Mediation
primitive.
Java imports
import com.ibm.websphere.sibx.smobo.JMSHeaderType;
import com.ibm.websphere.sibx.smobo.ServiceMessageObjectFactory;
Code sample
.
.
.
ServiceMessageObjectFactory smoFactory = ServiceMessageObjectFactory.eINSTANCE;
JMSHeaderType jmsHeader = smoFactory.createJMSHeaderType();
This code sample creates a new JMS header. An exception occurs if the code
sample fails.
Context
The following code sample shows you how to create a new SOAP header for the
ServiceMessageObject object inside a Custom Mediation primitive. The Custom
Mediation primitive is used to replace the SOAP header inside an incoming
ServiceMessageObject object.
Requirements
You must know how to set up a mediation flow with a Custom Mediation
primitive.
Java imports
import com.ibm.websphere.sibx.smobo.SOAPHeaderType;
import com.ibm.websphere.sibx.smobo.ServiceMessageObjectFactory;
Code sample
.
.
.
ServiceMessageObjectFactory smoFactory = ServiceMessageObjectFactory.eINSTANCE;
SOAPHeaderType soapHeader = smoFactory.createSOAPHeaderType();
This code sample creates a new SOAP header. An exception occurs if the code
sample fails.
Context
The following code sample shows you how to create a new MQ header for the
ServiceMessageObject object inside a Custom Mediation primitive. The Custom
Mediation primitive is used to replace the MQ header inside an incoming
ServiceMessageObject object.
Requirements
You must know how to set up a mediation flow with a Custom Mediation
primitive.
Java imports
import com.ibm.websphere.sibx.smobo.MQHeaderType;
import com.ibm.websphere.sibx.smobo.ServiceMessageObjectFactory;
Code sample
.
.
.
ServiceMessageObjectFactory smoFactory = ServiceMessageObjectFactory.eINSTANCE;
MQHeaderType mqHeader = smoFactory.createMQHeaderType();
This code sample creates a new MQ header. An exception occurs if the code
sample fails.
Context
The following information tells you about accessing the MQCIH header data
within an inbound message.
Requirements
Results
There are several ways the MQCIH header data can be accessed and used from the
MQHeader. For example, a user application can use a Message Logger mediation
primitive to log the contents of the MQCIH, or use a Set Message Type mediation
primitive to set the type of the value field to an MQCIH. A user application can also
use a mapping node to copy data from the MQCIH to another field in the same
message.
Context
The following information tells you about querying data in a WSRR instance using
a Custom Mediation primitive. This allows the Custom Mediation primitive to add
information contained in WSRR to the ServiceMessageObject object.
Requirements
You must know how to set up a mediation flow with a Custom Mediation
primitive. You must have a WSRR instance defined in the server administrative
console and the WSRR instance must be available.
Java imports
import com.ibm.wsspi.sibx.mediation.wsrr.client.ServiceRegistryProxy;
import com.ibm.wsspi.sibx.mediation.wsrr.client.ServiceRegistryProxyFactory;
import com.ibm.wsspi.sibx.mediation.wsrr.client.data.ServiceRegistryDataGraphList;
import com.ibm.wsspi.sibx.mediation.wsrr.client.exception.ServiceRegistryProxyException;
import com.ibm.wsspi.sibx.mediation.wsrr.client.jaxrpc.types.DataGraphType;
import com.ibm.wsspi.sibx.mediation.wsrr.client.jaxrpc.BaseObject;
import com.ibm.wsspi.sibx.mediation.wsrr.client.jaxrpc.WSDLPort;
import com.ibm.wsspi.sibx.mediation.wsrr.client.jaxrpc.WSDLService;
import java.util.Iterator;
Code sample
.
.
String endpointAddress = null;
// Get the factory and then get the default instance of a ServiceRegistryProxy.
// You can also use getServiceRegistryProxy("RegistryName").
ServiceRegistryProxy srProxy = ServiceRegistryProxyFactory.getInstance().getDefaultServiceRegistryProxy();
try
{
    // Set up the WSRR XPath query
    String query = "/WSRR/WSDLService/ports[binding(.)/portType(.)[@name = 'AccountManagement' and " +
        "@namespace='http://www.example.org/AccountManagement/']]";
    // If we have successfully retrieved an endpoint from WSRR, set it into the SMO
    if (endpointAddress != null)
    {
        smo.setString("headers/SMOHeader/Target/address", endpointAddress);
    }
.
.
Introduction
You can use the Data Handler mediation primitive to transform a targeted section
of a message. The starting point of a transformation is indicated by an XPath
expression.
The Data Handler mediation primitive can be used when you are trying to
integrate services, and parts of the message are in a format that is not easy to
manipulate with existing mediation primitives. The Data Handler primitive can
also be used to transform the data in a message into a business object (BO) that
can be manipulated by other mediation primitives. The part of the message not
referenced by the XPath expression is not modified.
The Data Handler primitive has one input terminal (in), one output terminal (out),
and a fail terminal (fail). The in terminal is wired to accept a message and the
other terminals are wired to propagate a message. The input message triggers a
transformation and if the transformation is successful, the out terminal propagates
the modified message. If an exception occurs during the transformation, the fail
terminal propagates the input message, together with any exception information
contained in the failInfo element.
Details
The Data Handler primitive gives you a simple mechanism for manipulating
messages, using the data handler framework in the run time. You can change any
section of the service message objects (SMO) using the Data Handler primitive.
You need to configure a data handler for this primitive. You can generate a
configuration file using the Binding Resource Configuration panel. You must
ensure that the data handler is compatible with the source and target elements
identified by the XPath expressions.
Usage
You can use the Data Handler primitive to convert a physical format, such as a
Text string inside a JMS Text Message object, into a logical BO structure and back
again. This allows the transformation of the physical format into the specific BO to
be completed in the mediation flow component, instead of the export and import.
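As an illustration of the conversion idea only (not the runtime's data handler SPI), the following sketch turns a delimited text payload, such as might arrive in a JMS Text Message, into a map standing in for the business object. The payload format and field names are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TextToBoSketch {
    // Parse a semicolon-delimited "key=value" payload into a map that
    // stands in for the logical business object. A real data handler
    // implements the runtime's data handler framework instead.
    public static Map<String, String> toBusinessObject(String nativeText) {
        Map<String, String> bo = new LinkedHashMap<>();
        for (String pair : nativeText.split(";")) {
            String[] kv = pair.split("=", 2);
            bo.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return bo;
    }

    public static void main(String[] args) {
        // Hypothetical text payload from a JMS Text Message.
        Map<String, String> bo = toBusinessObject("accountID=A100;creditLimit=5000");
        System.out.println(bo.get("creditLimit")); // 5000
    }
}
```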
A common use of the Data Handler primitive is the following.
In a service gateway, you can inflate the messages to a concrete definition so that
body manipulation is possible. The Data Handler primitive provides this capability
for document literal wrapped and single-part document literal messages. To use
this capability, configure the Data Handler primitive in a request flow as detailed
in the example code below:
Source XPath: /body/message/value
Target XPath: /body
Data handler configuration: Point to a configuration of the XML data handler without any values specified for the root name or namespace
Action: Convert from native data format to a business object
Specify the Data handler configuration that the primitive should use, as
created by the binding resource configuration wizard. The Data handler
configuration is used at run time to select the correct data handler to invoke
and to pass in any associated parameter values.
Generally, you refine fields that are defined by the following XML schema weak
types: xsd:any, xsd:anyType, and xsd:anySimpleType. You specify the XPath of the
message field you want to refine, and the data type to use. IBM Integration
Designer provides a graphical interface to help you specify the XPath and data
type. You can specify refinements for more than one message field, but duplicate
entries for the same message field are not allowed.
Action action:
Convert from native data format to a business object calls the transform method of
the data handler to deserialize the data. Convert from a business object to native
data format calls the transformInto method of the data handler to serialize the
data.
Convert from native data format to a business object (0)
Convert from a business object to native data format (1)
Note:
Default: Convert from native data format to a business object
The location of the object, in the source SMO, that should be passed into the data
handler, for example, /body/operation1/input1/value.
The location, in the target SMO, where the output from the data handler should be
stored. (The target SMO is defined as the type of the output terminal.)
Introduction
The Database Lookup mediation primitive can add to, or change, messages. It does
this using information from a user-supplied database.
The Database Lookup mediation primitive has one input terminal (in), two output
terminals (out and keyNotFound), and a fail terminal (fail). The in terminal is
wired to accept a message and the other terminals are wired to propagate a
message. The out terminal is used if the key is located both in the message and the
database. In this case, the information obtained from the database is stored in the
message and the updated message is propagated. The keyNotFound terminal is
used if the key is found in the message, but not in the database. In this case, the
original message is propagated unchanged. If an exception occurs during the
processing of the input message, the fail terminal propagates the original message,
together with any exception information.
Details
Given a database key, the Database Lookup mediation primitive looks up values in
a database and stores them as elements in the message.
The information obtained from the database might need converting before it can be
stored in the message; you can specify the information type using the Type
property. At run time, if the information obtained from the database cannot be
converted to the type expected by the message, an exception occurs.
Usage
You can use the Database Lookup mediation primitive to ensure information in a
message is up-to-date.
You can use the Database Lookup mediation primitive to add information to a
message, using a key contained in a message. For example, the key could be an
account number.
It is often useful to combine the Database Lookup mediation primitive with other
mediation primitives. For example, you might use an XSL Transformation (XSLT)
mediation primitive to manipulate data, before or after the Database Lookup is
invoked.
Table tableName:
The name of the database table, including the schema name; for example,
myschema.mytable.
The name of the database's primary key column. The specified Search column must
contain a unique value; multi-column database keys are not supported. In addition,
the unique value must be of the same element type as the value located in the
message using the Search location.
Where, in the input message, to find the database key. Specified as an XPath 1.0
expression; the value returned from the XPath expression is used as the key into
the database.
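A minimal sketch of the key-lookup idea, using the standard javax.xml.xpath API: an XPath 1.0 expression applied to the input message yields the value used as the database key. The message shape below is invented for illustration; a real SMO is not a plain DOM document.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class KeyPathSketch {
    // Evaluate an XPath 1.0 expression against an XML message and return
    // the string value, which would serve as the database key.
    public static String extractKey(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical message carrying an account number as the key.
        String message = "<body><accountNumber>12345</accountNumber></body>";
        System.out.println(extractKey(message, "/body/accountNumber")); // 12345
    }
}
```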
If you select the check box, the input message is validated before the mediation is
performed.
Column valueColumnName
The name of the database column from which to copy information. The valid type is String.
Type messageValueType
The information type: the only supported String types are a simple XML schema type, or an XML schema type that extends a simple XML schema type. At run time, the value obtained from the database is converted to the type defined by the Type property. Java primitive types or string are supported only for compatibility with old mediation flows.
Introduction
You can use the Endpoint Lookup mediation primitive to retrieve service endpoint
information from a WSRR registry that can be local or remote. The service
endpoint information can relate directly to Web services, or to Service Component
Architecture (SCA) module exports, or to manually defined MQ, JMS and HTTP
services.
In order to use the Endpoint Lookup mediation primitive you might need to add
service endpoint information to your WSRR registry. If you want to extract service
endpoint information about Web services, your WSRR registry must contain either
the appropriate WSDL documents, or SCA modules that contain exports using Web
service bindings. If you want to extract service endpoint information about exports
that use the other SCA bindings, your WSRR registry must contain the appropriate
SCA modules. If you want to extract service endpoint information about services
that are accessed using MQ, JMS or HTTP but are not defined in a SCA module,
your WSRR registry must contain an appropriate “Manual JMS/HTTP/MQ
endpoint with associated Interface” business object with the endpoint
information and associated interface correctly set.
The Endpoint Lookup mediation primitive lets you retrieve service endpoint
information that relates to the following:
v Web services using SOAP/HTTP.
v Web services using SOAP/JMS.
v SCA module exports with Web service bindings, using SOAP/HTTP.
v SCA module exports with Web service bindings, using SOAP/JMS.
v SCA module exports with the default SCA binding.
v SCA module exports with the MQ binding.
v SCA module exports with the MQ JMS binding.
v SCA module exports with the JMS binding.
v SCA module exports with the Generic JMS binding.
v SCA module exports with the HTTP binding.
v Manually defined MQ endpoints with associated interfaces.
v Manually defined JMS endpoints with associated interfaces.
v Manually defined HTTP endpoints with associated interfaces.
The Endpoint Lookup mediation primitive has one input terminal (in), two output
terminals (out and noMatch), and a fail terminal (fail). The in terminal is wired to
accept a message and the other terminals are wired to propagate a message. If an
exception occurs during the processing of the input message, the fail terminal
propagates the input message, together with any exception information. If service
endpoints are retrieved from the registry, the out terminal propagates the original
service message object (SMO) modified by the service endpoint information. If no
services are retrieved from the registry, the noMatch terminal propagates an
unmodified message.
Note: In order for the run time to implement dynamic routing on a request, you
must set the Use dynamic endpoint if set in the message header property in the
callout node or Service Invoke mediation primitive. You can specify a default
endpoint that the run time uses if it cannot find a dynamic endpoint. You specify a
default endpoint by wiring an import that has the default service selected.
Details
The Endpoint Lookup mediation primitive uses the Endpoint Reference structure
defined by the WS-Addressing specification. For more information, see:
http://schemas.xmlsoap.org/ws/2004/08/addressing.
Updates made to the SMO by the Endpoint Lookup mediation primitive are
dependent on the success of the registry query (whether matches are found
during the registry query) and the match policy.
The Endpoint Lookup mediation primitive can make updates to both the SMO
context (the primitiveContext element) and to SMO headers:
v /headers/SMOHeader/Target/address.
– Can contain the address of a service to invoke dynamically (the dynamic
callout address).
v /context/primitiveContext/EndpointLookupContext.
– Can contain the results of the WSRR query.
v /headers/SMOHeader/AlternateTarget
– Can contain a list of alternate service addresses. For more information on the
retry function, see: Combining multiple services.
If the Endpoint Lookup mediation primitive updates the SMO with one or more
endpoint addresses, it will also update the SMO so that each endpoint address has
an associated bindingType. The bindingType set by the Endpoint Lookup mediation
primitive can be WebService, SCA, MQ, MQJMS, JMS, GenericJMS, or HTTP.
Usage
You can use the Endpoint Lookup mediation primitive to dynamically route
messages based upon customer classification. For example, you might want
messages for customers of type A routed to URL X, and messages for customers of
type B routed to URL Y. If you set up your registry to key URLs against customer
types, then you can dynamically route customer requests according to customer
type.
If you have created versioned SCA modules, and have loaded these SCA modules
into WSRR, you might want to use the latest compatible version of the service
provider that is available.
To find the latest compatible version of the provider service, use the Return
endpoint matching latest compatible service version Match Policy. In this case,
the mediation primitive queries the registry for all SCA module exports with the
supplied properties.
For example, if WSRR returns a list of services with the following version
numbers:
1.0.0
1.0.1
1.0.2
the endpoint for version 1.0.2 is selected.
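The selection of the newest version from such a list can be sketched as follows. The V.R.M comparison logic here is an assumption for illustration; the documented behavior is simply that the newest compatible match wins.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LatestVersion {
    // Parse a V.R.M version string into its three integer fields.
    static int[] parse(String vrm) {
        return Arrays.stream(vrm.split("\\.")).mapToInt(Integer::parseInt).toArray();
    }

    // Return the highest version by numeric field-by-field comparison.
    public static String latest(List<String> versions) {
        return versions.stream()
                .max(Comparator.comparing(LatestVersion::parse,
                        (a, b) -> Arrays.compare(a, b)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(latest(List.of("1.0.0", "1.0.1", "1.0.2"))); // 1.0.2
    }
}
```

Note that a numeric comparison differs from a string comparison: 1.0.10 sorts after 1.0.2.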
SOAP/HTTP example
The URI format in the case of an export with a Web service binding, is as follows:
http://<host>:<port>/<moduleName>/sca/<exportName>
The URI format in the general Web service case, (when a Web service is not
implemented by an export with a Web service binding), is as follows:
http://<host>:<port>/<service>
SOAP/JMS example
The URI format in the case of an export with a Web service binding, is as follows:
jms:/queue?destination=jms/WSjmsExport&connectionFactory=jms/WSjmsExportQCF&targetService=WSjmsExport_ServiceBJmsPort
The URI format in the general Web service case, (when a Web service is not
implemented by an export with a Web service binding), is as follows:
jms:/queue?destination=<destName>&connectionFactory=<factory>&targetService=<service>
The URI format in the case of an export with the default SCA binding, is as
follows:
sca://<moduleName>/<exportName>
For services that use the SCA binding, the URI is never physically present in the
resources that define that service.
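The URI shapes above can be assembled mechanically from a module and export name, as in the following sketch; the module, export, host, and port values are invented for illustration.

```java
public class EndpointUris {
    // Default SCA binding: sca://<moduleName>/<exportName>
    public static String scaUri(String module, String export) {
        return "sca://" + module + "/" + export;
    }

    // Export with a Web service binding over SOAP/HTTP:
    // http://<host>:<port>/<moduleName>/sca/<exportName>
    public static String webServiceUri(String host, int port,
                                       String module, String export) {
        return "http://" + host + ":" + port + "/" + module + "/sca/" + export;
    }

    public static void main(String[] args) {
        // Hypothetical module and export names.
        System.out.println(scaUri("AccountModule", "AccountExport"));
        // sca://AccountModule/AccountExport
        System.out.println(webServiceUri("localhost", 9080, "AccountModule", "AccountExport"));
        // http://localhost:9080/AccountModule/sca/AccountExport
    }
}
```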
Name portTypeName:
Search the registry for services that implement a particular portType, the name of
which is specified by the Name property.
Namespace portTypeNamespace:
Search the registry for services that implement a particular portType, the
namespace of which is specified by the Namespace property. The Namespace can be
specified in any valid namespace format, for example, URI or URN.
If the registry has more than one service matching your query, the Match Policy
determines how many service endpoints should be added to the message. It also
determines whether the matching should select exact matches or the latest
compatible service versions.
If no match is found during a registry query, regardless of the match policy, the
noMatch output terminal is fired and the original input SMO is propagated.
The following table summarizes the effect of the match policy on the SMO
elements:
Table 11. Effect of match policy on SMO

Match Policy        First                 All                   All and Alternates    Latest compatible
Output terminal     out       noMatch     out       noMatch     out       noMatch     out       noMatch
Target              update    no action   clear     no action   update    no action   update    no action
Primitive context   update    no action   update    no action   update    no action   update    no action
Alternate targets   clear     no action   clear     no action   update    no action   clear     no action
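As a reading aid, the Target-address behavior on the out path can also be encoded as data; the values below restate the table above, nothing more.

```java
import java.util.Map;

public class MatchPolicyTarget {
    // What happens to /headers/SMOHeader/Target/address when the out
    // terminal fires, per match policy (restated from the table above).
    public static final Map<String, String> TARGET_ON_OUT = Map.of(
            "First", "update",
            "All", "clear",
            "All and Alternates", "update",
            "Latest compatible", "update");

    public static void main(String[] args) {
        System.out.println(TARGET_ON_OUT.get("All")); // clear
    }
}
```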
routing target 1
v If the registry query returns matches, the following occurs:
– The dynamic callout address, in the SMO header, is updated with one service address from the results returned.
– The SMO context is updated with registry information relating to the address in the dynamic callout address.
– The alternate targets list, in the SMO header, is cleared.
Search the registry for services that match a specified binding type, WebServices
and SCA bindings together, or any binding type.
Web Services (1)
MQ (2)
MQ JMS (3)
JMS (4)
Generic JMS (5)
HTTP (6)
SCA (7)
Version portTypeVersion:
The Version property can be used in one of two ways. If you are using portTypes,
this field holds the version of the portType. However, if you are using versioned
SCA modules, this field holds the version of the SCA module.
If you are using portTypes, the registry is searched for services that implement a
particular portType.
For versioned SCA modules, the format of the Version property is V.R.M, where V
is an integer version number, R is an integer release number, and M is an integer
modification number. The version supplied can include one or more wildcard (*)
characters. For example, a Version property of 1.0.* finds the endpoint of the
latest service version that matches 1.0.*.
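Wildcard matching of this form can be sketched as follows; the translation to a regular expression is an illustrative assumption, not the registry's actual implementation.

```java
public class VersionWildcard {
    // Match a V.R.M version against a pattern where * stands for any
    // integer field, e.g. 1.0.* matches 1.0.0, 1.0.7, 1.0.12, ...
    public static boolean matches(String pattern, String version) {
        String regex = pattern.replace(".", "\\.").replace("*", "[0-9]+");
        return version.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(matches("1.0.*", "1.0.7")); // true
        System.out.println(matches("1.0.*", "1.1.0")); // false
    }
}
```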
Module moduleName:
The SCA module name. Only used when the Match Policy property is set to
Return endpoint matching latest compatible service version. Search the
registry for SCA modules with a name that matches the unversioned name of the
specified Module.
Export exportName:
The SCA export name. Only used when the Match Policy property is set to Return
endpoint matching latest compatible service version. Search the registry for
SCA exports with a name that matches the specified Export name.
Note:
v If you want to extract endpoint information about exports that use the default
SCA binding then, in WSRR, you must specify any classifications on the SCA
Export objects that you want to retrieve. Make sure that you put classifications
on the SCA Exports and not on the SCA Export documents.
v If you want to extract endpoint information about Web services then, in WSRR,
you must specify any classifications on the appropriate WSDL Port objects. The
WSDL Port objects must implement a port type that you describe using the
Endpoint Lookup properties relating to portType.
WSRR classifies objects using the Web Ontology Language (OWL), in which each
classifier is a class and has a URI. OWL supports a simple hierarchical system.
For example, a bank account could start with the following details:
v Account
– Identifier
– Name
Search the registry for services that are annotated with user-defined properties.
Note:
v If you want to extract endpoint information about exports that use the default
SCA binding then, in WSRR, you must specify any user properties on the SCA
Export objects that you want to retrieve. Make sure that you put user properties
on the SCA Exports and not on the SCA Export documents.
v If you want to extract endpoint information about Web services then, in WSRR,
you must specify any user properties on the appropriate WSDL Port objects. The
WSDL Port objects must implement a port type that you describe using the
Endpoint Lookup properties relating to portType.
Name name
The name of the user-defined property. The valid type is String.
Type type
The type of the user-defined property. If the type is String, then what you specify as the Value is used as a literal in the search query. If the type is XPath, then what you specify as the Value must be an XPath expression. The XPath expression must resolve to a unique leaf node in the inbound SMO. The value of the leaf node is used in the search query.
Value value
The value of the user-defined property. This can be either a literal value or an XPath expression, depending upon the Type property.
Introduction
You can use the Event Emitter mediation primitive to send out events, during a
mediation flow. Because the events are generated to conform to the Common Base
Event specification, they have a standard XML-based format. The events are sent to
a Common Event Infrastructure (CEI) server. For information on CEI resources and
services refer to the runtime product documentation and runtime online help.
You can decide whether generated events contain the message being processed, or
not.
v If you set the Root property, you can generate events that contain all or part of
the message being processed. The Root property is an XPath expression that
specifies the section of the Service Message Object (SMO) that is included in the
event. The Root property can specify the complete SMO, a subtree or a leaf
element.
Note: At run time, if the Root value does not match any of the elements in the
SMO, the message being processed is not included in the generated event.
v If you do not set the Root property, any generated events do not contain the
message being processed.
The Event Emitter mediation primitive has one input terminal (in), one output
terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. The out
terminal propagates the original message. If an exception occurs during the
processing of the input message, the fail terminal propagates the original message,
together with any exception information contained in the failInfo element.
Details
If a generated event contains the message being processed, the event format can be
version 6.1 or version 6.0.2. By default, Event Emitter mediation primitives emit
events in version 6.1 format, when running on a version 6.1 system.
The version 6.1 format stores the message as an XML element in the xsd:any slot
of an event. Because the message is stored in an xsd:any slot, you can retrieve it
as an XML instance and use existing XML technologies to process it efficiently.
The version 6.0.2 format stores the message in the event's extended data elements,
after either shredding the message or serializing it as a hex value. You can choose
to emit events in the version 6.0.2 format, by choosing the appropriate setting in
WebSphere Integration Developer.
To fully use Event Emitter information, event consumers need to understand the
structure of the Common Base Event specification. The mechanism for modeling
6.1 format
A 6.1-format event, generated by the Event Emitter primitive, can contain the
elements described in the following table.
Table 12. 6.1-format event elements

Element   Sub-elements                       Description
event     eventPointData/ModuleName          The name of the module instance
          eventPointData/MediationName       The name of the Event Emitter instance
          eventPointData/EventEmitterLabel   The Label property value
          eventPointData/Root                The Root property value
          applicationData/content/value      Application data
Note:
v If a generated event does not contain the message being processed, the applicationData child element of the event element is absent.
v If the Root property of the Event Emitter primitive specifies a single leaf element, the applicationData element contains the value of the leaf element. At run time this would generate the event element event/applicationData/content/value.
v If the Root property of the Event Emitter primitive specifies a business object, the applicationData element stores the specified business object. For example, if the business object contained an accountID and creditLimit, at run time this would generate the event elements event/applicationData/content/value/accountID and event/applicationData/content/value/creditLimit.
6.0.2 format
A 6.0.2-format event, generated by the Event Emitter primitive, always contains the
three extended data elements described in the following table.
Table 13. 6.0.2-format mandatory extended data elements
Extended Data Element Description
ModuleName The name of the module instance
MediationName The name of the Event Emitter instance
Root The Root property value
Usage
Use the Event Emitter mediation primitive to indicate an unusual event, for
example, notification of a failure in the flow or an unusual path through the flow. When
you place event emitters in a flow you should consider the possible number of
events that can be generated, and the size of the message that will be stored in the
event. Placing an event emitter in the normal flow path generates many events
compared to placing it in an unusual event or failure path. In addition, consider
configuring the emitter to store only significant message data, rather than the
complete message, to reduce the overall size of the event.
You can use the Event Emitter mediation primitive to record a failure in another
mediation primitive, and then continue processing. To do this you wire the fail
terminal of the previous mediation primitive to the input terminal of the Event
Emitter mediation primitive; and wire the output terminal of the Event Emitter
mediation primitive to the next step in the flow.
You can also use the Event Emitter mediation primitive, in combination with the
Message Filter mediation primitive, to generate business events based on message
content. To do this you wire one of the output terminals of the Message Filter
mediation primitive to the input terminal of the Event Emitter mediation primitive;
and wire the output terminal of the Event Emitter mediation primitive to the next
step in the flow.
The events emitted by the Event Emitter mediation primitive conform to the
Common Base Event specification 1.0.1 structure. The event specification is part of
the IBM Autonomic Computing Toolkit.
<xsd:complexType name="MySubBO">
<xsd:sequence>
<xsd:element name="value1" type="xsd:string"/>
<xsd:element name="value2" type="xsd:int"/>
</xsd:sequence>
</xsd:complexType>
Note: If the Root property is not specified as part of the Event Emitter
primitive property configuration, or if Root is specified but the property
does not evaluate to an element in the message, the Message extended data
element does not appear in the event.
Enabled enabled:
By default, the mediate action of this mediation primitive is enabled. You can
suspend the mediate action by clearing the check box.
Label label:
The default is a combination of module name, flow name, and flow type. The flow
type indicates whether the flow is a request or a response.
Root root:
An XPath 1.0 expression representing the part of the message to be included in the
event.
Lets you override the transaction mode set on the emitter. (An event source, such
as an Event Emitter mediation primitive, does not interact directly with the event
server; instead it interacts with an object called an emitter.) The transaction mode
can be configured in the CEI infrastructure or overridden at the Event Emitter
mediation primitive level.
Valid values:
v Existing (1): Events are sent to the CEI server in the flow's transaction.
v New (2): Events are sent to the CEI server outside the flow's transaction.
Default: Default
Considerations:
v If a problem occurs when an event is sent to the CEI server, a runtime exception
occurs and the fail terminal of the mediation primitive is triggered.
v If you use the Event Emitter mediation primitive to record a failure in another
mediation primitive, and then explicitly cause the flow to fail (by wiring the
Event Emitter out terminal to the Fail mediation primitive), you must consider
the runtime implications. If the mediation module is configured to run inside a
global transaction, the Event Emitter mediation primitive must be configured to
send events in a New transaction. Otherwise, the event created by the Event
Emitter mediation primitive could be rolled back (and lost).
Introduction
You can use the Fail mediation primitive to halt a mediation flow and raise an
exception at a point you choose in the flow. You can also add information about a
failure.
The Fail mediation primitive has one input terminal (in). The input terminal is
wired to accept a message that triggers a FailFlowException. A FailFlowException
is a specific runtime exception that causes the flow instance to fail. You can wire
the output terminal of another mediation primitive to the in terminal of a Fail
mediation primitive, to cause a FailFlowException.
You can use the Fail mediation primitive to define your own error conditions,
based on the business logic of the flow.
You can use the Error message property to provide an additional error message
that is specific to your business logic or domain. The Error message you create is
added to the automatically generated exception.
You can use the Root and Error message properties to specify that information
from the service message object (SMO) is also included in the Error message, so
that the automatically generated exception contains dynamic information about the
state of the SMO. The default value for the Error message property is {0}, {1}, {2},
{3}, {4}, {5} meaning that:
v {0} would be substituted with the Time Stamp value
v {1} would be substituted with the SMO Message ID value
v {2} would be substituted with the Mediation Name value
v {3} would be substituted with the Module Name value
v {4} would be substituted with the SMO part defined by the Root property XPath.
By default this is /context/failInfo
v {5} would be substituted with the SMO Version value
A generated FailFlowException for the default Error message and Root would
therefore contain information such as:
’29/04/09 15:11, 9A85B1D2-0119-4000-E000-13E4091443BC, Fail1, MyModule, <failInfo>...</failInfo>, 6’
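The substitution of the {0} to {5} inserts can be sketched with ordinary positional formatting. This is a model of the behavior only, not the product code, and the helper name is invented; the field values are those of the example above:

```python
# Minimal sketch of how the Fail primitive's Error message inserts
# {0}..{5} are filled in from the flow state described above.
def build_fail_message(template, timestamp, message_id, mediation_name,
                       module_name, root_xml, version):
    # Positional substitution: {0}=Time Stamp, {1}=Message ID,
    # {2}=Mediation Name, {3}=Module Name, {4}=Root XPath data, {5}=Version.
    return template.format(timestamp, message_id, mediation_name,
                           module_name, root_xml, version)

msg = build_fail_message(
    "{0}, {1}, {2}, {3}, {4}, {5}",
    "29/04/09 15:11",
    "9A85B1D2-0119-4000-E000-13E4091443BC",
    "Fail1", "MyModule", "<failInfo>...</failInfo>", "6")
print(msg)
```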
You can use the Fail mediation primitive to roll back a global transaction under
certain conditions. For example, if you wire an output terminal of a Message Filter
mediation primitive to a Fail mediation primitive, the transaction is rolled back if
the filter condition occurs.
An XPath 1.0 expression representing the scope of the message to be inserted into
the Error message at insert {4}. You can specify your own XPath expression. The
default value is the SMO failInfo data at /context/failInfo. If you specify your own
XPath expression, the part of the (SMO) you specify is inserted. The message to be
logged is converted to XML from the point specified by Root.
Introduction
You can use the Fan In mediation primitive to combine multiple messages, which
you create using the Fan Out mediation primitive.
The Fan In mediation primitive receives messages until a decision point is reached,
then the last message to be received is propagated to the output terminal. Three
types of decision point are supported:
v Simple count. When a set number of messages are received at the input
terminal, the Fan In mediation primitive fires the output terminal. The firing of
the Fan In output terminal does not stop the Fan Out mediation primitive from
sending messages. Therefore, the decision point might be reached more than
once, resulting in multiple firings of the output terminal.
v XPath decision. If an XPath evaluation of the input message evaluates to true,
the Fan In mediation primitive fires the output terminal.
v Iterate. The Fan In mediation primitive waits to receive all of the messages
produced by the corresponding Fan Out mediation primitive, and then fires the
output terminal.
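The simple-count decision point can be sketched as follows. The class and method names are invented for illustration; this is not the product API. Note how the decision point is reached on every multiple of the count, so the out terminal can fire more than once:

```python
# Sketch of a count-based Fan In decision point: the out terminal fires
# each time the configured number of messages has been received, and it
# propagates the last message received.
class CountFanIn:
    def __init__(self, count):
        self.count = count
        self.received = []
        self.fired = []

    def on_in(self, message):
        self.received.append(message)
        if len(self.received) % self.count == 0:
            # decision point reached: propagate the last message received
            self.fired.append(message)

fan_in = CountFanIn(count=2)
for m in ["a", "b", "c", "d"]:
    fan_in.on_in(m)
print(fan_in.fired)  # fires twice, each time with the last message received
```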
The Fan In mediation primitive has two input terminals (in and stop), two output
terminals (out and incomplete) and a fail terminal (fail). The input terminals are
wired to accept a message and the other terminals are wired to propagate a
message.
If an exception occurs during the processing of the input message, the fail terminal
propagates the input message, together with any exception information.
The in terminal accepts input messages until a decision point is reached. When a
decision point is reached the out terminal is fired. Although the out terminal is
fired when a decision point is reached, the in terminal can still accept input
messages, under some circumstances. For example, if the count value is reached,
the out terminal is fired. However, if the Fan Out mediation primitive is still
sending messages, the Fan In mediation primitive can still receive messages.
You can stop a Fan In operation by wiring the preceding mediation primitive to
the Fan In stop terminal. The stop terminal causes the incomplete output terminal
to be fired, and this stops the associated Fan Out mediation primitive from sending
any more messages. The incomplete output terminal is also fired if a timeout
occurs.
Details
When the out terminal of the Fan In mediation primitive is fired it propagates the
last message received on its in terminal. The shared context is also propagated; it is
the shared context area of the service message object (SMO) that enables the
aggregation facility. However, in order to aggregate data, you must transform, or
map, the information in the message and shared context into a form appropriate
for the flow downstream of the Fan In mediation primitive. Equally, you might
need to transform, or map, information from the shared context after the Fan Out
mediation primitive before wiring paths to the in terminal of the Fan In mediation
primitive.
The Fan In mediation primitive can only be used in combination with the Fan Out
mediation primitive and, therefore, it must always have an associated Fan Out
mediation primitive. IBM Integration Designer allows you to select which Fan Out
mediation primitive to associate with each Fan In mediation primitive, and
displays the associated Fan Out mediation primitive next to the details of the Fan
In properties.
Usage
The Fan In mediation primitive lets you wait until certain conditions are met,
before processing information created previously in the mediation flow. In this
way, the Fan In mediation primitive lets you create various decision points (based
upon counts, XPath evaluations and iterations). It does not change the input
message in any way.
After the Fan In mediation primitive you might want to transform, or map,
information from the shared context into the body of the message (the SMO /body).
You can use the Fan Out and Fan In mediation primitives to aggregate (combine)
the responses from two service invocations into one output message. For example,
you can retrieve a customer credit score from two credit agencies, then average the
two scores.
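The credit-score example can be sketched as follows. All names are invented for illustration; a plain dictionary stands in for the shared context business object, and the aggregation fires once both responses (a count of 2) have been stored:

```python
# Sketch of the aggregation above: each service response is mapped into
# a shared context, and the combined result is produced only when the
# Fan In decision point (count of 2) is met.
shared_context = {}

def store_response(agency, score):
    shared_context[agency] = score

def aggregate():
    if len(shared_context) == 2:
        return sum(shared_context.values()) / 2
    return None  # decision point not yet reached

store_response("agencyA", 700)
print(aggregate())  # None: only one response stored so far
store_response("agencyB", 650)
print(aggregate())  # 675.0: average of the two credit scores
```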
The shared context area of the SMO is a global storage area you can use to
aggregate data. The shared context is a thread-based storage area: it is shared by
all SMOs that are created for a particular thread. However, the shared context
business object does not persist across a request and response flow, through callout
invocation; whatever data is in the shared context of the request flow cannot be
reused during the response flow.
Like the transient and correlation context, the shared context is defined as a
user-provided business object on the input node of the mediation flow. After you
have defined the shared context you can use it to store data during aggregation
operations. You need to design the shared context business object carefully, to
ensure it is suitable for all aggregation scenarios in a specific flow.
You can stop a Fan In operation by wiring a preceding mediation primitive to the
stop input terminal; for example, if a Service Invocation between the Fan Out and
Fan In mediation primitives fails. This will cause the incomplete terminal to be
fired, and will stop the corresponding Fan Out mediation primitive from firing any
more messages.
The following example shows two service calls being made and their responses
being stored in the shared context. The shared context is then used to create a final
message. The Fan Out mediation primitive is used in the default mode, and the
Fan In mediation primitive has a Count property of 2. This is a simplified example,
and does not show all the terminal wiring.
1. FanOut1 fires the input terminal of XSLT1.
2. XSLT1 creates the appropriate request message for Service A, and fires the
input terminal of ServiceInvoke1.
3. ServiceInvoke1 calls Service A, and fires the input terminal of XSLT3.
4. XSLT3 maps the response from Service A into the shared context and fires the
input terminal of FanIn1, for the first time.
5. Because the FanIn1 count value has not been reached the mediation flow
tracks back to the flow path split, at FanOut1.
6. FanOut1 fires the input terminal of XSLT2.
7. XSLT2 creates the appropriate request message for Service B, and fires the
input terminal of ServiceInvoke2.
8. ServiceInvoke2 calls Service B, and fires the input terminal of XSLT4.
9. XSLT4 maps the response from Service B into the shared context and fires the
input terminal of FanIn1 for the second time.
10. The FanIn1 decision point is now met (the count value has been reached).
Therefore, the FanIn1 primitive fires the input terminal of XSLT5.
11. XSLT5 uses the shared context to create a new message body in the SMO.
Figure 3. Aggregating data using Fan Out, XSLT, Service Invoke and Fan In
The identifier corresponding to the Fan Out mediation primitive that is associated
with the Fan In.
Valid values:
v XPath (1): An XPath expression. The valid type is Boolean: true or false.
v Iterate (2): The Fan In mediation primitive waits to receive all the messages
produced by the corresponding Fan Out mediation primitive, when the Fan
Out primitive is in iterate mode.
Default: Count
Timeout:
The time, in seconds, by which the decision point must be reached. The timeout
period starts when the associated Fan Out fires an output terminal for the first
time. If a message arrives at the Fan In in terminal after this timeout period, it is
considered late and the incomplete terminal is fired.
Count:
The number of messages that must be received at the in terminal before the out
terminal is fired.
XPath:
The XPath expression that must evaluate to true before firing the out terminal.
Considerations:
v The Async Timeout values for Service Invoke mediation primitives should not
exceed the value of the Timeout property of the aggregation's Fan In.
v Certain properties are promotable; therefore, you can change their value from
the runtime administrative console. The following properties can be promoted:
– Count
– XPath
– Timeout
Introduction
You can use the Fan Out mediation primitive to iterate through an input message
that contains a repeating element, and store each instance of the repeating element
in the service message object (SMO) context. The Fan Out mediation primitive does
not change the body of the input message.
The Fan Out mediation primitive has one input terminal (in), two output terminals
(out and noOccurrences), and a fail terminal (fail). The in terminal is wired to
accept a message and the other terminals are wired to propagate a message.
If an exception occurs during the processing of the input message, the fail terminal
propagates the input message, together with any exception information.
In default mode, the out terminal is used to propagate the input message and the
terminal is fired only once. In iterate mode, the out terminal is also used to
propagate the input message, but the terminal is fired once for each occurrence of
the repeating element that you specify.
Details
When in iterate mode the out terminal is fired once for each occurrence of the
repeating element that you specify, using an XPath expression. Each occurrence of
the repeating element is stored in a FanOutContext field.
Usage
You can use the Fan Out and Fan In mediation primitives to aggregate the
responses from two service invocations into one output message. For example, you
could retrieve a customer credit score from two credit agencies, then average the
two scores.
The shared context area of the SMO is for storing aggregation data between a Fan
Out primitive and a Fan In primitive. The shared context is a thread-based storage
area that is shared in the same thread. The content of the shared context business
object does not persist across a request flow and a response flow, through callout
invocation. Whatever data is in the shared context of the request flow cannot be
reused during the response flow. Therefore, you can only aggregate in a particular
flow: a Fan In mediation primitive in a response flow cannot be used to aggregate
messages from a Fan Out mediation primitive in a request flow.
Like the transient and correlation context, the shared context is defined as a
user-provided business object on the input node of the mediation flow. After you
have defined the shared context you can use it to store data during aggregation
operations. You need to design the shared context business object carefully, to
ensure it is suitable for all aggregation scenarios in a specific flow.
The Fan In primitive allows for the aggregation of data that results from the use of
a Fan Out primitive. You can aggregate data by processing the shared context,
using transformations or mappings in other mediation primitives.
Fan Out can be used on its own, or with the Fan In mediation primitive. After a
Fan In primitive has been associated with a Fan Out primitive, its properties
appear with the properties of the Fan Out primitive.
Note: The Fan In mediation primitive cannot be used without the Fan Out
mediation primitive.
The following example shows two service calls being made and their responses
being stored in the shared context. The shared context is then used to create a final
message. The Fan Out mediation primitive is used in the default mode, and the
Fan In mediation primitive has a Count property of 2. This is a simplified example,
and does not show all the terminal wiring.
1. FanOut1 fires the input terminal of XSLT1.
2. XSLT1 creates the appropriate request message for Service A, and fires the
input terminal of ServiceInvoke1.
3. ServiceInvoke1 calls Service A, and fires the input terminal of XSLT3.
4. XSLT3 maps the response from Service A into the shared context and fires the
input terminal of FanIn1, for the first time.
5. Because the FanIn1 count value has not been reached the mediation flow
tracks back to the flow path split, at FanOut1.
6. FanOut1 fires the input terminal of XSLT2.
7. XSLT2 creates the appropriate request message for Service B, and fires the
input terminal of ServiceInvoke2.
8. ServiceInvoke2 calls Service B, and fires the input terminal of XSLT4.
9. XSLT4 maps the response from Service B into the shared context and fires the
input terminal of FanIn1 for the second time.
10. The FanIn1 decision point is now met (the count value has been reached).
Therefore, the FanIn1 primitive fires the input terminal of XSLT5.
11. XSLT5 uses the shared context to create a new message body in the SMO.
Figure 4. Aggregating data using Fan Out, XSLT, Service Invoke and Fan In. The Fan Out mediation primitive is wired
to two XSL Transformation (XSLT) mediation primitives. Each XSLT mediation primitive is wired to a different Service
Invoke, which calls a service. The Service Invokes are wired to different XSLT mediation primitives, and these are both
wired to one Fan In mediation primitive. The Fan In waits until both service invocations have been completed before
firing a final XSLT mediation primitive.
Mode:
Required: Yes
Valid values:
v Once (0): The input message is propagated only once, from the out terminal.
v Iterate (1): The iterate mode requires you to specify the XPath location of a
repeating element in the input message.
Default: Once
When in iterate mode, the out terminal is fired once for each occurrence of the
repeating element that you specify, using an XPath expression.
For example, if the body of the SMO contained the following repeating element
/body/input/accounttype[], an input message might contain the following account
types: /body/input/accounttype[0]=gold and /body/input/
accounttype[1]=platinum. If you set the XPath to /body/input/accounttype[], the
out terminal would be fired twice: once with the FanOutContext containing gold
and once with the FanOutContext containing platinum. If the input message does
not contain any occurrences in the repeating element, the noOccurrences terminal
is fired.
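The accounttype[] example can be sketched as follows. This is an illustrative model only; ElementTree paths stand in for the product's XPath support, and the callback names are invented:

```python
# Sketch of iterate mode: the out terminal fires once per occurrence of
# the repeating element, with the occurrence held in a FanOutContext-like
# field; with no occurrences, the noOccurrences terminal fires instead.
import xml.etree.ElementTree as ET

def fan_out_iterate(body_xml, on_out, on_no_occurrences):
    body = ET.fromstring(body_xml)
    occurrences = body.findall("input/accounttype")
    if not occurrences:
        on_no_occurrences()
        return
    for element in occurrences:
        on_out(element.text)  # stand-in for the FanOutContext content

fired = []
fan_out_iterate(
    "<body><input><accounttype>gold</accounttype>"
    "<accounttype>platinum</accounttype></input></body>",
    on_out=fired.append,
    on_no_occurrences=lambda: fired.append("noOccurrences"))
print(fired)  # ['gold', 'platinum']: out fired twice, once per occurrence
```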
Required: No
Valid values: XPath
This property applies only to Fan Out mediations that are part of an aggregation.
The purpose of this property is to control the rate at which asynchronous
responses (for any asynchronous Service Invoke requests made in the aggregation
block) are collected and processed.
Note: The value of this property does not affect the number of times the Fan Out
will fire.
Required: Yes
Considerations: The Fan Out mediation primitive has a single promoted property,
Batch Count, whose value you can change from the runtime administrative
console.
Note: When setting this property in the runtime console, the special value 0
signifies that asynchronous responses are to be handled after the Fan Out
mediation has completed all of its terminal firing.
You can use the Flow Order mediation primitive to customize the flow by
specifying the output terminals to be fired. The Flow Order mediation primitive
does not change the input message.
The Flow Order mediation primitive has one input terminal (in) and any number
of output terminals. The in terminal is wired to accept a message and the other
terminals are wired to propagate a message.
Details
The output terminals are fired in the order that they are defined on the primitive,
with each branch completing before the next starts. Each output terminal is fired
with an unmodified copy of the input message.
Usage
You can use the Flow Order mediation primitive anywhere that you want to
branch a flow. A named output terminal is created for each branch. Each branch of
the flow is completed before the next terminal is fired. The only exception to this
rule is that if an Asynchronous Service Invoke mediation primitive is reached, the
service is called but the Flow Order does not wait for the service invocation to
complete before moving on to the next branch. If an exception is thrown by any of
the downstream mediation primitives and they do not handle it themselves (for
example, via their fail terminal), any remaining flow branches are not fired.
terminals:
Specify a caret-delimited (^) list of the output terminals, in the order they are to
be fired. For example, terminals="out1^out2" causes terminal out1 to be fired,
followed by out2.
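The firing order implied by the caret-delimited list can be sketched as follows. The helper and handler names are invented; each branch receives an unmodified copy of the input message, as described above:

```python
# Sketch of Flow Order terminal firing: split the caret-delimited list
# and fire each named terminal in sequence, each branch completing
# before the next starts.
def fire_in_order(terminals, handlers, message):
    for name in terminals.split("^"):
        handlers[name](message)  # each branch gets the unmodified message

order = []
handlers = {"out1": lambda m: order.append(("out1", m)),
            "out2": lambda m: order.append(("out2", m))}
fire_in_order("out1^out2", handlers, "msg")
print(order)  # [('out1', 'msg'), ('out2', 'msg')]
```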
Introduction
You can use the Gateway Endpoint Lookup mediation primitive to create various
types of service gateway.
You can create a proxy gateway that routes requests to virtual services defined in
proxy groups. A proxy gateway is a specific type of service gateway that defines
proxy groups with which the administrator can associate real endpoints.
Alternatively, you can create other types of service gateway that route to Web
service endpoints, based on the action specified in the message.
If you create a proxy gateway, the service definitions are stored in a built-in
configuration store. If you create another type of service gateway, the service
definitions are stored in WebSphere Service Registry and Repository (WSRR).
The Gateway Endpoint Lookup mediation primitive has one input terminal (in),
two output terminals (out and nomatch) and a fail terminal (fail). The in terminal
is wired to accept a message and the other terminals are wired to propagate a
message. If a suitable endpoint is found, the out terminal propagates a message
that includes the endpoint. If an endpoint is not found, the nomatch terminal is
fired. You can wire the nomatch terminal to another mediation primitive that
decides where to forward the message to. If an exception occurs during the routing
resolution, the fail terminal propagates the input message, together with any
exception information contained in the failInfo element.
Details
The Gateway Endpoint Lookup mediation primitive uses the Endpoint Reference
structure defined by the WS-Addressing specification. For more information, see:
http://schemas.xmlsoap.org/ws/2004/08/addressing.
In IBM Integration Designer, you can use the Patterns Explorer view of the
Business Integration perspective to create a proxy gateway. Use the menu options:
Window > Show View > Patterns Explorer > Proxy Gateway. The Patterns
Explorer creates a mediation flow that includes the Gateway Endpoint Lookup
mediation primitive.
To create a proxy gateway you must define one or more proxy groups, using the
Gateway Endpoint Lookup mediation primitive. When you install the proxy
gateway module to WebSphere Enterprise Service Bus (WebSphere ESB) or IBM
Business Process Manager, the proxy groups are created in a built-in configuration
store.
Using the Gateway Endpoint Lookup mediation primitive, you must also specify a
point in the request message where the name of a virtual service can be found. A
virtual service is a proxy for one or more real services. You specify whether the
virtual service name is found using the inbound URL, which is the default, or an
XPath. After you have installed the proxy gateway module, you can use a Business
Space widget to define the virtual services in the proxy groups. Using the Proxy
Gateway widget, you create associations between the virtual services and the real
service endpoints; the associations are stored in the built-in configuration store.
Before a client can access a proxy gateway, it needs the WSDL to call a virtual
service. You can retrieve the WSDL by entering the endpoint of the virtual service
URL in a Web Browser, and appending the string: ?wsdl. For example,
http://zzz/Gold?wsdl, where http://zzz/ is the address of the proxy gateway and
Gold is the name of the virtual service.
When the proxy gateway processes a client request, the virtual service name that is
used to look up the endpoints must match the virtual service name in the client
request. If you create a proxy gateway module with the default type of routing,
which is URL-based, and use the URL available in the resolved WSDL, then the
routing of the request occurs automatically. If you create a proxy gateway module
with XPath-based routing, ensure that the message location you specify contains
the correct virtual service name.
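For URL-based routing, extracting the virtual service name can be sketched as follows. This mirrors the http://zzz/Gold example above, under the assumption that the name is the last path segment of the inbound URL; the function name is invented and the exact product behavior may differ:

```python
# Sketch: derive the virtual service name from the inbound request URL,
# as in http://zzz/Gold where "Gold" is the virtual service.
from urllib.parse import urlparse

def virtual_service_name(inbound_url):
    return urlparse(inbound_url).path.rstrip("/").rsplit("/", 1)[-1]

print(virtual_service_name("http://zzz/Gold"))  # Gold
```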
If you want to create a proxy gateway, set the following Gateway Endpoint
Lookup properties:
v Proxy Group Name: One or more proxy groups that are created in the built-in
configuration store when the module is installed. Both WebSphere ESB and
IBM Business Process Manager have a built-in configuration store.
v Lookup Method: whether the virtual service name is the same as the input
URL, or found using an XPath expression.
v Optional: Lookup XPath: If you specify a Lookup Method of XPath, you must
specify where the virtual service name can be found in the SMO.
In order for a service gateway to use action-based routing, a Web service request
must contain an action either in the SOAPAction field or the WS-Addressing
Action field. If a Web service request contains both action fields, the values of the
fields must be the same.
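The rule above can be sketched as follows. The function name is invented for illustration; if both action fields are present they must match, and the routing action is whichever one is set:

```python
# Sketch: resolve the routing action from the SOAPAction and
# WS-Addressing Action fields, rejecting requests where both are
# present but differ.
def resolve_action(soap_action, wsa_action):
    if soap_action and wsa_action and soap_action != wsa_action:
        raise ValueError("SOAPAction and WS-Addressing Action differ")
    return soap_action or wsa_action

print(resolve_action("urn:getQuote", None))           # urn:getQuote
print(resolve_action("urn:getQuote", "urn:getQuote")) # urn:getQuote
```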
WSRR Version 7.0 is required for action-based routing. WSRR Version 7.0 provides
an action field that is based on each input and output defined in your WSDL.
Using the input message, the Gateway Endpoint Lookup mediation primitive can
query WSRR for all endpoints with the same action value. If necessary, you can
specify classifications and user-defined properties that limit the search criteria.
If you want to create a service gateway that uses action-based routing, set the
following Gateway Endpoint Lookup properties:
v Lookup Method: set the Lookup Method to Action.
v Registry Name: Either the name of a particular instance of WSRR defined to
WebSphere ESB or IBM Business Process Manager, or the default instance.
v Optional: User Properties
Usage
Service gateways act as proxies to services and allow you to access multiple
services from one address. In addition, service gateways can encapsulate
transformations, routing, and common processing. Both the Gateway Endpoint
Lookup primitive and the Endpoint Lookup primitive can be used to create service
gateways.
If you want to administer gateway services using the built-in configuration store,
use the Gateway Endpoint Lookup primitive to create a proxy gateway. You can
then use the Proxy Gateway widget to configure your proxy groups.
Note: When creating a proxy gateway, you must add only one Gateway Endpoint
Lookup primitive to your mediation flow.
If you want to route requests based on the action specified by a message, use the
Gateway Endpoint Lookup primitive to create a service gateway with a lookup
method of action.
Specifies how the endpoint is identified, either in the built-in configuration store or
in WSRR.
Valid values:
v Action (1): Query WSRR for all endpoints with the action value specified by
the message.
v XPath (2): Query the built-in configuration store, using a virtual service name
located using an XPath expression. A proxy gateway can use a Lookup
Method of URL or XPath.
Default: URL
If you want to create a proxy gateway, specify the name of one or more proxy
groups.
When you install the proxy group module on WebSphere ESB or IBM Business
Process Manager, the proxy groups are created in the built-in configuration store. If
you do not specify a proxy group name, a default proxy group is created when
you install your module. The default proxy group is called modulenameProxyGroup,
where modulename is the name of your module.
Note: You should specify a proxy group name; otherwise, when you uninstall your
proxy gateway module, the proxy group is deleted, assuming that no other proxy
gateway modules reference the proxy group.
If you select a Lookup Method of XPath, specify the XPath expression that locates
the virtual service name in the SMO.
If you select a Lookup Method of Action, specify the WSRR definition to be used
by the Gateway Endpoint Lookup mediation primitive. A WSRR definition
identifies the instance of WSRR to be used, and how to connect to it.
Note: To extract endpoint information about Web services, you must specify, in
WSRR, any classifications on the appropriate WSDL Port objects.
WSRR classifies objects using the ontology classification system (OWL), in which
each classifier is a class and has a URI. OWL implements a simple hierarchical
system. For example, a bank account could start with the following details:
v Account
– Identifier
– Name
- First name
- Second name
– Address
- First line of address
- Second line of address
Search the registry for services that are annotated with user-defined properties.
Note: To extract endpoint information about Web services, in WSRR you must
specify any user properties on the appropriate WSDL Port objects.
Name:
The name of the user-defined property. The valid type is String.
Type:
The type of the user-defined property. If the type is String, then what you specify
as the Value is used as a literal in the search query. If the type is XPath, then
what you specify as the Value must be an XPath expression. The XPath
expression must resolve to a unique leaf node in the inbound SMO. The value of
the leaf node is used in the search query.
Value:
The value of the user-defined property. This can be either a literal value or an
XPath expression, depending upon the Type property.
Considerations:
v If the Use dynamic endpoint if set in the message header property is not set in
the callout node, the run time does not use the dynamic endpoint in
/headers/SMOHeader/Target/address. In this case, the run time uses the default
endpoint if there is one, or throws an error.
v If the Lookup XPath expression resolves to more than one element in the SMO,
a runtime exception occurs.
v All Classification or User Properties specified for a Gateway Endpoint Lookup
mediation primitive result in a query that combines all of these properties using
a logical AND.
v If you want to use Classification URIs that include white space characters, the
correct URI encoding should be used. For example, a single character space
should be represented as %20.
v White space characters provided in any of the Gateway Endpoint Lookup
mediation primitive properties are treated as literal characters. They are not
removed by the Gateway Endpoint Lookup mediation primitive when querying
the registry. For example, if you specify a Classification property and the
expected results are not returned from a query, ensure there is no white space
before or after the Classification URI.
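The %20 encoding mentioned in the considerations above can be produced with standard URI percent-encoding. The classification URI below is a made-up example:

```python
# Sketch: percent-encode white space in a Classification URI, keeping
# the URI's structural characters (:, /, #) intact.
from urllib.parse import quote

uri = "http://example.org/ontology#Gold Account"
print(quote(uri, safe=":/#"))  # the space is encoded as %20
```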
Introduction
You can use the HTTP Header Setter mediation primitive to provide a mechanism
for managing HTTP headers in the message. You can change, copy, add, or delete
HTTP headers by setting the mediation primitive properties.
If you want multiple header changes you can set multiple actions. Multiple actions
are acted on sequentially, in the order in which they are specified; this means that
header changes can build on each other.
You can create new HTTP headers by specifying header field values. You can
search for HTTP headers that already exist in the SMO by specifying the header
name to match on. If a matching header is found, it can then be deleted from the
message, copied to another location in the SMO, or have the value set. If a
matching header is not found, a new header can be created using a specified
header field value.
The HTTP Header Setter mediation primitive has one input terminal (in), one
output terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. If the
mediation is successful, the out terminal propagates the modified message. If an
exception occurs during the transformation, the fail terminal propagates the
original message, together with any exception information contained in the failInfo
element.
The HTTP Header Setter mediation primitive uses the given header name to
determine where to look in the SMO. There are three different types of HTTP
Header, distinguished by name and their location in the SMO.
v HTTP Control Headers, which have their own schema in the SMO, located at
/headers/HTTPHeader/control. The names of these headers are relative XPaths.
v HTTP Spec Headers, which are listed in the SMO at /headers/HTTPHeader/
header. The names of these headers are from the HTTP 1.1 specification.
v Any other headers are User Headers and are listed at /headers/properties.
Usage
You can use the HTTP Header Setter mediation primitive to ensure that when a
HTTP message is sent to another system, via the HTTP binding, the headers that
are sent with the message are correctly set.
Because the operations you define occur sequentially, a later operation can depend
on an earlier operation. For example, you could create a new header, copy it to
elsewhere in the SMO and then delete it from the list of headers it was initially
appended to.
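The create, copy, and delete actions described above can be sketched as ordinary map operations. This is an illustration only, assuming a hypothetical header name; the real primitive operates on locations in the SMO, not Java maps.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: a toy model of the ordered header actions described above.
// The header name and locations are hypothetical.
public class HeaderActionDemo {
    public static Map<String, String> run() {
        Map<String, String> headers = new LinkedHashMap<>();   // stands in for /headers/HTTPHeader/header
        Map<String, String> elsewhere = new LinkedHashMap<>(); // stands in for another SMO location

        // Action 1 (Create): add a new header with a literal value.
        headers.put("X-Correlation-Id", "abc-123");
        // Action 2 (Copy): copy the header just created to another location.
        elsewhere.put("X-Correlation-Id", headers.get("X-Correlation-Id"));
        // Action 3 (Delete): remove it from the list it was initially appended to.
        headers.remove("X-Correlation-Id");

        return elsewhere; // only the copied header survives
    }

    public static void main(String[] args) {
        System.out.println(run()); // {X-Correlation-Id=abc-123}
    }
}
```

Because the actions run in order, the copy in action 2 can see the header created in action 1, and the delete in action 3 does not affect the copy.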
Mode mode
v If you want to create a new HTTP header, the Mode property must be set to
'Create'.
v If you want to search for an HTTP header and then modify the value of any
header that is found, or create a new header if none are found, the Mode
property must be set to 'Modify'.
v If you want to search for an HTTP header and then copy the first found header
to another location in the SMO, the Mode property must be set to 'Copy'.
v If you want to search for an HTTP header and then delete any headers that are
found, the Mode property must be set to 'Delete'.
Value value
If the Mode property is set to 'Create' or 'Modify', the Value property should be
set to an HTTP header literal value or an XPath expression that identifies a value
to copy into the HTTP header at runtime. When a new HTTP header is created or
a matching HTTP header is found, this new value is set in the specified field.
If the Mode property is set to 'Copy', the Value property should be an XPath 1.0
expression, identifying the target element to where the first found HTTP header
will be copied.
If the Mode property is set to 'Delete', the Value property should not be set.
Considerations:
v If the Mode property is "Modify" and a header cannot be found, a new header
will be created.
v If the XPath expression of the copy target resolves to more than one element in
the SMO, a runtime exception occurs.
v The location of the header within the SMO depends upon its type; control, spec
or user.
v If the Validate input property is true and the input message is invalid, a runtime
exception occurs.
Introduction
You can use the JMS Header Setter mediation primitive to provide a mechanism
for managing JMS headers in the message. You can change, copy, add, or delete
JMS headers by setting the mediation primitive properties.
If you want to make multiple header changes, you can set multiple actions. Multiple
actions are acted on sequentially, in the order in which they are specified; this means
that header changes can build on each other.
You can create new JMS headers by specifying header field values. You can search
for JMS headers that already exist in the Service Message Object by specifying the
header name to match on. If a matching header is found, it can then be deleted
from the message, copied to another location in the SMO, or have the value set. If
a matching header is not found, a new header can be created using a specified
header field value.
The JMS Header Setter mediation primitive uses the given header name to
determine where to look in the SMO. There are two different types of JMS Header,
distinguished by name and their location in the SMO.
v Standard JMS Headers, which have their own schema in the SMO, located at
/headers/JMSHeader.
v Any other headers are User Headers and are listed at /headers/properties.
Usage
You can use the JMS Header Setter mediation primitive to ensure that when a JMS
message is sent to another system, via the JMS binding, the headers that are sent
with the message are correctly set.
Because the operations you define occur sequentially, a later operation can depend
on an earlier operation. For example, you could create a new header, copy it to
elsewhere in the SMO and then delete it from the list of headers it was initially
appended to.
You can also use the JMS Header Setter mediation primitive to help to filter
messages, using the Message Filter mediation primitive. You might want to find a
particular header and make it available to be used in the filtering.
Mode mode
v If you want to create a new JMS
header, the Mode property must
be set to 'Create'.
v If you want to search for a JMS
header and then modify the
value of any header that is
found, or create a new header if
none are found, the Mode
property must be set to 'Modify'.
v If you want to search for a JMS
header and then copy the first
found header to another location
in the SMO, the Mode property
must be set to 'Copy'.
v If you want to search for a JMS
header and then delete any
headers that are found, the Mode
property must be set to 'Delete'.
Type headerType
Specify the JMS header type. This is only used for User Headers that are listed at
/headers/properties and is a Java primitive type or String.
Value value
If the Mode property is set to 'Create' or 'Modify', the Value property should be
set to a JMS header literal value or an XPath expression that identifies a value to
copy into the JMS header at runtime. When a new JMS header is created or a
matching JMS header is found, this new value is set in the specified field.
If the Mode property is set to 'Copy', the Value property should be an XPath 1.0
expression, identifying the target element to where the first found JMS header
will be copied.
If the Mode property is set to 'Delete', the Value property should not be set.
Validate input validateInput:
Considerations:
v If the Mode property is "Modify" and a header cannot be found, a new header
will be created.
v If the XPath expression of the copy target resolves to more than one element in
the SMO, a runtime exception occurs.
v The location of the header within the SMO depends upon its type; standard or
user.
v If the Validate input property is true and the input message is invalid, a runtime
exception occurs.
Introduction
You can use the Message Element Setter mediation primitive to provide a simple
mechanism for setting message content; it does not change the type of the
message. You can change, add, or delete message elements by setting the mediation
primitive properties.
You can set target elements to a constant value, or to a value copied from
somewhere in the input service message object (SMO). If a target element you
specify does not exist, it is created in the SMO. You can also delete message
elements, if they are optional or repeating elements.
Target elements are specified as XPath expressions using the Target property. If
you set a target element to a constant value, the target XPath must resolve to a
single leaf in an SMO node. If you set a target element from a source element, both
the source and target XPaths must resolve to a single element (a single leaf or
subtree). You can use the Value property to specify either a constant value or a
source XPath expression.
The Message Element Setter mediation primitive has one input terminal (in), one
output terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. If the
mediation is successful, the out terminal propagates the modified message. If an
exception occurs during the transformation, the fail terminal propagates the
original message, together with any exception information contained in the
failInfo element.
Usage
Because the operations you define occur sequentially, a later operation can depend
on an earlier operation. For example, you could define a copy operation to copy a
new element into the SMO, and a later operation to set a value in the newly
copied element.
It is often useful to combine the Message Element Setter mediation primitive with
other mediation primitives. For example, you can use the Message Element Setter
mediation primitive if you need to manipulate data, before or after the Database
Lookup mediation primitive is invoked.
You can also use the Message Element Setter mediation primitive to change
messages after filtering them, using the Message Filter mediation primitive. You
might want to update or delete certain message elements after filtering.
Note: You must ensure that the source data type is compatible with the
target data type, or a runtime exception occurs.
v Using Append, you can copy from a source element to a new element
instance, by appending to a repeating element in the output. You can
only append to a repeating element in the output.
Note: You must ensure that the source data type is compatible with the
target data type, or a runtime exception occurs.
v Using Delete, you can delete an element instance from the SMO.
Target target
An XPath 1.0 expression that describes the location of the element to set, create, or
delete. You can specify: /, /headers, /context or /body. / refers to the complete
SMO, /headers refers to the headers of the SMO, /context refers to the context of
the SMO, and /body refers to the body of the SMO.
If you want to set a target element to a constant value, the target XPath expression
must resolve to a single leaf element. If you want to copy from a source element to
a target element, both the source and target must specify an XPath expression that
resolves to a single message element (a single leaf or subtree).
If you set multiple targets, the elements are set sequentially. Therefore, if you set
element X from value 13 to value 14, and then set element Y to the value of
element X, the mediation sets element X to value 14 and element Y to value 14.
If you specify the same target element more than once, the last operation
performed on the target element takes precedence.
Type type
The type of the element value.
If you want to set the Target to a constant value, the Type must be a simple XML
schema type, or an XML schema type that extends a simple XML schema type.
If you want to set the Target to a value copied from somewhere in the input SMO,
the Type must be the keyword copy. When you copy a value from somewhere in
the input SMO, the target type is assumed to be the same as the source type. A
copy operation always takes its source value from the unmodified input SMO
(that is, the SMO instance received by the primitive on its in terminal). If the same
target element is specified more than once in a particular primitive, the last
operation performed on that target element wins.
If you want to copy a value from the input SMO to a new element instance,
appended to a repeating element in the output, use the Append operation.
Validate input validateInput:
Considerations: Consider the following when using the Message Element Setter
mediation primitive:
v If the XPath expression of the source resolves to more than one element in the
SMO, a runtime exception occurs.
v If you attempt to set a target element to a value of incompatible type, a runtime
exception occurs.
v If the Validate input property is true and the input message is invalid, a
runtime exception occurs.
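The last-operation-wins rule described above can be sketched as a sequence of set operations applied in order. This is an illustration only; the target paths and values are hypothetical, and a real Message Element Setter resolves XPath targets in the SMO rather than map keys.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: sequential set operations on hypothetical target paths,
// showing that the last operation on a repeated target takes precedence.
public class ElementSetterDemo {
    public static Map<String, Object> run() {
        Map<String, Object> smo = new LinkedHashMap<>();
        // Two operations name the same target; they are applied in order.
        String[][] operations = {
            {"/body/order/status", "NEW"},
            {"/body/order/status", "VALIDATED"}, // last operation wins
        };
        for (String[] op : operations) {
            smo.put(op[0], op[1]);
        }
        return smo;
    }

    public static void main(String[] args) {
        System.out.println(run().get("/body/order/status")); // VALIDATED
    }
}
```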
Introduction
You can use the Message Filter mediation primitive with XPath expressions to
direct messages that meet certain criteria down different paths of a flow.
The Message Filter mediation primitive has one input terminal (in), one fail
terminal (fail), and multiple output terminals, (one of which is the default
terminal). The in terminal is wired to accept a message and the other terminals are
wired to propagate a message.
Each of the output terminals, apart from the default terminal, is associated with a
simple, conditional expression. The contents of the input message are compared
with each expression in turn and, if the condition is met, the message is
propagated to the associated output terminal. The primitive can be configured to
propagate the message to the first matching output terminal only, or to all
matching output terminals.
If an exception occurs during the filtering, the fail terminal propagates the original
message, together with any exception information.
Usage
You can use the Message Filter mediation primitive to check that the inbound
message meets some criterion; for example, that a required field is set. If the
criterion is not met, you can raise a fault using the Fail mediation primitive, or
send an error response.
The Message Filter mediation primitive lets different messages take different paths.
For example, a message might need forwarding to different service providers
based on the request details.
You can use the Message Filter mediation primitive to bypass unnecessary steps.
You can test if certain data is in a message, and only perform a Database Lookup
operation if the data is missing.
When used in combination with a Database Lookup primitive, the Message Filter
can direct messages based on the contents of an independently administered
lookup table. For example, you could route a message based on customer status
even if the inbound message contained only the customer identifier.
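The way matching output terminals fire, under the First and All settings of the Distribution mode property, can be sketched as an ordered table of conditions. This is an illustration only; the terminal names and string conditions are hypothetical stand-ins for XPath patterns evaluated against the SMO.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Illustration only: how the Distribution mode controls which matching
// output terminals fire. Terminal names and conditions are hypothetical.
public class FilterDistributionDemo {
    // Ordered filter table: terminal name -> condition on the message.
    static final Map<String, Predicate<String>> FILTERS = new LinkedHashMap<>();
    static {
        FILTERS.put("match1", msg -> msg.contains("gold"));
        FILTERS.put("match2", msg -> msg.contains("customer"));
    }

    public static List<String> fire(String message, boolean firstOnly) {
        List<String> fired = new ArrayList<>();
        for (Map.Entry<String, Predicate<String>> f : FILTERS.entrySet()) {
            if (f.getValue().test(message)) {
                fired.add(f.getKey());
                if (firstOnly) break; // Distribution mode = First
            }
        }
        if (fired.isEmpty()) fired.add("default"); // no condition met
        return fired;
    }

    public static void main(String[] args) {
        System.out.println(fire("gold customer", true));  // [match1]
        System.out.println(fire("gold customer", false)); // [match1, match2]
        System.out.println(fire("basic", true));          // [default]
    }
}
```

If no condition is met, only the default terminal fires, which is where an error response or a Fail primitive is typically wired.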
Enabled enabled:
Required Yes
Valid values Boolean
Default true
Distribution mode distributionMode:
Required Yes
Valid values
First 0
If the Distribution mode is set to First, the message is propagated to the first
matching output terminal.
All 1
If the Distribution mode is set to All, the message is propagated to all matching
output terminals.
Filters filters:
A list of expressions, and associated terminal names, that define the filtering
performed by the mediation primitive.
Pattern pattern
An XPath 1.0 expression against which the message is tested. The expression is
evaluated starting from the XPath expression /, which refers to the complete SMO.
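A filter pattern of this kind can be tried out with the XPath 1.0 support in the Java platform (javax.xml.xpath). This sketch evaluates a condition against a small XML document standing in for the SMO; the element names are hypothetical, not a real SMO schema.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Illustration only: evaluating an XPath 1.0 condition against a small XML
// document, in the way a filter pattern is tested against the SMO.
public class XPathPatternDemo {
    public static boolean matches(String xml, String pattern) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Evaluate the pattern from the document root, as a boolean.
        return (Boolean) xpath.evaluate(pattern, doc, XPathConstants.BOOLEAN);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<smo><body><order><amount>250</amount></order></body></smo>";
        System.out.println(matches(xml, "/smo/body/order/amount > 100")); // true
        System.out.println(matches(xml, "/smo/body/order/amount > 500")); // false
    }
}
```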
Introduction
You can use the Message Logger mediation primitive to store messages in a
relational database, or in other storage mediums if you use the custom logging
functionality. The Message Logger mediation primitive logs messages to a
relational database using an IBM-defined database schema (table structure). It can
write to other storage mediums, such as flat files, through the use of the custom
logging facility.
The Message Logger mediation primitive logs an XML transcoded copy of the
service message object (SMO). The default behavior is to log just the message
payload but the mediation primitive can be configured to log the complete SMO,
or a part of the SMO defined by an XPath expression. Along with the message
contents the mediation primitive also logs a timestamp, the message identifier, the
primitive instance name, the mediation module instance name and the SMO
version number.
If you are using a relational database, the message that is logged is stored in a
database column called: Message. The other data that is logged is stored in columns
with an appropriate heading, as documented later in this topic.
The Message Logger mediation primitive has one input terminal (in), one output
terminal (out) and one fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. The input
message triggers logging to a database, or though a custom logger; and if the
logging is successful, the out terminal propagates the original message. If an
exception occurs during the processing of the input message, the fail terminal
propagates the original message, together with any exception information.
Details
In summary, with version 6.1.2, and later, the default is for the Message Logger
mediation primitive to use the CommonDB database and for the run time to map
the data source at jdbc/mediation/messageLog to the CommonDB database.
Before version 6.1.0, the Message Logger mediation primitive did not use the
CommonDB database. Before version 6.1.0:
v On distributed platforms, the default installation of the runtime product created
a stand-alone application server, and a local Derby database and datasource. The
local Derby database was called EsbLogMedDB. The Message Logger mediation
primitive was configured to use this Derby database, by default.
v On z/OS®, the installation of the runtime product created an application server,
and a sample database and datasource. The Message Logger mediation primitive
could be configured to use either a Derby or a DB2® database.
If you used the Message Logger mediation primitive before version 6.1.0, and
move to version 6.1.x or above, any messages stored at the previous location
remain at that location. If you want to maintain a single location for Message
Logger messages you can take one of the following actions:
v Manually move old data into the CommonDB database.
v Continue using the previous database. If you want to use the previous location
you must manually configure the required data source.
If your run time has a mixed cell, where some nodes are version 6.1.0 or above
and some nodes are below version 6.1.0, the nodes below version 6.1.0 behave in
one of the following ways:
v Continue storing message information in the database identified by their
jdbc/mediation/messageLog data source.
v Start storing message information in the database identified by their new
jdbc/mediation/messageLog data source.
The action taken by the pre-6.1.0 nodes depends on whether the nodes are
configured to reject, or accept, JNDI changes during the federation process. For
further information on the federation of nodes, see the runtime documentation.
You can use the Message Logger mediation primitive to store messages that you
process later. The logged messages can be used for various purposes. For example,
you could use the logged messages for data mining, message replay or for
auditing.
By default, the default Handler implementation class logs every message to a file
stored in the system temporary directory as defined by the java.io.tmpdir system
property; this is typically /tmp or /var/tmp on a UNIX system and C:\Documents
and Settings\<user>\Local Settings\Temp on a Windows system. The file is
called MessageLog.log.
With the default value for the Literal property the call to
MessageFormat.format(<LogRecord>.getMessage(), <LogRecord>.getParameters())
in the default Formatter implementation class means the following:
v {0} would then be substituted with the Time Stamp value -
logMessageParameters[0]
v {1} would then be substituted with the Message ID value -
logMessageParameters[1]
v {2} would then be substituted with the Mediation Name value -
logMessageParameters[2]
Entries for each message would look similar to the following example:
29/04/08 15:11,9A85B1D2-0119-4000-E000-13E4091443BC,MessageLogger1,CustomLogging,abc,6
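The substitution of {0}, {1}, and {2} described above is standard java.text.MessageFormat behavior and can be reproduced directly. The literal pattern and parameter values below are hypothetical samples.

```java
import java.text.MessageFormat;

// Illustration only: how a Formatter substitutes {0}, {1}, {2} in the
// Literal value with entries from the log record parameters.
public class LiteralFormatDemo {
    public static String format() {
        Object[] logMessageParameters = {
            "29/04/08 15:11",                       // {0} Time Stamp
            "9A85B1D2-0119-4000-E000-13E4091443BC", // {1} Message ID
            "MessageLogger1"                        // {2} Mediation Name
        };
        return MessageFormat.format("{0},{1},{2}", logMessageParameters);
    }

    public static void main(String[] args) {
        System.out.println(format());
        // 29/04/08 15:11,9A85B1D2-0119-4000-E000-13E4091443BC,MessageLogger1
    }
}
```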
If you are using the custom logging, you need to implement the Handler,
Formatter and Filter classes to customize the behavior of the logger. For further
information on implementing these classes, see: http://java.sun.com/j2se/1.4.2/
docs/api/java/util/logging/package-summary.html. The default implementation
class names are as follows:
v Handler property :
com.ibm.ws.sibx.mediation.primitives.logger.WESBFileHandler
v Formatter property :
com.ibm.ws.sibx.mediation.primitives.logger.WESBFormatter
v Filter property : com.ibm.ws.sibx.mediation.primitives.logger.WESBFilter
Enabled enabled:
Root root:
Transaction mode transactionMode:
New 1
If you specify New, the message is logged in its own local transaction. In this
case, if a failure occurs in the flow, the message logging is not rolled back.
Logging type loggingType:
Identifies whether to log the message to a Database or using the custom logging
functionality.
Custom 1
Default Database
Data source name dataSource:
The JNDI name of the data source that defines where the data will be logged.
Handler handler:
You can provide a Handler implementation class to customize the behavior of the
custom logger. You can log through more than one Handler implementation class if
you want. Optionally, you can provide Formatter implementation classes, Filter
implementation classes, or both. For more information, see: http://java.sun.com/
j2se/1.4.2/docs/api/java/util/logging/Handler.html.
Formatter formatter:
Filter filter:
You can provide a Filter implementation class with a Handler implementation class
to customize the behavior of the custom logger. For more information, see:
http://java.sun.com/j2se/1.4.2/docs/api/java/util/logging/Filter.html.
Literal literal:
Identifies the exact content of what is logged by the custom logging functionality.
This is used in conjunction with the Formatter value.
Level level:
Identifies the level at which the message is logged by the custom logging
functionality. For more information, see: http://java.sun.com/j2se/1.4.2/docs/api/
java/util/logging/Level.html.
Warning 1
Info 2
Config 3
Fine 4
Finer 5
Finest 6
Default Info
Considerations:
v If the Data source name is not valid, or the data source cannot be obtained from
JNDI, a runtime exception occurs.
v If the database cannot be found, or the database returns an error, a runtime
exception occurs.
v If you want to create your own database resources, you can use the
createMessageLoggerResource.jacl script. At run time, the
createMessageLoggerResource.jacl script is stored at: install_root/bin/
createMessageLoggerResource.jacl.
v If you want to create your own database resources, the runtime product
provides data definition language (DDL) files that describe the table schema. The
Table.ddl files are stored at: install_root/util/EsbLoggerMediation/
database_type/Table.ddl, where database_type refers to the type of database. If
you create your own database and want to use the default JNDI name for your
data source, you must remove the default data source.
v If you want to create the table ESBLOG.MSGLOG on an Oracle database, the
ESBLOG user must already exist; in order to add the ESBLOG user you must
have SYSDBA privileges. When you install and configure your runtime product,
you can define the Common database as being an Oracle database, and the
installation process can create the table ESBLOG.MSGLOG in the CommonDB
database. Alternatively, you might decide to create the table ESBLOG.MSGLOG
in your own database, using the createMessageLoggerResource.jacl script or a
Table.ddl file. In either case, the ESBLOG user must exist before you try to
create the table ESBLOG.MSGLOG.
v If the XPath expression is not found in the input message, an entry is still
logged in the database but the Message column is empty.
v If more than one Message Logger mediation primitive is used in a particular
flow, the display name must be unique.
v There is no mediation primitive specifically designed to access data logged to a
database. However, you can access the database using the Database Lookup
mediation primitive, a Custom Mediation primitive, a Java component or an
external application. If you write a custom SCA component, you can access the
data source through its JNDI name.
Consider the following when using the Message Logger mediation primitive with
a custom logger:
v In any Formatter implementation class, a call to <LogRecord>.getParameters()
always returns an Object array containing six elements. The content of the
array is as follows:
Table 16. Formatter implementation class array content
Element Object Type Item Description
0 String Time Stamp The UTC timestamp, indicating when the message was
logged to the database.
1 String Message ID The message ID, from the SMO.
2 String Mediation Name The name of the mediation primitive instance that logged
the message.
3 String Module Name The name of the mediation module instance that contains
the Message Logger primitive.
4 Data Object Message The SMO, or part of the SMO.
5 String Version The version of the logged SMO.
You should decide how much of this information you want to log using a
combination of the Literal property value and the Formatter implementation.
v In the Filter implementation class, there is scope to perform some complex
filtering on which messages get logged, by performing a
<LogRecord>.getParameters() call and testing the returned results.
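A Formatter and Filter in this style can be sketched with the standard java.util.logging classes, reading the six-element parameter array described in Table 16. The class names and the sample log record below are hypothetical; a real implementation would be registered through the Formatter and Filter properties.

```java
import java.util.logging.Filter;
import java.util.logging.Formatter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// Illustration only: skeleton Formatter and Filter classes in the style the
// custom logger expects. Class names and the sample record are hypothetical.
public class CustomLoggingDemo {

    // Formats only the Time Stamp (0), Mediation Name (2), and Message (4) elements.
    static class TimestampFormatter extends Formatter {
        @Override
        public String format(LogRecord record) {
            Object[] p = record.getParameters(); // always six elements (Table 16)
            return p[0] + "," + p[2] + "," + p[4];
        }
    }

    // Logs only records whose Module Name (element 3) matches a given module.
    static class ModuleFilter implements Filter {
        @Override
        public boolean isLoggable(LogRecord record) {
            return "CustomLogging".equals(record.getParameters()[3]);
        }
    }

    public static void main(String[] args) {
        LogRecord record = new LogRecord(Level.INFO, "{0},{1},{2}");
        record.setParameters(new Object[] {
            "29/04/08 15:11", "9A85B1D2", "MessageLogger1",
            "CustomLogging", "<order/>", "6"
        });
        System.out.println(new ModuleFilter().isLoggable(record)); // true
        System.out.println(new TimestampFormatter().format(record));
        // 29/04/08 15:11,MessageLogger1,<order/>
    }
}
```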
Context
When you install and configure IBM Business Process Manager, or WebSphere
Enterprise Service Bus, you can create database resources that are suitable for the
Message Logger mediation primitive.
After you have installed your runtime product, you can use the
createMessageLoggerResource.jacl script to create database resources for the
Message Logger primitive. The createMessageLoggerResource.jacl script is in the
bin directory, under the product installation directory.
Requirements
Purpose
The following examples show a few of the script flags: many of the flags are
optional and have default values. For more information on the optional flags, and
their default values, refer to the createMessageLoggerResource.jacl script.
On Windows, the following example creates a table called MSGLOG under a schema
qualifier of ESBLOG. The table is created in database EsbLogMedDB, which is of type
DERBY_EMBEDDED:
wsadmin.bat -f createMessageLoggerResource.jacl -createTable
Introduction
You can use the Message Validator mediation primitive to validate messages.
The Message Validator mediation primitive has one input terminal (in), one output
terminal (out), and one fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. At run time, if
no validation exception occurs during the processing of the input message, the out
terminal propagates the validated message. If a validation exception occurs, the
fail terminal is fired, and the exception information is stored in the failInfo
element of the service message object (SMO).
Details
After you create your mediation flow, and configure your mediation primitives,
you can insert the Message Validator mediation primitive into the flow to validate
some, or all, parts of the message.
The Message Validator mediation primitive validates the message and also
validates any weakly-typed message fields that have been set to strongly-typed
message fields earlier in the mediation flow. You can set weakly-typed message
fields to strongly-typed message fields using the Set Message Type primitive or the
Type Filter primitive. Alternatively, you can use the input node to set the
weakly-typed fields in the correlation, transient, or shared context.
Usage
You can use the Message Validator mediation primitive whenever you want to
validate incoming messages. The validation includes checking that any
weakly-typed message fields that have been set to strongly-typed message fields,
are of the specified strong type.
Enabled enabled:
Root root:
Considerations:
v The Enabled and Root properties are both promotable properties of the Message
Validator mediation primitive and can be set at run time.
Introduction
You can use the MQ Header Setter mediation primitive to provide a mechanism
for managing MQ headers in a message. You can change, copy, add, or delete MQ
headers by setting the mediation primitive properties.
If you want to make multiple header changes, you can set multiple actions. Multiple
actions are acted on sequentially, in the order you specify; this means that header
changes can build on each other.
The MQ Header Setter mediation primitive has one input terminal (in), one output
terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. If the
mediation is successful, the out terminal propagates the modified message. If an
exception occurs during the transformation, the fail terminal propagates the
original message, together with any exception information contained in the failInfo
element.
Details
You can create a new MQ header (apart from the MQMD header) and specify the
header field values. The new MQ header element is added to the service message
object (SMO); if a header of the same type already exists the new header is
appended to the end of the header list.
You can also search for MQ headers that already exist in the SMO, by specifying
the header type to match on. If matching headers are found, you can set the
header fields to the values you specify, or you can delete the headers (other than
the MQMD header). Alternatively, you can copy the first matching MQ header to
another location in the SMO.
Generally, you specify the field values of MQ headers, using either a literal value
or an XPath expression.
Usage
You can use the MQ Header Setter mediation primitive to ensure that when an
MQ message is sent to another system, the headers that are sent with the message
are correctly set.
Because the operations you define occur sequentially, a later operation can depend
on an earlier operation. For example, you could create a new header, copy it to
elsewhere in the SMO and then delete it from the list of headers it was initially
appended to.
You can also use the MQ Header Setter mediation primitive to help to filter
messages, using the Message Filter mediation primitive. You might want to find a
particular header and make it available to be used in the filtering. For example,
you could copy an MQ header to a more accessible place, and the Message Filter
primitive could then use the details inside the header.
A table of actions that you want to perform on MQ header elements, in the SMO.
You can add to this table by clicking Add (follow any instructions to add a
dependency, from the module to the MQ schemas). Then follow the instructions of
the wizard.
Mode mode
In IBM Integration Designer, the available values for this property are shown
within a field titled Header Action. Use this property to specify the action that
you want to perform on MQ headers.
v If you want to create a new MQ header, set the property to Create. This is the
default action.
v If you want to search for MQ headers and then either set the values in any
headers that are found or create a new header if no headers are found, set the
property to Find and Set.
v If you want to search for MQ headers and then copy the first header that is
found to another location in the SMO, set the property to Find and Copy.
v If you want to search for MQ headers and then delete any headers that are
found, set the property to Find and Delete.
Values values
In IBM Integration Designer, you can specify a value for this property by using
the Set Values table. If the Mode property (or Header Action) is set to Create or
Find and Set, you can set the Values property.
The Values property is a list of MQ header field names and their values. When a
new MQ header is created, or MQ headers are found, the new values are set in
the specified fields. Each value can be either a literal value or an XPath expression
that is resolved at run time to provide the value. The value provided must be
compatible with the field where it is to be set. For example, if the field is of type
int, the value could validly be 14, but not GoldAccount.
The MQRFH2 header is structurally complex; therefore, you might need to refine
the schema. You can do this by adding elements to the MQRFH2Group[] element.
Target Destination targetDestination
If the Mode property (or Header Action) is set to Find and Copy, the Target
Destination property should be an XPath 1.0 expression that identifies the target
destination. The first MQ header that is found is copied to this location.
Validate input validateInput:
Considerations:
v If the Header Action is Find and Set and a header cannot be found, a new
header is created.
v If you attempt to set a header field to a value of incompatible type, a runtime
exception occurs.
v If the Target Destination resolves to more than one element in the SMO, a
runtime exception occurs.
v If the Validate input property is true and the input message is invalid, a
runtime exception occurs.
Introduction
You can use the Policy Resolution mediation primitive to retrieve mediation
policies from a WSRR registry, and control mediation primitives that come later in
the flow. The registry can be local or remote.
The Policy Resolution primitive lets you retrieve mediation policies associated with
the current Service Component Architecture (SCA) module. You can also retrieve
mediation policies associated with a target service used by the mediation flow. If
you want to retrieve mediation policies associated with a target service, add an
Endpoint Lookup primitive to the mediation flow before the Policy Resolution
primitive. The Endpoint Lookup primitive selects the target service and the Policy
Resolution primitive retrieves mediation policies attached to the target service.
The Policy Resolution mediation primitive has one input terminal (in), two output
terminals (out and policyError), and a fail terminal (fail). The in terminal is wired
to accept a message and the other terminals are wired to propagate a message. If
an exception occurs during the processing of the input message, the fail terminal
propagates the input message, together with any exception information. The fail
terminal is fired if the run time cannot find an instance of WSRR. If no problems
occur, the out terminal propagates the original service message object (SMO)
modified by any property overrides that the mediation policies have provided.
Note: There might not be any property overrides. The out terminal is fired if there
are no mediation policies that apply.
If there is an error while mediation policies are being processed, the policyError
terminal is fired. Even then, mediation policies might still be used to override
dynamic properties; the outcome depends on the mediation policy processing model.
If property overrides are used, the SMO is modified; otherwise, the policyError
terminal propagates an unmodified message.
Details
If valid mediation policies are found in the registry, their contents can be used to
override the dynamic properties of mediation primitives that come after the Policy
Resolution primitive. If a mediation flow contains the Policy Resolution primitive,
any promoted property that is in the top-level request, response, or fault flow is a
dynamic property. Mediation policies contain the equivalent of promoted property
groups, names, and values, and must conform to the Web Services Policy
Framework (WS-Policy).
Note: The run time supports mediation policies attached to service, port,
binding, portType, and operation objects defined in WSDL documents.
However, the run time does not support mediation policies attached to message
objects. For SCA modules, the run time supports mediation policies attached to
exports, the interface portType, and operations that are linked to the export. The
run time also supports mediation policies attached to the WSRR objects “Manual
HTTP Endpoint with associated Interface”, “Manual JMS Endpoint with
associated Interface” and “Manual MQ Endpoint with associated Interface”, and
the interface portType and operations that are linked to these Manual Endpoints.
v If you set the Policy Scope to Intersection, then the Policy Resolution
primitive retrieves mediation policies that are attached to both the module and
the target service, in WSRR. The Policy Resolution primitive combines both
scopes into a single mediation policy that meets the requirements of both. If the
intersection processing cannot find a policy that meets both requirements, the
policyError terminal is fired.
SMO details
If there are mediation policies attached to either the current module or to the target
service, they are analyzed according to the mediation policy processing model. The
resulting property information is copied to the SMO at location
/context/dynamicProperty. For each property, the /context/dynamicProperty
location stores the group, name, and value. The property group and name are
compared to dynamic properties in later mediation primitives. When a match
occurs, the value of the dynamic property is overridden. For example, suppose you
create a module that contains one mediation flow component, and the component
contains two mediation primitives: a Policy Resolution primitive followed by an
XSL Transformation primitive. If you promote the XSL Transformation property,
Mapping file, you can override the value at run time, using mediation policies in
your registry.
Any dynamic properties not overridden by mediation policies take the values
shown on the administrative console. However, the administrative console values
are not stored in the /context/dynamicProperty location.
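The override mechanism described above can be modeled as a lookup. This is a hedged sketch: the structures, group and name strings, and function are illustrative, not the actual SMO or WebSphere ESB APIs.

```python
# Illustrative model of dynamic-property override. The entries mimic the
# group/name/value triples stored at /context/dynamicProperty; the group
# and name strings are hypothetical.

def effective_property(dynamic_properties, group, name, default):
    """Return the mediation policy override for a promoted property,
    or the administrative-console default if no entry matches."""
    for prop in dynamic_properties:          # entries at /context/dynamicProperty
        if prop["group"] == group and prop["name"] == name:
            return prop["value"]             # match: the value is overridden
    return default                           # no override: console value applies

overrides = [{"group": "TransformGroup", "name": "MappingFile",
              "value": "gold.xsl"}]          # hypothetical policy result
print(effective_property(overrides, "TransformGroup", "MappingFile",
                         "default.xsl"))
```

A property not mentioned by any override simply keeps its administrative-console default, matching the behavior described above.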
If you want to retrieve mediation policies for a target service, add an Endpoint
Lookup primitive to the mediation flow before the Policy Resolution primitive.
If you are using mediation policies associated with target services, the run time
needs to match the target service information in a particular message with the
target service information in WSRR. For example, in WSRR, suppose you associate
a mediation policy with the operation getAddress. At run time, the operation is
taken from the SMO location: /headers/SMOHeader/Operation. Therefore, you must
ensure that the Operation field contains the correct operation. This might involve
adding one or more mediation primitives that determine, and set, the content of
the Operation field. You could use the Message Element Setter primitive to set the
operation.
Note: If you already use the Operation field for other purposes, you must save the
operation value and replace it after the Policy Resolution primitive has done its
work.
Optionally, you can specify conditions that you want a mediation policy to fulfill.
These conditions are sometimes referred to as gate conditions. To specify gate
conditions, take the following actions:
v In WebSphere Integration Developer, set an XPath expression in the XPath
property. The XPath expression is used by the run time to find the value of the
condition. You can specify more than one XPath expression.
v For each XPath expression, provide a condition name using the Policy
condition name property.
v In WSRR, create a gate condition using the Policy condition name property. For
example, if you create a Policy condition name called InsuranceType, you could
create a gate condition called medGate_Condition1 with a value of InsuranceType
= Gold. For more information on creating mediation policies and gate conditions,
see the tutorials referred to at the end of this topic.
At run time, the XPath expression is used to find the condition value in the
message, and the message value is compared to the gate condition value. If the
gate condition resolves to true, the relevant mediation policy can be applied.
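The gate-condition evaluation just described can be sketched as follows. This is illustrative only, not the actual runtime: the XPath property locates the condition value in the message, and that value is compared with the gate condition value on the policy attachment. The message shape and XPath stand-in are assumptions.

```python
# Illustrative sketch of gate-condition evaluation (not the actual runtime).

def xpath_value(message, path):
    """Minimal stand-in for XPath evaluation over a nested dict."""
    node = message
    for step in path.strip("/").split("/"):
        node = node[step]
    return node

def gate_condition_met(message, xpath, condition_name, attachment_conditions):
    """True if the message value equals the gate condition value."""
    expected = attachment_conditions.get(condition_name)
    return expected is not None and xpath_value(message, xpath) == expected

message = {"body": {"input": {"customerName": "Smith",
                              "insuranceType": "Gold"}}}   # hypothetical message
conditions = {"InsuranceType": "Gold"}   # e.g. the gate condition value in WSRR
print(gate_condition_met(message, "/body/input/insuranceType",
                         "InsuranceType", conditions))
```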
Before you use the Policy Resolution mediation primitive you might need to add
SCA modules, WSDL documents, mediation policies, and mediation policy
attachment documents to your WSRR registry. You can create mediation policies
using WSRR, directly. Alternatively, you can create mediation policies using
Business Space widgets, and the widgets create the mediation policies in WSRR.
Both WebSphere Enterprise Service Bus (WebSphere ESB), Version 7.0, and
WebSphere Process Server, Version 7.0, include Business Space widgets for creating
mediation policies.
To retrieve mediation policies for the current module, the details of your SCA
module must exist in the appropriate registry. When you load an SCA module into
WSRR, the registry creates an SCA module document. The registry also creates an
SCA module object to which you can attach mediation policies.
To retrieve mediation policies for a target service, the WSDL, SCA export or
Manual Endpoint with associated Interface for your target service must exist in the
appropriate registry. When you load a WSDL document into WSRR, the registry
creates objects for any service, port, binding, portType, operation, and message
elements described by the WSDL. WebSphere ESB and WebSphere Process Server
support mediation policies attached to all WSDL object types except message
objects. When you load a SCA module into WSRR the registry creates objects for
the module and any exports and imports defined in the module. WebSphere ESB
and WebSphere Process Server support mediation policies attached to exports and
the interface portType and operations that are linked to the export. WebSphere ESB
and WebSphere Process Server also support mediation policies attached to the
WSRR objects “Manual HTTP Endpoint with associated Interface”, “Manual JMS
Endpoint with associated Interface” and “Manual MQ Endpoint with associated
Interface”, and the interface portType and operations that are linked to these
Manual Endpoints. When planning your mediation policies you should consider
the following points:
v Because mediation policies override module properties, the policy creation
process needs module information even if the policy is going to be attached to a
target service. If you load your SCA modules onto a different WSRR instance to
your WSDL documents, you might need to copy a suitable policy from one
WSRR instance to the other.
v The Web Services Policy Framework specifies that policies can exist at different
levels of the target service. The WS-Policy Framework calls these levels the policy
subject. The levels are: service, endpoint, operation, and message. The endpoint
level contains port, binding, and portType. In the interests of simplicity, you
Note: The Business Space Mediation Policy Administration widget does not
support attaching mediation policies to SCA exports, Manual Endpoints or the
linked interface portType or operations. Policies can be attached to these WSRR
objects using the WSRR Console.
When you have the mediation policies you need, you can attach them to your SCA
module or your target service. If you want to specify gate conditions on a
mediation policy, you must specify them on the policy attachment in WSRR.
If you want to use WSRR governance you must make your WSRR policy
document governed. Then you can move the policy document, and any associated
policies, through the life cycle classifications. If you want to use classifications that
are not related to governance, you must add classifications to the WSRR policy, not
the policy document. In WSRR, there is a default governance life cycle, but you
can define your own. If you want to filter mediation policies according to
particular WSRR classifications, including life cycle classifications, you must also
define the classifications on the Policy Resolution primitive.
Usage
Using mediation policies, you can develop new service interactions that achieve
greater levels of flexibility and administrative control. In addition, you can get new
value out of existing systems by adjusting message flows according to the context
in which they occur.
When you design your mediation flow, any mediation primitive that occurs after
the Policy Resolution primitive can have its dynamic properties overridden, using
values from mediation policies. However, you must specify a valid default value
for every property you want to override. Generally, you put a Policy Resolution
primitive at the start of the flow, except when you need other mediation
primitives, typically the Endpoint Lookup primitive.
You can use mediation policies in many ways. The following are just some of the
ways in which you can use mediation policies:
v Use mediation policies to activate or deactivate properties. For example, you
could turn off message filtering by unsetting the Enabled property of the
Message Filter primitive.
v Use conditional mediation policies, which apply when particular conditions
exist. For example, you could apply different message transformations
depending on different customer types: one transformation for gold customers
and another transformation for silver customers. The mediation policies could
contain a different value for the XSL Transformation Mapping File property,
depending on the customer type.
Target Service 1
If you set the Policy Scope to
Target Service, the Policy
Resolution primitive retrieves
mediation policies that are attached
to WSRR objects representing the
target service. When you load a
WSDL document into WSRR, the
registry creates objects for any
service, port, binding, portType,
operation, and message elements
described by the WSDL. You can
attach mediation policies to any of
these objects except message
objects. When you load an SCA
module into WSRR the registry
creates objects for the module and
any exports and imports defined in
the module. You can attach
mediation policies to exports, the
interface portType and operations
that are linked to the export. You
can also attach mediation policies to
the WSRR objects "Manual HTTP
Endpoint with associated Interface",
"Manual JMS Endpoint with
associated Interface" and "Manual
MQ Endpoint with associated
Interface", and the interface
portType and operations that are
linked to these Manual Endpoints.
Note: Use an Endpoint Lookup
mediation primitive to determine
the exact target service, before
retrieving mediation policies
associated with the target service.
Intersection 2
If you set the Policy Scope to
Intersection, the Policy Resolution
primitive retrieves mediation
policies that are attached to both the
module and the target service, in
WSRR; combining both scopes into
a single mediation policy that meets
the requirements of both.
Default: Module
Conditions conditions:
XPath xpath
The XPath location from which the
run time gets the mediation policy
condition. For example, if an XPath
is set to /body/input/customerName
and the associated Policy
condition name is Customer, the run
time sets the value of Customer to
whatever it finds at
/body/input/customerName.
Comment comment
Any comments that you want to be
saved with the mediation flow
component.
Defines whether the mediation policy selected on the request flow is propagated to
the response flow. By default the mediation policy is not propagated to the
response flow. You can propagate the mediation policy to the response flow by
selecting the check box.
At run time, any Classification you specify in IBM Integration Designer must be
found on a suitable mediation policy in WSRR. WSRR has many classification
systems, including life cycle classifications that can be used for governance.
Note:
v If you want to use WSRR governance you must make the appropriate WSRR
policy documents governed. Then you can move the policy documents, and any
associated policies, through the life cycle classifications. However, if you want to
use classifications that are not related to governance, you must specify the
classifications on the WSRR policies that you want to retrieve, not on the policy
documents.
v Mediation policies, in WSRR, have two classifications that are used for internal
processing: WESB Mediation Policy and WS Policy Framework 1.5. Do not edit or
delete these classifications, or move them to IBM Integration Designer.
If you specify one classification in the Policy Resolution primitive and two
classifications in WSRR, then the mediation policy can be returned, assuming the
names match. For example, if the Policy Resolution primitive specified a
classification of Test and the WSRR policy object specified classifications of Test
and Managed, then the mediation policy would match the query. However, if the
Policy Resolution primitive specified classifications of Test and Managed but the
WSRR policy object only specified a classification of Managed, then the mediation
policy would not match the query.
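The classification match described above is a subset test, sketched here as a minimal illustration (not the actual query implementation): every classification specified on the Policy Resolution primitive must also be present on the WSRR mediation policy.

```python
# Sketch of the classification match: the primitive's classifications must
# be a subset of the classifications on the WSRR policy object.

def policy_matches(primitive_classifications, policy_classifications):
    return set(primitive_classifications) <= set(policy_classifications)

print(policy_matches({"Test"}, {"Test", "Managed"}))     # matches the query
print(policy_matches({"Test", "Managed"}, {"Managed"}))  # does not match
```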
WSRR defines classification systems using the Web Ontology Language (OWL), in
which each classifier is a class and has a URI. OWL implements a simple
hierarchical system. For example, a bank account could start with the following
details:
v Account
– Identifier
– Name
- First name
- Second name
– Address
- First line of address
- Second line of address
The mediation policy processing model defines the result of processing any
combination of mediation policies. However, you can simplify the implementation
of mediation policies by following some basic rules and patterns.
Rules
v If you want to provide a dynamic override for every dynamic property in your
module, when you administer WSRR you should attach all the default mediation
policies to your SCA module.
Note: If you attach all default mediation policies only to your SCA module, all
configuration is driven by mediation policies and any changes made on the
administrative console are ignored.
v Do not allow conflicts between mediation policies with gate conditions. If you
have mutually exclusive gate conditions, the run time will never try to merge
conditional mediation policies.
v Have a single default mediation policy with no gate conditions (the attachment
has no conditions). This default mediation policy contains all module properties
that can be overridden, and is used when none of the conditional mediation
policies apply.
v Create gate conditions so that each gate condition represents a distinct case;
therefore, gate conditions are mutually exclusive, and a maximum of one
conditional mediation policy can be chosen. For example, you could have one
mediation policy with a gate condition whose value is InsuranceType = "Gold",
and another mediation policy with a gate condition whose value is
InsuranceType = "Silver". For a particular message, the InsuranceType will be
either Silver or Gold, and the appropriate mediation policy will be chosen.
The following example shows three mediation policies attached to one module.
Equally, the example could show three mediation policies attached to one scope
point of a target service. Two mediation policies have a gate condition, and one
mediation policy has no gate conditions. The two mediation policies with a gate
condition are mutually exclusive.
At run time, the message content determines which mediation policies are used
(and therefore what module properties can be overridden):
v If the InsuranceType = "Gold", mediation policy P1 is used.
v If the InsuranceType = "Silver", mediation policy P2 is used and properties not
mentioned by P2 are taken from mediation policy P3.
v If the InsuranceType is neither Gold nor Silver, mediation policy P3 is used.
Figure: The module, with dynamic properties Property_A, Property_B, and
Property_C, has three mediation policy attachments: P1 with gate condition
medGate_Con1 (InsuranceType = "Gold"); P2 with gate condition medGate_Con2
(InsuranceType = "Silver"), setting Property_A value=mmm; and P3 with no gate
conditions, setting Property_A value=xxx, Property_B value=yyy, and Property_C
value=zzz.
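The mutually exclusive pattern above can be modeled with a small sketch. The structures are hypothetical, not the runtime implementation; because P1's property values are not shown in the source figure, only P2 and the default policy P3 are modeled.

```python
# Sketch of the mutually exclusive gate-condition pattern. At most one
# conditional policy matches; any property it does not mention falls back
# to the default policy (P3 in the example above).

def resolve(insurance_type, conditional_policies, default_policy):
    """Apply the single matching conditional policy over the default."""
    chosen = dict(default_policy)            # start from the default values
    for gate_value, properties in conditional_policies.items():
        if insurance_type == gate_value:     # gates are mutually exclusive
            chosen.update(properties)
    return chosen

p2 = {"Silver": {"Property_A": "mmm"}}       # gate: InsuranceType = "Silver"
p3 = {"Property_A": "xxx", "Property_B": "yyy", "Property_C": "zzz"}
print(resolve("Silver", p2, p3))
# Property_A comes from P2; Property_B and Property_C come from P3
```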
If you do not have mutually exclusive gate conditions, the run time might try to
merge conditional mediation policies. Any conditional mediation policies that
might be merged together must have module properties that are unique.
v Have a single default mediation policy with no gate conditions (the attachment
has no conditions). This default mediation policy is used when none of the
conditional mediation policies apply.
v Create gate conditions so that more than one conditional mediation policy might
be used, but ensure that mediation policies that might be merged have unique
properties. For example, you could have one mediation policy with a gate
condition whose value is InsuranceType = "Gold", another mediation policy
with a gate condition whose value is InsuranceType = "Silver", and yet another
mediation policy with a gate condition whose value is CustomerType =
"Student". For a particular message, the InsuranceType will be either Silver or
Gold, and the appropriate mediation policy will be used. However, the
mediation policy associated with the gate condition CustomerType = "Student"
might need to be merged with the other conditional mediation policy; therefore,
it must contain unique module properties.
The following example shows four mediation policies attached to one module.
Equally, the example could show four mediation policies attached to one scope
point of a target service.
At run time, the message content determines which mediation policies are used
(and therefore what module properties can be overridden):
v If the InsuranceType = "Gold", mediation policy P1 is used.
v If the InsuranceType = "Silver", mediation policy P2 is used.
v If the CustomerType = "Student", mediation policy P3 is used.
v If two conditional mediation policies are used, (either P1 and P3, or P2 and P3),
no property appears more than once.
– If P1 and P3 are used, Property_A and Property_B come from P1 and
Property_C comes from P3.
– If P2 and P3 are used, Property_A comes from P2, Property_C comes from P3,
and Property_B comes from P4.
v If no conditional mediation policies are used, mediation policy P4 is used.
Figure: The module, with dynamic properties Property_A, Property_B, and
Property_C, has four mediation policy attachments: P1 with gate condition
medGate_Con1 (InsuranceType = "Gold"); P2 with gate condition medGate_Con2
(InsuranceType = "Silver"), setting Property_A value=mmm; P3 with gate condition
medGate_Con3 (CustomerType = "Student"), setting Property_C value=ooo; and P4
with no gate conditions, setting Property_A value=xxx, Property_B value=yyy, and
Property_C value=zzz.
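The non-exclusive pattern above can be sketched in the same illustrative style (hypothetical structures, not the runtime): more than one conditional policy can apply, but policies that might merge set disjoint properties, so the merge never conflicts, and the default policy P4 supplies any property left unset.

```python
# Sketch of merging conditional policies with disjoint (unique) properties.
# The default policy fills in any property the merge leaves unset.

def resolve(matching_policies, default_policy):
    merged = dict(default_policy)
    for policy in matching_policies:   # e.g. both P2 and P3 apply
        merged.update(policy)          # disjoint properties: no conflicts
    return merged

p2 = {"Property_A": "mmm"}             # gate: InsuranceType = "Silver"
p3 = {"Property_C": "ooo"}             # gate: CustomerType = "Student"
p4 = {"Property_A": "xxx", "Property_B": "yyy", "Property_C": "zzz"}
print(resolve([p2, p3], p4))
# Property_A from P2, Property_C from P3, Property_B from P4
```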
Introduction
The mediation policy model describes how the run time calculates the effective
mediation policy for a particular scope, and when the run time applies intersection
rules. For a summary of the Web Services Policy 1.5 Framework terminology, see the
end of this topic.
Note: If you want to use more than one WSRR object type to represent the
target service, you should always attach mediation policies to the most
granular scope point. For example, because a portType can apply to more than
one binding, any mediation policies you attach to the portType should contain
assertions that are binding independent. Be aware that by using more than one
WSRR object type to represent the target service, you can significantly increase
the level of complexity. The Web Services Policy 1.5 framework states that an
effective policy is calculated for each policy subject, and the effective policies are
then merged. Therefore, if you define mediation policies for all the target
service scope points that are supported, the run time would take the following
actions:
a. Calculate the effective policy for the endpoint policy subject, by merging the
mediation policies attached to the port, binding, and portType objects.
b. Calculate the effective policy for the service policy subject, by merging the
mediation policies attached to the service object.
c. Merge the effective policy of the endpoint policy subject with the effective
policy of the service policy subject. If property values conflict, the effective
policy of the endpoint policy subject takes precedence.
3. If the Policy Resolution primitive has a Policy Scope property of Intersection,
the run time applies policy intersection rules to determine if the two effective
policies are compatible (according to the Web Services Policy 1.5 framework).
In a message flow, you can override the values of dynamic properties defined for
the current SCA module. The run time applies the following rules to decide which
property values to apply to a message flow:
1. Mediation policies with gate conditions have the highest precedence. Therefore,
property values defined by these mediation policies have the highest
precedence.
Note: Before a mediation policy with gate conditions can be further evaluated,
all of the gate conditions must be met.
2. Mediation policies without gate conditions have a lower precedence. Therefore,
property values defined by these mediation policies have a lower precedence.
If multiple mediation policies (at the same precedence) contain the same dynamic
property, the mediation policies are merged. In this case, the value of the property
must be the same in all the merged mediation policies. Any mismatch is termed a
policy error, which means that the merge produces no mediation policy at that
precedence level.
Any dynamic property that has not been assigned a value from a mediation policy
document, is assigned the value shown by the administrative console.
In addition, the SCA module has the following mediation policies attached:
v Policy_X, with associated conditions
v Policy_XX, with associated conditions
v Policy_Y, with no associated conditions
In addition, the SCA module has the following mediation policies attached:
v Policy_X, with associated conditions
v Policy_XX, with associated conditions
v Policy_XXX, with associated conditions
v Policy_Y, with no associated conditions
There are mismatches between property values, at the same precedence: the
mismatches occur between the property values in Policy_X and Policy_XX.
Therefore, Policy_X and Policy_XX are not merged and none of the property values
in Policy_X, Policy_XX or Policy_XXX are used (even though there are no merge
errors in Policy_XXX). The mediation policy processing then processes mediation
policies at the next level of precedence. The only mediation policy at this level is
Policy_Y. Therefore, the values from Policy_Y are used.
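The precedence and merge rules applied in the worked example above can be sketched as follows. This is illustrative only; the property names and values are hypothetical stand-ins for the Policy_X, Policy_XX, and Policy_Y contents.

```python
# Sketch of the precedence rules: policies at one level merge only if
# shared properties agree; any mismatch is a policy error, the whole
# level yields nothing, and processing falls to the next level.

def merge_level(policies):
    merged = {}
    for policy in policies:
        for name, value in policy.items():
            if name in merged and merged[name] != value:
                return None            # policy error: conflicting values
            merged[name] = value
    return merged

def resolve_properties(gated_policies, ungated_policies):
    for level in (gated_policies, ungated_policies):  # highest precedence first
        merged = merge_level(level)
        if merged is not None:
            return merged
    return {}                          # console defaults apply to everything

policy_x  = {"Property_A": "1"}        # hypothetical conflicting values
policy_xx = {"Property_A": "2"}
policy_y  = {"Property_A": "9"}
print(resolve_properties([policy_x, policy_xx], [policy_y]))
# Policy_X and Policy_XX conflict, so the value comes from Policy_Y
```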
Terminology
Web services policy terminology is described in the Web Services Policy 1.5
Framework and in the Web Services Policy 1.5 Attachment; in addition, the mediation
policy model uses some further terms.
The Web Services Policy 1.5 Framework includes the following terms:
Policy subject
A logical entity with which a policy can be associated. For example, there
is a service policy subject and an endpoint policy subject.
Policy scope
A point where you can attach a policy. A policy scope could relate to a
service, port, binding, portType, operation, or message.
Note: The service policy subject has one policy scope: service. However, the
endpoint policy subject has a number of policy scopes: port, binding, and
portType.
Effective policy
For a particular policy subject, the effective policy is calculated by merging
the policies that belong to the policy subject.
Introduction
In order to implement the correct mediation policy at run time, you need to
understand how the properties specified by the Policy Resolution mediation
primitive interact with the values of a particular message and the objects in the
registry.
You can specify mediation policy conditions in the Policy Resolution mediation
primitive. You specify where the mediation policy condition values are found in
the message, by providing XPath expressions.
Registry conditions
In the registry, you can load objects such as SCA modules and mediation policy
documents. When a mediation policy document specifies the objects to which it
applies (in this case an SCA module) WSRR creates a policy attachment document.
After a policy attachment document has been created you can add user properties
to it, and the run time interprets some of the user properties as necessary
conditions. Only user properties that begin with the string medGate_ are used as
conditions.
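The rule above amounts to a prefix filter, sketched here with illustrative property names and values:

```python
# Sketch: only user properties on the policy attachment whose names begin
# with medGate_ are treated as gate conditions.

def gate_conditions(user_properties):
    return {name: value for name, value in user_properties.items()
            if name.startswith("medGate_")}

props = {"medGate_Condition1": "InsuranceType = Gold",
         "owner": "integration-team"}   # hypothetical attachment properties
print(gate_conditions(props))           # only medGate_Condition1 is kept
```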
The following diagram shows an SCA module with three associated mediation
policies. Each mediation policy has a mediation policy attachment that specifies
conditions.
Figure: An SCA module with three associated mediation policies (P1, P2, and P3),
each with a mediation policy attachment that specifies conditions.
Runtime conditions
The message values are compared to the mediation policy conditions specified in
the registry. If all the conditions of a mediation policy are met, the mediation
policy can be used. The following table shows which registry policies can be used
with the example message.
Table 21. Suitable policies for the example message

Mediation policy name  Mediation policy condition  Condition met by the example message?  All conditions met by message?
P1                     Continents = Asia           No                                     No
                       Days > 14                   Yes
Note: Even if mediation policy conditions have been met, the properties defined in
a mediation policy might not be used. The mediation policy processing model
determines what information is taken from mediation policy documents, and
applied to message flows.
Introduction
When you use the Service Invoke mediation primitive inside a mediation flow, the
input message is used to call the service. If the call is successful, the response, or a
section of the response identified by one or more XPath expressions, is used to
create the output message. If the call is unsuccessful, you can retry the same
service, or call another service.
You can have multiple Service Invoke mediation primitives inside one mediation
flow. Therefore, you can have a series of service calls. You can use the Service
Invoke mediation primitive in a request or response mediation flow.
Generally, the initial service that the Service Invoke mediation primitive calls is
defined by the reference operation, which is a combination of the Reference name
property and the Operation name property. For a Service Invoke mediation
primitive in a subflow, the reference is defined on the subflow and resolved to a
reference in the parent flow when an instance of the subflow is created.
The Service Invoke mediation primitive has one input terminal (in) and multiple
output terminals. There is a fail terminal (fail) for unmodeled faults, and one
output terminal for each modeled fault. Modeled faults are those that are explicitly
listed in a WSDL file; any other fault is an unmodeled fault. In addition, there is
an output terminal (out), which is used for successful service calls, and a timeout
terminal (timeout), which is used for some types of asynchronous calls. Output
terminals that are created for a specific reason are classified as dynamic terminals.
For example, a WSDL-defined fault causes a dynamic output terminal to be
created.
The Service Invoke mediation primitive can operate in default mode or Message
Enrichment mode. You can configure the mode from the Select Reference
Operation window that opens when the mediation primitive is dropped on the
canvas:
v To use the default mode, ensure that the Message Enrichment mode check box
is clear.
In default mode, the input message, which is received at the in terminal, is passed
directly to the service, and the response message from the service invocation is
passed directly to the out terminal. The body and header sections of the response
message are propagated to the output message.
The in and out terminals are automatically set to the appropriate message types
for the interface and operation with which the mediation primitive is associated.
The input and output terminals of the Service Invoke mediation primitive reflect
the reference operation in the following way:
v The message type of the in terminal must match the request message type of the
reference operation.
v If there is a response message, the message type of the out terminal must match
the response message type of the reference operation.
The following figure shows the message flow in default mode, for a two-way
operation.
Figure: In default mode, the input message received at the in terminal is passed
directly to the service, and the response message from the service is passed
directly to the out terminal.
The following table summarizes the operation of the terminals of the mediation
primitive in default mode.
The message type of the in and out terminals must match, but the type is initially
not set. When the in terminal is wired, its message type is defined implicitly by
the input message, and this message type is propagated to the out terminals.
The following figure shows how the message is enriched as it flows through the
mediation primitive in a two-way operation.
Figure: In Message Enrichment mode, one or more XPath expressions extract a
section of the inbound SMO for use as the service request message. When the
service responds, one or more XPath expressions store the response message into
the inbound SMO structure, which is then propagated to the out terminal.
The following table summarizes the operation of the terminals of the mediation
primitive in Message Enrichment mode.
Table 23. Terminals of the Service Invoke mediation primitive in Message Enrichment mode

Terminal type:     Input
Terminal name:     in
Dynamic terminal?  No
Message type:      Undefined until wire propagation causes the message type to be defined.
Description:       Receives the input message. One or more XPath expressions are used to
                   extract a section of the inbound SMO for use as the request message.
Usage
You can use the Service Invoke mediation primitive to help control the service
retry sequence. The retry sequence can be a combination of the following:
v Re-send the initial request to the initial service.
v Send the initial request to an alternate service.
v Send a new request to the initial service.
v Send a new request to an alternate service.
For more information about using the Service Invoke mediation primitive, see:
“Usage patterns” on page 175.
The name of the service reference to be called. The reference name is associated
with a WSDL interface. Initially, the reference name is set through an IBM
Integration Designer window, and cannot be changed afterward. You have to create
a new Service Invoke mediation primitive to change the reference name.
The name of the service operation to be called. The operation name is associated
with a WSDL operation. Initially, the Operation Name is set through an IBM
Integration Designer window, and cannot be changed afterward. You have to
create a new Service Invoke mediation primitive to change the operation name.
Determines whether the SMO header field Target, if present, should be used to
override the service endpoint specified by the reference operation. You can use the
Endpoint Lookup mediation primitive to set the Target field, or you can set the
field yourself.
Note: The Endpoint Lookup mediation primitive searches for service information
in WSRR, and only certain types of endpoint can be retrieved. For more
information, see: “Endpoint Lookup mediation primitive” on page 71.
The time to wait for a response, when a call is asynchronous with a deferred
response. The Async Timeout property is not used for calls that are asynchronous
with callback.
Require mediation flow to wait for service response when the flow component is
invoked asynchronously with callback forceSync:
Set to true, (select the check box), to force a service call to act in a synchronous
manner. If true, an asynchronous call causes a deferred response, rather than a
callback. Set this property to true if the whole mediation flow is to run in a single
transaction. If you set this property to false and the mediation primitive is
involved in a FanOut/FanIn operation or is contained in a subflow, the run time
will override your setting and force the service call to act in a synchronous
manner.
How the mediation flow component is called | Preferred interaction style of the target | One-way or request-response | Require mediation flow to wait for service response when the flow component is invoked asynchronously with callback | Invocation style
invoke (synchronous) | ANY | One-way | true or false | Async
invoke (synchronous) | ANY | Request-response | true or false | Sync
invoke (synchronous) | SYNC | Either | true or false | Sync
invoke (synchronous) | ASYNC | Either | true or false | Async
invokeAsync (asynchronous with deferred response) | ANY | One-way | true or false | Async
invokeAsync (asynchronous with deferred response) | ANY | Request-response | true or false | Sync
invokeAsync (asynchronous with deferred response) | SYNC | Either | true or false | Sync
invokeAsync (asynchronous with deferred response) | ASYNC | Either | true or false | Async
invokeAsyncWithCallback (asynchronous with callback) | ANY | One-way | true or false | Async
invokeAsyncWithCallback (asynchronous with callback) | ANY | Request-response | true | Async
invokeAsyncWithCallback (asynchronous with callback) | ANY | Request-response | false | AsyncWithCallback
invokeAsyncWithCallback (asynchronous with callback) | SYNC | Either | true or false | Sync
invokeAsyncWithCallback (asynchronous with callback) | ASYNC | One-way | true or false | Async
invokeAsyncWithCallback (asynchronous with callback) | ASYNC | Request-response | true | Async
invokeAsyncWithCallback (asynchronous with callback) | ASYNC | Request-response | false | AsyncWithCallback
Sync
If set to sync, the service invocation is performed synchronously. This setting can allow the service to be included in the transaction scope of the mediation flow when the service or binding supports this function.
Async
Setting the property to async means that the service invocation is performed asynchronously, and the service is outside the scope of the mediation flow transaction. For a one-way operation, a reference qualifier can be used to control whether the asynchronous service request is sent immediately, or when the mediation flow transaction commits. The async setting can also allow an async timeout to be set for a deferred response.
Parameter parameterType:
A preconfigured read-only value identifying whether the element to be transformed or updated forms part of the input, output, or fault message. The valid type is String.
Name name:
A preconfigured read-only name for the input, output, or fault parameter type. The valid type is String.
Type type:
The type of the element value in the message. The valid type is String.
Value value:
Specifies an XPath 1.0 expression that identifies the message element to be transformed or updated. Use the XPath Expression Builder to build a custom XPath expression.
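The Value property takes an XPath 1.0 expression. As a hedged illustration of how such an expression selects a message element, the following sketch uses the standard Java XPath API against a hypothetical message fragment (the element names are illustrative, not the real SMO schema):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathValueExample {
    // Evaluate an XPath 1.0 expression against an XML document and return
    // the string value of the selected element.
    public static String extract(String xml, String path) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(path, doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical message fragment; not the real SMO schema.
        String message = "<body><operation1><input1>42.5</input1></operation1></body>";
        System.out.println(extract(message, "/body/operation1/input1")); // 42.5
    }
}
```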
The Propagate request headers to service being invoked check box is enabled
only when working in Message Enrichment mode. Select this check box if you
want the request message, which is sent to the service, to be populated with the
header of the input message that is received at the in terminal. When this check
box is clear, the header is excluded from the request message.
The Propagate response headers from service being invoked check box is enabled
only when working in Message Enrichment mode. Select this check box if you
want the output message to be populated with the response message header from
the service being invoked. When this check box is clear, the header of the input
message that was passed into the mediation primitive is used.
enrichmentMode:
Use this property to enable Message Enrichment mode, whereby a section of the
input message, which is received at the in terminal of the Service Invoke
mediation primitive, is used for the service invocation.
Retry on retryOn:
Determines whether, and how, fault responses cause a retry. This property is
applicable only to request-response operations.
v Any fault
v Modeled fault
v Unmodeled fault
How many times a service call should be retried before an output terminal is fired.
The output terminal that is fired can be of the following types: modeled fault,
timeout, or fail.
Determines if any alternate endpoints in the SMO should be used on retries. Set to
true, (select the check box), to try alternate endpoints.
This functionality is available only if the Use Dynamic Endpoint property is also specified. If
any fault is returned by the initial service request, and the retry count is greater
than zero, the first endpoint from the alternate endpoint list is used for the retry. If
the retry returns a fault, and the next retry would not exceed the retry count, the
next alternate endpoint is used. After the last endpoint in the alternate endpoints
list is used, the initial endpoint is used again.
For example, suppose that the first endpoint is ServiceA, and the alternate
endpoints are ServiceB and ServiceC. If the retry count is 5, the sequence of
service calls is as follows: ServiceA, ServiceB, ServiceC, ServiceA, ServiceB,
ServiceC.
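As a sketch of the rotation described above (illustrative code, not part of the product), the sequence of endpoints tried for a given retry count can be computed like this:

```java
import java.util.ArrayList;
import java.util.List;

public class EndpointRotation {
    // Compute the call sequence: the initial endpoint is tried first, then the
    // alternate endpoints in order, wrapping back to the initial endpoint,
    // until the retry count is exhausted.
    public static List<String> callSequence(String initial,
                                            List<String> alternates,
                                            int retryCount) {
        List<String> ring = new ArrayList<>();
        ring.add(initial);
        ring.addAll(alternates);
        List<String> calls = new ArrayList<>();
        // one initial call plus retryCount retries
        for (int i = 0; i <= retryCount; i++) {
            calls.add(ring.get(i % ring.size()));
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(callSequence("ServiceA",
                List.of("ServiceB", "ServiceC"), 5));
        // [ServiceA, ServiceB, ServiceC, ServiceA, ServiceB, ServiceC]
    }
}
```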
Considerations:
v Setting the Async timeout property to -1, (an indefinite wait), can have a
performance impact. If the service call is asynchronous with a deferred response,
server resources are consumed until a reply is received.
Default mode
In default mode, the header and body of the input message, which is received at
the in terminal, are used in the request message that is sent to the service. The out
terminal uses the header and body of the response message, and the context of the
input message to populate the output message.
The following figure shows how the SMOs are populated in the message flow.
[Figure: SMO propagation in default mode]
In this configuration, XPath expressions are specified within the input message that
will be used for the service invocation, and XPath expressions also specify where
to place the elements of the response message. The inbound XPath expressions are
used to construct a new body for the request message that is sent to the service.
The out terminal populates the output message by merging the elements of the
response message body with the contents of the input message that was passed
into the mediation primitive.
The following figure shows how the SMOs are populated in the message flow.
Figure 12. SMO propagation in Message Enrichment mode with XPath only configured
In this configuration, XPath expressions are specified within the input message that
will be used for the service invocation, and XPath expressions also specify where
to place the elements of the response message. Additional settings are specified to
propagate request headers to the service being invoked and to propagate response
headers from the service being invoked. The inbound XPath expressions are used
to construct a new body for the request message that is sent to the service, and the
input message header is also used in the request message. The out terminal
populates the output message with the response message header, the context and
body of the input message that was passed into the mediation primitive, and also
merges in the elements of the response message body.
The following figure shows how the SMOs are populated in the message flow.
Figure 13. SMO propagation in Message Enrichment mode with XPath, and request and response header propagation configured
In this configuration, XPath expressions are specified within the input message that
will be used for the service invocation, and XPath expressions also specify where
to place the elements of the response message. An additional setting is specified to
propagate request headers to the service being invoked. The inbound XPath
expressions are used to construct a new body for the request message that is sent
to the service, and the input message header is also used in the request message.
The out terminal populates the output message with the contents of the input
message that was passed into the mediation primitive, and also merges in the
elements of the response message body.
The following figure shows how the SMOs are populated in the message flow.
Figure 14. SMO propagation in Message Enrichment mode with XPath and request header propagation configured
In this configuration, XPath expressions are specified within the input message that
will be used for the service invocation, and XPath expressions also specify where
to place the elements of the response message. An additional setting is specified to
propagate response headers from the service being invoked. The inbound XPath
expressions are used to construct a new body for the request message that is sent
to the service. The out terminal populates the output message with the response
message header, the context and body of the input message that was passed into
the mediation primitive, and also merges in the elements of the response message
body.
The following figure shows how the SMOs are populated in the message flow.
Figure 15. SMO propagation in Message Enrichment mode with XPath and response header propagation configured
Usage patterns
Usage patterns for the Service Invoke mediation primitive.
The following patterns let the Service Invoke mediation primitive act as a proxy,
retry or combine services, augment messages, and aggregate responses.
You can use the Service Invoke mediation primitive to act as a proxy to an external
service.
The Service Invoke mediation primitive calls the service provider and the response
is returned directly to the user; you do not need to explicitly wire a response flow.
Figure 16. The Service Invoke mediation primitive acting as a proxy to an external service
You can use the Service Invoke mediation primitive to retry the same service if the
initial call is unsuccessful.
You could use this functionality for calling services across less reliable networks.
The Service Invoke mediation primitive calls the service provider and, if the call
fails, retries the call. You must set the following Service Invoke properties (in
addition to the Reference Name and Operation Name, which must always be set):
Table 24. The Service Invoke properties for retrying the same service
Property Setting
Retry On v Any fault if you want to retry after any fault.
v Modeled fault if you want to retry after faults defined by the WSDL.
v Unmodeled fault if you want to retry after faults not defined by the WSDL.
Retry Count Number of times you want to retry.
Retry Delay Delay (in seconds) between retry attempts.
Try Alternate Endpoints false (unchecked).
You can use the Service Invoke mediation primitive to provide a service that
combines the availability of multiple equivalent services.
If you want to retry alternate services that all use the same service interface, the
alternate endpoints must be placed in the AlternateTarget field of the service
message object (SMO). The alternate endpoints must have the same WSDL
portType (set of operations and associated messages).
If any fault is returned by the initial service request, and the retry count is greater
than zero, the first endpoint from the alternate endpoint list is used for the retry. If
the retry returns a fault, and the next retry would not exceed the retry count, the
next alternate endpoint is used. After the last endpoint in the alternate endpoints
list is used, the initial endpoint is used again. For example, suppose that the first
endpoint is ServiceA, and the alternate endpoints are ServiceB and ServiceC. If the
retry count is 5, then the sequence of service calls is as follows: ServiceA, ServiceB,
ServiceC, ServiceA, ServiceB, ServiceC.
The following figure shows how you can call a service and, if there is a fault, call
an alternate service. The figure assumes that you set the following properties:
Table 25. Endpoint Lookup and Service Invoke properties, for the calling of services with the same interface
Mediation Primitive Property Setting
Endpoint Lookup Match Policy Return all matching endpoints, and set alternate routing
targets
Figure 17. The Service Invoke mediation primitive retrying alternate services
At run time, if the registry called by the Endpoint Lookup mediation primitive
returns a number of service endpoints, the mediation flow is as follows:
1. A list of all the endpoints is put in the service message object (SMO) context.
2. The first matching endpoint is stored in the SMO Target field.
3. The remaining matches are put in the SMO AlternateTarget field.
4. The Service Invoke uses the Target field to make the initial service call.
5. If no fault is returned, this is the end of the Service Invoke processing.
6. If a fault is returned, the following actions occur:
a. The first endpoint from the AlternateTarget list is used for the first retry.
b. If the retry returns a fault and the next retry would not exceed the Retry
Count, the next endpoint from the AlternateTarget list is called.
c. After the last endpoint in the AlternateTarget list has been tried, the next
retry uses the Target field.
d. If the Target field call returns a fault, the first endpoint from the
AlternateTarget list is used.
e. The calls carry on using the same algorithm until a successful call is made,
or the Retry Count is reached.
You can use the Service Invoke mediation primitive to provide a service that
combines the availability of multiple equivalent services.
If you want to retry alternate services that use different service interfaces, you can
use multiple Service Invoke mediation primitives. On the first Service Invoke, you
wire the modeled fault terminal to drive the call to the alternate service.
The following figure shows how you can call a service and, if there is a fault, call
an alternate service that has a different WSDL portType (set of operations and
associated messages). In this example, the Service Invoke Retry On and Retry
Count properties are not used for the service retry. The retry is driven by a
modeled fault. The figure assumes the following Service Invoke properties:
Table 26. Service Invoke properties for retry of a service with a different interface
Property Setting
Retry On Never. This is the default provided by the WebSphere Integration Developer tools.
Retry Count 0. This is the default provided by the WebSphere Integration Developer tools.
Use Dynamic Endpoint false (unchecked).
Try Alternate Endpoints false (unchecked).
Figure 18. The Service Invoke mediation primitive retrying an alternate service
You can use the Service Invoke mediation primitive to augment an input message,
in a similar way to the Database Lookup mediation primitive.
The following figure shows how you can save the original message in the SMO
context, call a service, and then create a new message from the original message
and the call response.
Figure 19. The Service Invoke mediation primitive augmenting an input message
You can use the Service Invoke mediation primitive to aggregate a number of
service responses into a single message. There are many aggregation patterns that
you can use, many of which use the Fan In and Fan Out mediation primitives. For
more information, see: “Fan Out mediation primitive” on page 97
The implication of this aggregation pattern is that the response from the first
service call provides the basis for the second service call. If the response from the
first service call does not match the request to the next service call, some mapping
must take place.
The following figure shows how you can: call a service, save the response, and
then call another service based on the information gained from the first service.
Figure 20. The Service Invoke mediation primitive aggregating service responses
Introduction
If the service call is asynchronous with callback, the initial thread does not wait
and can do other mediation work. This invocation style is not supported in an
aggregation block (an aggregation block is a group of mediation primitives that
occur between a Fan Out mediation primitive and a Fan In mediation primitive). If
the service call is asynchronous with deferred response, and the Service Invoke
mediation primitive is in an aggregation block, the initial thread can also do other
mediation work.
Details
The service call can be made using one of the following invocation styles.
Table 27. Invocation styles used by the Service Invoke mediation primitive
Invocation style Description
Synchronous The thread blocks and waits for the response. The response returns on the
same thread. The invoke style of Service Component Architecture (SCA)
invocation is used.
Asynchronous with deferred response The thread waits for the response. If the Service Invoke mediation primitive
is not in an aggregation block, the thread waits after each service request,
until a response is received. If the Service Invoke mediation primitive is in
an aggregation block, further processing of the aggregation can be
performed before the thread waits for responses to all outstanding service
requests. In both cases, the invokeAsync style of SCA invocation is used. For
a request-response operation, invokeResponse is used to retrieve the
response from the service. The async timeout property can be used to
specify the maximum time to wait for the response.
Note: If there is an existing transaction, the wait occurs inside the existing
transaction; therefore, the wait is also bound by the global transaction
timeout.
Asynchronous with callback The original thread does not wait for a response or callback. The original
thread continues and any further mediation primitives wired on the input
side of the Service Invoke mediation primitive are called. The Service Invoke
response is received on a new thread, and the new thread continues the
mediation flow from the Service Invoke mediation primitive.
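The three styles in the table can be sketched with plain Java futures; this is an illustration of the blocking behavior only, not the SCA programming model, and the service call is simulated:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class InvocationStyles {
    // Simulated service call; returns its response on another thread.
    static CompletableFuture<String> callService(String request) {
        return CompletableFuture.supplyAsync(() -> "response:" + request);
    }

    // Synchronous: block and wait for the response on the same thread.
    public static String synchronous(String request) {
        try {
            return callService(request).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Asynchronous with deferred response: send the request, optionally do
    // other work, then wait, bounded by an async timeout.
    public static String deferredResponse(String request, long timeoutSeconds) {
        CompletableFuture<String> pending = callService(request);
        // ... further processing could be performed here before the wait ...
        try {
            return pending.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Asynchronous with callback: the original thread continues; the
    // response is handled on another thread.
    public static void withCallback(String request, Consumer<String> callback) {
        callService(request).thenAccept(callback);
    }
}
```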
Generally, the export binding of the mediation flow module determines the way
the mediation flow component is called. The mediation flow component is in a
mediation flow module: the mediation flow module has exports, and the exports
have bindings. Normally, the exports are wired directly to the mediation flow
component, although the exports can be wired to a Java SCA component and the
Java SCA component wired to the mediation flow component. Because Web
Services are inherently synchronous, and messaging systems are inherently
asynchronous, the export bindings have the following effect:
v An export with a Web Services binding invokes a mediation flow component
using invoke.
v For a one-way operation, an export with a JMS or MQ binding invokes a
mediation flow component using invokeAsync .
v For a request-response operation, an export with a JMS or MQ binding invokes a
mediation flow component using invokeAsyncWithCallback .
v An export with an SCA binding can invoke a mediation flow component using
invoke or invokeAsync or invokeAsyncWithCallback.
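The bullets above can be summarized as a small lookup; the binding labels here are illustrative strings, not a product API:

```java
public class ExportBindingStyle {
    // Map an export binding and operation type to the SCA invocation style
    // used to call the mediation flow component, per the list above.
    public static String invocationStyle(String binding, boolean requestResponse) {
        switch (binding) {
            case "WebServices":
                // Web Services are inherently synchronous.
                return "invoke";
            case "JMS":
            case "MQ":
                // Messaging systems are inherently asynchronous.
                return requestResponse ? "invokeAsyncWithCallback" : "invokeAsync";
            case "SCA":
                // An SCA binding can use any of the three styles.
                return "invoke, invokeAsync, or invokeAsyncWithCallback";
            default:
                throw new IllegalArgumentException("Unknown binding: " + binding);
        }
    }
}
```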
The invocation style that is used to call the service is determined by the
properties of the Service Invoke mediation primitive. For more information on
invocation style options, see: Service Invoke mediation primitive
The following figure shows a Message Filter mediation primitive with two output
terminals that are wired to other mediation primitives. Assume that these terminals
have associated filter patterns and that the Distribution mode property is set to
All. Therefore, if both patterns match, both output terminals are fired. The first
terminal to match is fired first, followed by the second terminal to match and so
on.
For the purposes of this example, assume that the mediation flow component is
called using invokeAsyncWithCallback, and that the preferred interaction style of
the references associated with the two callout nodes is asynchronous.
Figure 21. Using the Service Invoke mediation primitive for parallel processing
Note: The exact order of events is timing-dependent: the service call might
respond to ServiceInvoke1 at any time.
[Figure: the in, out, modeled fault, and fail terminals of the Service Invoke mediation primitive compared with the callout, callout response, and callout fault nodes for a getQuote operation.]
Similarities between the Service Invoke mediation primitive and the callout:
v The in terminal of the Service Invoke corresponds to the in terminal of the
callout.
v The out terminal of the Service Invoke corresponds to the out terminal of the
callout response.
v The fail terminal of the Service Invoke corresponds to the fail terminal of the
callout response (for an unmodeled fault).
v A modeled fault output terminal of the Service Invoke corresponds to a
modeled fault terminal of the callout fault.
Differences between the Service Invoke mediation primitive and the callout:
v The Service Invoke mediation primitive does not switch from request flow to
response flow.
v The Service Invoke mediation primitive does not modify either the transient
context or the correlation context.
Usage considerations
Introduction
You can use the Set Message Type mediation primitive to treat weakly-typed
message fields as though they are strongly-typed. A field is weakly-typed if it can
contain more than one type of data. A field is strongly-typed if its type and
internal structure are known. The Set Message Type mediation primitive lets you
do the equivalent of casting a generic data type to a more specific data type.
The Set Message Type mediation primitive lets you overlay message fields with
more detailed structures, and then use the more detailed structures in other
mediation primitives. For example, if a message field is defined to contain any
type of content but you know it will contain customer details, you might overlay
the generic structure with a customer details structure.
After you define how weakly-typed fields are to be interpreted, the WebSphere
Integration Developer tools show the more detailed structures. The more detailed
structures make it easier for you to manipulate the message content. For example,
you could create mappings that operate on the contents of the weakly-typed fields.
The Set Message Type mediation primitive has one input terminal (in), one output
terminal (out), and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. At run time, if
no exception occurs during the processing of the input message, the out terminal
propagates the unmodified message. However, if an exception occurs during the
processing of the input message, the fail terminal propagates the original message,
and stores the exception information in the failInfo element of the service
message object (SMO). The fail terminal is used if you enable validation, and the
input message fields do not conform to the strong data types you specify.
Details
After you wire together all parts of your mediation flow, and configure your
mediation primitives, every terminal has a terminal type with an associated
message type. When you specify strongly-typed data, using the Set Message Type
mediation primitive, this information is associated with the out terminal.
Using the Set Message Type mediation primitive you define how fields in the
input message are to be interpreted and this information is accessible to mediation
primitives that come later in the mediation flow. The information specified by the
Set Message Type persists through the rest of the mediation flow, unless another
mediation primitive changes the message type. For example, an XSLT mediation
primitive might change the message type, or another Set Message Type might reset
the strong-typing.
An ESB is often used to make routing decisions based on message content, and to
transform messages to a format understood by the service provider. In order to
make routing decisions, or specify transformations, it is useful to have a complete
representation of the message. If the XSLT mapping editor, or XPath wizard, shows
your message contains weakly-typed fields, you can overlay these with a detailed
representation using the Set Message Type mediation primitive. Without the Set
Message Type capabilities, you might have to write Java code for a Custom
Mediation primitive, or create a hand-written XSL stylesheet for an XSLT
mediation primitive.
You can use the Set Message Type mediation primitive to treat a weakly-typed
message field as though it is a strongly-typed field. You can also use the Set
Message Type mediation primitive to treat a strongly-typed field as though it
contains data of a different strong type. However, the new strong type must be
derived from the strong type declared in the input message.
At run time, the Set Message Type mediation primitive lets you check that the
message content matches the specific data types you expect.
Use the Message field refinements property to specify which fields in the
message are refined by more specific typing information. By default, this property
is empty.
Generally, refine fields that are defined by the following XML schema weak types:
xsd:any, xsd:anyType, xsd:anySimpleType, and fields that are defined to be
replaceable using substitution groups. You specify the XPath of the message field
you want to refine, and the data type to use. IBM Integration Designer provides a
graphical interface to help you specify the XPath and data type. You can specify
refinements for more than one message field, but duplicate entries for the same
message field are not allowed.
If true, causes the current mediation primitive to reset Message field refinements
information from previous Set Message Type mediation primitives.
Validate validateInput:
If true, causes the Set Message Type mediation primitive to perform runtime
validation. The validation includes all message fields and not just those that you
have overlaid.
If there is a mismatch, an exception occurs and the fail terminal propagates the
original message, and stores the exception information in the failInfo element of
the SMO. The exception information stored is the text of message CWSXM3802.
For example, CWSXM3802E: The type at /body/operation1/input1/float is
’string’, while the asserted type states it should be ’float’.
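The kind of mismatch reported by CWSXM3802E can be sketched as a check that a field's string content is valid for the asserted type. This illustrative example covers only the 'float' case; the real validation covers every field:

```java
public class TypeAssertionCheck {
    // Return true if the field content is valid for an asserted 'float'
    // type; false indicates a mismatch, in which case the fail terminal
    // would be fired with the CWSXM3802 message text.
    public static boolean conformsToFloat(String value) {
        try {
            Float.parseFloat(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```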
Considerations:
v At run time, the Set Message Type mediation primitive does not affect the real
structure or content of a message. The Set Message Type mediation primitive
makes it easier for you to manipulate messages.
v The Validate property is the only promotable property of the Set Message Type
mediation primitive. Because the Validate property is promotable, the runtime
administrator can turn validation on and off.
Introduction
You can use the SLA Check mediation primitive to check that a service level
agreement (SLA) exists.
The SLA Check mediation primitive has one input terminal (in), one fail terminal
(fail), and two output terminals (accept and reject). The in terminal is wired to
accept a message and the other terminals are wired to propagate a message. If the
information passed on the incoming message is used to successfully find a
matching SLA in WSRR, the accept terminal is fired. However, if a matching SLA
is not found, the reject terminal is fired.
Usage
In order to find a matching SLA in WSRR, the SLA Check mediation primitive uses
information in the incoming message. The SLA is matched on three parameters:
1. Endpoint. This field can be a literal value, or can be passed as part of the
message, and identifies the target endpoint that the consumer wants to call.
This field is mandatory; if it is not set, the reject terminal is fired.
2. Consumer Identifier. This field can be a literal value, or can be passed as part
of the message, and identifies the consumer of the target endpoint. This is an
optional field depending on whether the relevant artifacts in WSRR have the
identifier defined. For more information, see the WSRR documentation on the
governance enablement profile.
3. Context Identifier. This field can be a literal value or can be passed as part of
the message and identifies the context under which the consumer's invocation
of the target endpoint occurs. This is an optional field depending on whether
the relevant artifacts in WSRR have the identifier defined. For more
information, see the WSRR documentation on the governance enablement
profile.
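A minimal sketch of this three-parameter match (illustrative only; the real match is performed against SLA artifacts in WSRR): the endpoint must match, while the consumer and context identifiers are checked only when the SLA defines them:

```java
public class SlaMatcher {
    // Return true when the message identifies a matching SLA: the accept
    // terminal would be fired. Endpoint is mandatory; consumer and context
    // identifiers are optional, depending on whether the SLA defines them.
    public static boolean matches(String msgEndpoint, String msgConsumer, String msgContext,
                                  String slaEndpoint, String slaConsumer, String slaContext) {
        if (msgEndpoint == null || !msgEndpoint.equals(slaEndpoint)) {
            return false; // mandatory endpoint missing or different: reject
        }
        if (slaConsumer != null && !slaConsumer.equals(msgConsumer)) {
            return false; // SLA defines a consumer identifier that must match
        }
        if (slaContext != null && !slaContext.equals(msgContext)) {
            return false; // SLA defines a context identifier that must match
        }
        return true;
    }
}
```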
Identifies the WSRR definition to be used by the SLA Check mediation primitive.
A WSRR definition is created using the server administrative console and provides
connection information for a WSRR instance. At least one WSRR definition must
exist on the server to which your SCA module is installed. If the Registry Name is
absent, the default WSRR definition is used.
Endpoint endpoint:
This field is either a literal string value or an XPath 1.0 expression. An XPath
expression indicates the location in the message to be interpreted as the endpoint
in the SLA. The SLA Check mediation primitive uses this identifier to determine
whether the consumer has the appropriate service level agreements in place to
make an invocation of this endpoint.
Consumer ID consumerId:
This field is either a literal string value or an XPath 1.0 expression that indicates
the location in the message to be interpreted as the consumer identifier. The
consumer identifier is an identifier that the consumer can pass in the header of the
service invocations it attempts. The SLA Check mediation primitive uses the
consumer identifier to determine whether the consumer has the appropriate service
level agreements to call the endpoint defined in the endpoint field.
This is an optional field, depending upon whether the associated artifacts in the
repository have the consumer identifier defined.
Context ID contextId:
This field is either a literal string value or an XPath 1.0 expression that indicates
the location in the message to be interpreted as the context identifier. The context
identifier is an identifier that the consumer can provide in the header of the service
invocations it attempts. The SLA Check mediation primitive uses the context
identifier to determine whether the consumer has the appropriate service level
agreements to call the endpoint defined in the endpoint field.
This is an optional field, depending upon whether the associated artifacts in the
repository have the context identifier defined.
Introduction
If you use WebSphere ESB to integrate your enterprise and model your business in
WSRR, using its Governance Enablement Profile (GEP), you can select dynamic
endpoints based on a number of factors modeled in that profile. For example:
v Whether the consumer of the endpoint has a valid SLA for the endpoint
v Whether the particular SLA is active
v Whether the endpoint is online
v Whether the endpoint has a certain desired classification; for example, whether it
is Production or Development.
The SLA Endpoint Lookup mediation primitive has one input terminal (in), one
fail terminal (fail), and two output terminals (out and noMatch). The in terminal is
wired to accept a message and the other terminals are wired to propagate a
message. If the information passed on the incoming message successfully finds a
matching endpoint, the out terminal propagates the message; otherwise, the
noMatch terminal propagates the original message.
Usage
You can add extra parameters to the search to further refine the endpoint
selection. Use the table of user-defined parameters under the Advanced tab to
supply these extra selection parameters. However, if you add extra parameters,
the named query installed on the WSRR system must be altered to match these
additions. For more information about refining endpoint selection, see Endpoint
selection with enhanced selection criteria.
Identifies the WSRR definition to be used by the SLA Endpoint Lookup mediation
primitive. A WSRR definition is created using the server administrative console
and provides connection information for a WSRR instance. At least one WSRR
definition must exist on the server to which your SCA module is installed. If the
Registry Name is absent, the default WSRR definition is used.
Consumer ID consumerId:
This field is either a literal string value or an XPath expression that indicates the
location in the message to be interpreted as the consumer identifier. The consumer
identifier is an identifier that the consumer can pass in the header of the service
invocations it attempts.
The SLA Endpoint Lookup mediation primitive uses the consumer identifier to
find the consumer in the SLA. Because most interactions will be through web
services and the consumer identifier will not be part of the message payload
itself, the default value for this field is /headers/
SOAPHeader[name=’GEPGatewayHeader’]/value/consumerID. The
GEPGatewayHeader is a supplied schema specifically used for this purpose; the
XPath expression indicates that the value for the consumer identifier should be
taken from this particular field of a specific SOAP header in the incoming message.
You must enable the GEPGatewayHeader in the mediation module dependencies
section. This usage pattern can be overridden with another XPath expression or
literal value if necessary.
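As a rough illustration of how a default expression of this shape selects the consumer ID, the following sketch evaluates an equivalent path against a simplified XML rendering of the SMO headers section. The XML layout and values are invented for the example; the real SMO is a service message object, not a plain XML document.

```python
# Hypothetical sketch: resolving the documented default Consumer ID
# expression against a simplified XML rendering of the SMO headers:
#   /headers/SOAPHeader[name='GEPGatewayHeader']/value/consumerID
import xml.etree.ElementTree as ET

smo_headers = ET.fromstring("""
<headers>
  <SOAPHeader>
    <name>SomeOtherHeader</name>
    <value><consumerID>ignored</consumerID></value>
  </SOAPHeader>
  <SOAPHeader>
    <name>GEPGatewayHeader</name>
    <value><consumerID>GoldConsumer</consumerID></value>
  </SOAPHeader>
</headers>
""")

# ElementTree's limited XPath supports the [child='text'] predicate,
# which is enough to model the documented expression here.
node = smo_headers.find("SOAPHeader[name='GEPGatewayHeader']/value/consumerID")
print(node.text)  # -> GoldConsumer
```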
Context ID contextId:
This field is either a literal string value or an XPath expression which indicates the
location in the message to be interpreted as the context identifier. The context
identifier is an identifier that the consumer can provide in the header of the service
invocations it attempts.
The SLA Endpoint Lookup mediation primitive uses the context identifier to locate
the exact SLA for a particular consumer. Because most interactions will be
through web services and the context identifier will not be part of the message
payload itself, the default value for this field is /headers/
SOAPHeader[name=’GEPGatewayHeader’]/value/contextID. The GEPGatewayHeader
is a supplied schema specifically used for this purpose; the XPath expression
indicates that the value for the context identifier should be taken from this
particular field of a specific SOAP header in the incoming message. You must
enable the GEPGatewayHeader in the mediation module dependencies section.
This usage pattern can be overridden with another XPath expression or literal
value if necessary.
This field is either a literal string value or an XPath expression that indicates the
location in the message to be interpreted as the endpoint classification.
The SLA Endpoint Lookup mediation primitive uses this classification to further
refine the selection of endpoints associated with a particular SLA. Because most
interactions will be through web services and the endpoint classification will
not be part of the message payload itself, the default value for this field is
http://www.ibm.com/xmlns/prod/serviceregistry/6/1/
GovernanceProfileTaxonomy#Development. When deploying to different
environments, change the default by using the Promoted Properties function.
These extra fields are either literal string values or XPath expressions that indicate
the location in the message where the parameter value is located.
The SLA Endpoint Lookup mediation primitive uses these extra parameters to
further refine the selection of endpoints associated with a particular SLA.
Value value
The value of the parameter can be
specified as either a literal or an
XPath expression.
Description description
A description of the parameter. It is
used solely for documentation, and
plays no part in the selection of the
endpoint.
Introduction
You can use the SOAP Header Setter mediation primitive to provide a mechanism
for managing SOAP headers in the message. You can change, copy, add, or delete
SOAP headers by setting the mediation primitive properties.
If you want multiple header changes you can set multiple actions. Multiple actions
are acted on sequentially, in the order you specify; this means that header changes
can build on each other.
You can search for SOAP headers that already exist in the service message object
(SMO) by specifying values to match on. If matching headers are found, they can
then be deleted from the message, copied to another location in the SMO, or have
fields set to specified values. If matching headers are not found, new headers can
be created using specified header field values.
The SOAP Header Setter mediation primitive has one input terminal (in), one
output terminal (out) and a fail terminal (fail). The in terminal is wired to accept a
message and the other terminals are wired to propagate a message. If the
mediation is successful, the out terminal propagates the modified message. If an
exception occurs during the transformation, the fail terminal propagates the
original message, together with any exception information contained in the failInfo
element.
Details
You can create a new SOAP header and specify the values it contains. The new
SOAP header element is added to the service message object (SMO) by appending
it to the end of the SOAP header list.
You can also find specific SOAP headers, in the list of SOAP headers that a
message contains, and set or update the header values.
Alternatively, you can copy the first SOAP header that matches your search criteria
to another location in the SMO (either in the SMO context or body).
Finally, you can find specific SOAP headers in the list of SOAP headers that a
message contains, and delete the headers.
You can set the SOAP header elements that you want to work with, using a
wizard. Depending on whether you want to create, set, copy, or delete headers, the
wizard presents different options. For example, to create a new header, the wizard
would help you to set the following properties: Create > Name, Namespace, and
Note: First, you set the values for the search, and then you set the values for the
updates.
Defining the search criteria can be relatively complex, because a SOAP message
can contain multiple SOAP headers with the same name. When you define the
search criteria you can specify the SOAP header name, type, and namespace, as
well as the values of certain fields. For example, you might want to search for all
SOAP headers of type WS-Addressing. Alternatively, you might want to search for a
SOAP header that contains a field called name whose value is Smith.
Generally, you specify the field values of SOAP headers, using either a literal value
or an XPath expression.
Usage
You can use the SOAP Header Setter mediation primitive to ensure that when a
SOAP message is sent to another system, the headers that are sent with the
message are correctly set.
Because the operations you define occur sequentially, a later operation can depend
on an earlier operation. For example, you could create a new header, copy it to
elsewhere in the SMO and then delete it from the list of headers it was initially
appended to.
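The create-copy-delete chain described above can be sketched as follows; the smo dictionary, function names, and header names are all hypothetical stand-ins for the real SMO structure and wizard actions:

```python
# Hypothetical sketch of the sequential semantics: actions run in order,
# so a later action can build on the result of an earlier one. This is
# not the product's data model, just an illustration of the ordering.
smo = {"headers": [], "context": {}}

def create(smo, name, value):            # append a new header to the list
    smo["headers"].append({"name": name, "value": value})

def copy_to_context(smo, name, key):     # copy first matching header elsewhere
    match = next(h for h in smo["headers"] if h["name"] == name)
    smo["context"][key] = match["value"]

def delete(smo, name):                   # delete matching headers
    smo["headers"] = [h for h in smo["headers"] if h["name"] != name]

create(smo, "RoutingKey", "gold")                 # 1. create a new header
copy_to_context(smo, "RoutingKey", "routingKey")  # 2. copy it elsewhere in the SMO
delete(smo, "RoutingKey")                         # 3. delete it from the header list

print(smo)  # -> {'headers': [], 'context': {'routingKey': 'gold'}}
```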
You can also use the SOAP Header Setter mediation primitive to help to filter
messages, using the Message Filter mediation primitive. You might want to find a
particular header and make it available to be used in the filtering. For example,
you could copy a SOAP header to a more accessible place, and the Message Filter
primitive could then use the details inside the header. You might need to use the
Set Message Type mediation primitive between the SOAP Header Setter primitive
and the Message Filter primitive (in order to perform type casting).
A table of actions that you want to perform on SOAP header elements, in the
SMO. You can add to this table by clicking Add. Then follow the instructions of
the wizard.
Note: You might need to add references to the header type schemas you want to
use.
Values values
The SOAP header values that you want to amend or create, in the message SOAP
headers. You can set each value to be either a literal value or an XPath expression,
which is resolved at run time to provide the value. The value provided must be
compatible with the field where it is to be set. For example, if the field is of type
int, the value could validly be 14, but not GoldAccount.
Used only when the Mode property (or Action) is Create or Find and Set.
If the Mode property (or Action) is set to Find and Set, the wizard lets you use
Search Values to define the search criteria, and then use Values to specify the
values you want to set (when the search is satisfied).
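The compatibility rule for values can be illustrated with a small check. The function below is a hypothetical helper, not part of the product; the product responds to an incompatible value with a runtime exception rather than a boolean result.

```python
# Illustrative check of the compatibility rule: a value destined for an
# int header field must parse as an integer ("14" is valid,
# "GoldAccount" is not). Hypothetical helper for illustration only.
def compatible_with_int(value: str) -> bool:
    try:
        int(value)
        return True
    except ValueError:
        return False

print(compatible_with_int("14"))           # -> True
print(compatible_with_int("GoldAccount"))  # -> False
```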
Type headerType
The type of the SOAP header element.
Required Yes
Valid values Boolean
Default false
Considerations:
v If the Header Action is Find and Set and a header cannot be found, a new
header is created.
v If you attempt to set a header field to a value of incompatible type, a runtime
exception occurs.
v If the Target Destination resolves to more than one element in the SMO, a
runtime exception occurs.
v If the Validate input property is true and the input message is invalid, a
runtime exception occurs.
Introduction
The Stop mediation primitive has one input terminal (in) and no output terminals.
The input terminal accepts messages and consumes them without generating any
exception information.
Usage
You can use the Stop primitive to stop a particular path through a mediation flow,
without generating an exception; the message is consumed by the run time with no
further processing. Wiring a normal output terminal to a Stop primitive results in
the same runtime behavior as leaving the terminal unwired; wiring a fail terminal
to a Stop primitive causes exceptions to be silently consumed, rather than
propagated.
Introduction
Use mediation subflows to reuse common patterns in mediation flows, and also as
a way to group primitives in the Mediation Flow editor.
A mediation subflow has one or more in nodes, one or more mediation primitives,
and one or more out nodes. The in and out nodes become the input and output
terminals on the subflow mediation primitive.
Details
Any property that you promote from a primitive in the top level request, response,
or fault flow is also a dynamic property. A dynamic property can be overridden, at
run time, using a mediation policy. Although you can override promoted
properties dynamically, you must always specify a valid default value.
Promoted properties have an alias name which, for a mediation primitive in the
top level request, response, or fault flow, is the name displayed on the runtime
administrative console. For a mediation primitive in a subflow, the alias name is
the name by which the property is known in the parent flow. You can set the alias
name and the alias value from WebSphere Integration Developer. Multiple
promoted properties can be given the same alias name if they are of the same type.
In one module or mediation module, promoted properties that have the same alias
name and group use the same value. Generally, you should choose a suitable alias
name for your promoted properties rather than accept the default name: choosing
a suitable name helps you identify properties at run time.
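The sharing rule above can be sketched with a small registry keyed by group and alias name; the layout, names, and values are assumptions for illustration only, not the product's implementation:

```python
# Hypothetical sketch: within one module, promoted properties that share
# an alias name and group resolve to a single value, so one runtime
# override affects every primitive that promoted that alias.
promoted = {}  # (group, alias) -> value

def promote(group, alias, default):
    # first promotion supplies the default; later ones reuse it
    promoted.setdefault((group, alias), default)

def override(group, alias, value):
    # e.g. an administrator changing the value at run time
    promoted[(group, alias)] = value

promote("Filters", "retryCount", "3")    # primitive A promotes the property
promote("Filters", "retryCount", "5")    # primitive B reuses the same alias
override("Filters", "retryCount", "7")   # one override updates the shared value

print(promoted[("Filters", "retryCount")])  # -> 7
```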
Usage
You can reuse mediation logic for common tasks by creating mediation subflows. A
mediation subflow can be invoked from multiple mediation flows, or multiple
times within the same mediation flow. A mediation subflow is invoked from a
parent mediation flow using a subflow mediation primitive.
These properties correspond to the properties promoted from primitives within the
subflow implementation.
Name name
The name of the property, which
corresponds to the alias name of
the promoted property.
Type type
The type of the property. For
example, STRING.
Value value
The value of the property.
Name name
The name of the reference in the
subflow implementation.
Interface interface
The interface of the reference.
Value value
The name of the reference in the
parent flow to which the reference
in the subflow is mapped.
Introduction
The Synchronous Transaction Rollback primitive has one input terminal (in), one
output terminal (out) and one fail terminal (fail). If you start the mediation flow
using a synchronous invocation style and the Synchronous Transaction Rollback
primitive is called, the current transaction is rolled back, the out terminal is fired,
and flow processing continues. If you start the mediation flow using an
asynchronous invocation style and the Synchronous Transaction Rollback primitive
is called, the fail terminal is fired and an error is shown indicating that the
Synchronous Transaction Rollback primitive has been called in an asynchronous
flow. In this case, the transaction is not rolled back.
Another way of rolling back a transaction in a mediation flow is by using the Fail
primitive, which stops the flow and sends an unmodelled fault back to the client.
With the Synchronous Transaction Rollback primitive, you explicitly roll back the
transaction and flow processing continues.
Use the Synchronous Transaction Rollback primitive if you want to explicitly roll
back the current transaction in a mediation flow, as part of an error handling
routine. For example, if an error has occurred when calling out to a target service,
the Synchronous Transaction Rollback primitive can be used to roll back a change
made to a database before the callout failed, and a modeled fault can be returned
to the client.
You can use the Synchronous Transaction Rollback mediation primitive to roll back
the current transaction under certain conditions. For example, if you wire an
output terminal of a Message Filter mediation primitive to a Synchronous
Transaction Rollback mediation primitive, the transaction is rolled back if the filter
condition occurs.
Introduction
You can use the Trace mediation primitive to develop and debug mediation flows,
and to specify trace messages to be logged to the server logs or to a file. The trace
messages can contain service message object (SMO) message content and other
information about the mediation flow.
The Trace mediation primitive has one input terminal (in), one output terminal
(out) and one fail terminal (fail). The in terminal is wired to accept a message and
the other terminals are wired to propagate a message. The input message triggers
the writing of a trace message and if the tracing is successful, the out terminal
propagates the original message. If an exception occurs during the processing of
the input message, the fail terminal propagates the original message, together with
any exception information.
Usage
You can use the Trace mediation primitive to write your own trace messages to
help with developing and debugging mediation flows.
You can use the Destination property to determine where a trace message is
written: to the server system logs, to the User Trace log file, or to a specified
file.
You can use the Enabled property to determine whether the Trace primitive
performs tracing. The property is promotable, so that tracing can be switched off
at run time.
Enabled enabled:
Destination destination:
User Trace
The trace messages are logged to a UserTrace.log file in the server logs
directory, as specified by the LOG_ROOT WebSphere variable.
File
The trace messages are logged to the file specified by the File property.
Default Local Server Log
Defines the file to which trace messages are logged. The File property is valid
only when the Destination property is File. If an absolute file path is specified,
for example C:\trace\MyTraceFile.txt, the file is created and logged to in the
specified directory. If a relative file path is specified, for example
trace/MyTraceFile.txt, the file is created and logged to in the server logs
directory, as specified by the LOG_ROOT WebSphere variable.
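The absolute-versus-relative resolution described above can be sketched as follows; the LOG_ROOT value is an invented example, and POSIX-style paths are used for simplicity:

```python
# Hypothetical sketch of the File property resolution: an absolute path
# is used as-is, a relative path is resolved against the server logs
# directory (LOG_ROOT). The LOG_ROOT value below is invented.
from pathlib import PurePosixPath

LOG_ROOT = PurePosixPath("/opt/IBM/WebSphere/logs")

def resolve_trace_file(file_property: str) -> str:
    p = PurePosixPath(file_property)
    return str(p if p.is_absolute() else LOG_ROOT / p)

print(resolve_trace_file("/trace/MyTraceFile.txt"))
# -> /trace/MyTraceFile.txt
print(resolve_trace_file("trace/MyTraceFile.txt"))
# -> /opt/IBM/WebSphere/logs/trace/MyTraceFile.txt
```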
Message literal:
When the message is logged, the inserts are replaced by the information as shown
in the following table.
Table 28. Message property inserts
{0} Time Stamp: The UTC time stamp, indicating when the Trace primitive was invoked.
{1} Message ID: The message ID, from the SMO.
{2} Mediation Name: The name of the Trace mediation primitive instance that generated the trace message.
{3} Module Name: The name of the module, containing the Trace mediation primitive instance, that generated the trace message.
{4} Message: The SMO, or part of the SMO, as specified by the Root property XPath.
{5} Version: The version of the SMO.
An XPath 1.0 expression representing the scope of the message and SMO to be
inserted into the trace message at insert {4}. If you specify your own XPath
expression, only the part of the SMO that it selects is inserted. The message to be
logged is converted to XML from the point specified by Root.
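A rough sketch of how the documented inserts {0} to {5} could be substituted into a message; the template layout and every sample value below are invented for illustration:

```python
# Hypothetical sketch: substituting the documented inserts {0}..{5}
# into a trace message template. All values are invented examples.
template = "{0} {2} ({3}) msg={1} v{5}: {4}"

trace = template.format(
    "2011-06-01T12:00:00Z",   # {0} UTC time stamp of the invocation
    "MSG-0001",               # {1} message ID from the SMO
    "TraceCustomer",          # {2} Trace primitive instance name
    "CustomerModule",         # {3} module name
    "<body>...</body>",       # {4} SMO (or part), as selected by Root
    "6.0.1",                  # {5} SMO version
)
print(trace)
```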
Introduction
You can use the Type Filter mediation primitive with XPath expressions to direct
messages down different paths of a flow, based on their type.
The Type Filter mediation primitive has one input terminal (in), one fail terminal
(fail), and multiple output terminals, (one of which is the default terminal). The in
terminal is wired to accept a message and the other terminals are wired to
propagate a message.
Each of the output terminals, apart from the default terminal, is associated with an
XPath expression and a type. The elements of the message identified by the XPath
expressions are compared, in turn, with the associated type. The primitive always
uses the first matching output terminal. The default terminal is used if the
message meets none of the conditions.
If an exception occurs during the filtering, the fail terminal propagates the original
message, together with any exception information.
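The first-match routing rule can be sketched as follows; the message layout, type names, and terminal names are illustrative assumptions, not the product's real SMO model:

```python
# Hypothetical sketch of the Type Filter first-match rule: each output
# terminal pairs an XPath-selected element with an expected type; the
# first filter whose element matches wins, else the default terminal.
filters = [
    ("/body/order", "PremiumOrder", "premiumOut"),
    ("/body/order", "StandardOrder", "standardOut"),
]

def route(message: dict) -> str:
    for xpath, expected_type, terminal in filters:
        if message.get(xpath) == expected_type:
            return terminal          # first matching terminal wins
    return "default"                 # message meets none of the conditions

print(route({"/body/order": "StandardOrder"}))  # -> standardOut
print(route({"/body/order": "Unknown"}))        # -> default
```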
You can use the Type Filter mediation primitive to check that the inbound message
has elements of a specific type. If the criterion is not met you can raise a fault
using the Fail mediation primitive, or send an error response.
Enabled enabled:
Filters filters:
A list of XPaths, types, and associated terminal names that define the filtering
performed by the mediation primitive.
Element xpath
An XPath 1.0 expression against
which the message is tested. The
expression is evaluated starting
from the XPath expression /, which
refers to the complete SMO.
Type type
The qualified type to be matched.
Introduction
You can use the UDDI Endpoint Lookup mediation primitive to retrieve service
endpoint information from a UDDI version 3 registry. The service endpoint
information relates directly to Web services.
To use the UDDI Endpoint Lookup mediation primitive you might need to add
service endpoint information to your registry.
The UDDI Endpoint Lookup mediation primitive lets you retrieve service endpoint
information that relates to the following:
v Web services using SOAP/HTTP.
v Web services using SOAP/JMS.
When the UDDI Endpoint Lookup mediation primitive receives a message it sends
a search query to the registry. The search query is constructed using the UDDI
properties that you specify and the query might return nothing, or it might return
one or more service endpoints. You can choose whether to be informed of all
endpoints that match your query, or just one endpoint that matches your query.
The UDDI Endpoint Lookup mediation primitive has one input terminal (in), two
output terminals (out and noMatch), and a fail terminal (fail). The in terminal is
wired to accept a message and the other terminals are wired to propagate a
message. If an exception occurs during the processing of the input message, the
fail terminal propagates the input message, together with any exception
Note: For the run time to implement dynamic routing on a request, you must set
the Use dynamic endpoint if set in the message header property in the callout
node or Service Invoke mediation primitive. You can specify a default endpoint
that the run time uses if it cannot find a dynamic endpoint. You specify a default
endpoint by wiring an import that has the default service selected.
Details
The UDDI Endpoint Lookup mediation primitive uses the Endpoint Reference
structure defined by the WS-Addressing specification. For more information, see:
http://schemas.xmlsoap.org/ws/2004/08/addressing.
Updates made to the SMO by the UDDI Endpoint Lookup mediation primitive
depend on the success of the registry query (that is, whether matches are found)
and on the match policy.
The UDDI Endpoint Lookup mediation primitive can make updates to both the
SMO context (the primitiveContext element) and to SMO headers:
v /headers/SMOHeader/Target/address.
– Can contain the address of a service to call dynamically (the dynamic callout
address).
v /context/primitiveContext/EndpointLookupContext.
– Can contain the results of the UDDI query.
v /headers/SMOHeader/AlternateTarget
– Can contain a list of alternate service addresses. For more information on the
retry function, see: Combining multiple services.
If the UDDI Endpoint Lookup mediation primitive updates the SMO with one or
more endpoint addresses, it will also update the SMO so that each endpoint
address has an associated bindingType. The bindingType set by the UDDI Endpoint
Lookup mediation primitive is WebService.
To define the UDDI servers and specify the connection information: in the
Administrative console, navigate to Service integration > Web services > UDDI
References.
Usage
You can use the UDDI Endpoint Lookup mediation primitive, together with other
mediation primitives, to add security to dynamic routing. For example, you could
use the UDDI, Message Filter and XSLT mediation primitives to check whether an
endpoint was external or internal, and remove any internal information from
public messages. To do this you might:
1. Wire the matching output terminal of the UDDI Endpoint Lookup mediation
primitive to the input terminal of the Message Filter mediation primitive.
2. Use the Message Filter mediation primitive to check whether the URL was
internal or external, and route external messages to the XSLT mediation
primitive (by wiring one of the Message Filter output terminals to the XSLT
mediation primitive).
SOAP/HTTP example
The URI format in the case of an export with a Web service binding, is as follows:
http://<host>:<port>/<moduleName>/sca/<exportName>
The URI format in the general Web service case, (when a Web service is not
implemented by an export with a Web service binding), is as follows:
http://<host>:<port>/<service>
SOAP/JMS example
The URI format in the case of an export with a Web service binding, is as follows:
jms:/queue?destination=jms/WSjmsExport&connectionFactory=jms/WSjmsExportQCF&targetService=WSjmsExport_ServiceBJmsPort
The URI format in the general Web service case, (when a Web service is not
implemented by an export with a Web service binding), is as follows:
jms:/queue?destination=<destName>&connectionFactory=<factory>&targetService=<service>
<soap:address location="jms:/queue?destination=jms/SOAP_JMSExport&amp;connectionFactory=jms/SOAP_JMSExportQCF&amp;targetService=SOAP_JMSExport_BigEchoJmsPort"/> </wsdl:port>
</wsdl:service>
</wsdl:definitions>
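Under the assumption that the query string of such a URI follows normal URL conventions, its parameters can be pulled apart with standard library routines, as this sketch shows:

```python
# Sketch: extracting the destination, connection factory, and target
# service from a SOAP/JMS URI of the general form documented above.
from urllib.parse import parse_qs, urlparse

uri = ("jms:/queue?destination=jms/WSjmsExport"
       "&connectionFactory=jms/WSjmsExportQCF"
       "&targetService=WSjmsExport_ServiceBJmsPort")

parsed = urlparse(uri)                 # scheme 'jms', path '/queue'
params = {k: v[0] for k, v in parse_qs(parsed.query).items()}

print(params["destination"])        # -> jms/WSjmsExport
print(params["targetService"])      # -> WSjmsExport_ServiceBJmsPort
```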
Determines how many service endpoints should be added to the message if the
registry has more than one service matching your query.
If no match is found during a registry query, regardless of the match policy, the
noMatch output terminal is fired and the original input SMO is propagated.
The following table summarizes the effect of the match policy on the SMO
elements:
Required Yes
routing target
v If the registry query returns matches, the following occurs:
– The dynamic callout address, in the SMO header, is updated with one service address from the results returned.
– The SMO context is updated with registry information relating to the address in the dynamic callout address.
– The alternate targets list, in the SMO header, is cleared.
The name of the technical model to search for. This property is optional, with a
valid type of String.
The maximum number of TModels that UDDI can check a given service for is set
by a configuration parameter; the default is 5, and the upper limit is 64.
A list of find qualifiers (as specified in the UDDI specification). This property is
optional.
Considerations:
v If the Use dynamic endpoint if set in the message header property is not set
in the callout node, the run time does not use the dynamic endpoint in
/headers/SMOHeader/Target/address. In this case, the run time uses the default
endpoint if there is one, or throws an error.
v The preceding information applies only to UDDI servers that support the UDDI
version 3 specification.
Introduction
You can use the XSL Transformation mediation primitive to transform messages
using XSL transformations.
When you are integrating services, you often need to transform data into a format
that the receiving service can process; the XSL Transformation mediation primitive
lets you transform one message type into a different message type.
The XSL Transformation primitive has one input terminal (in), one output terminal
(out), and a fail terminal (fail). The in terminal is wired to accept a message and
the other terminals are wired to propagate a message. The input message triggers a
transformation and if the transformation is successful, the out terminal propagates
the modified message. If an exception occurs during the transformation, the fail
terminal propagates the original message, together with any exception information
contained in the failInfo element.
Details
The XSL Transformation primitive gives you a simple mechanism for manipulating
messages, using an XSLT 1.0 transformation. You can change the headers, context,
or body of the SMO by mapping between the input and output message.
When you create an XML map you specify the message root (an XPath 1.0
expression), which for mediation flows can refer to the following locations in the
SMO: /, /headers, /context or /body. The message root specifies the root of the
transformations, and applies to both input messages and output messages. If the
message root is /, the transformation applies to the whole SMO.
If you use the XML mapping editor then, after the mapping has been created, an
XSL stylesheet is generated to perform the transformation at run time. If you wire
the input and output terminals of the XSL Transformation primitive before using
the XML mapping editor, the input and output message types are already entered
for you.
You do not have to use the XML mapping editor. Alternatively, you can use an
existing XSL stylesheet to perform your transformation. The stylesheet must exist
in the mediation module project directory before you can select it.
Migration
Usage
If you need to connect mediation primitives whose message types are different,
you can use the XSL Transformation mediation primitive to transform the message
type.
The XSL Transformation mediation primitive can be useful if you want to:
v Manipulate data, before or after the Database Lookup mediation primitive is
invoked.
v Copy the response from the Service Invoke mediation primitive into the shared
context.
v Create a new message body, using data in the shared context, after the Fan In
mediation primitive.
You can transform messages using either the XSL Transformation primitive or the
Business Object Map mediation primitive. The key difference is that the XSL
Transformation primitive performs transformations in XML, using a stylesheet,
whereas the Business Object Map primitive performs transformations on business
objects, using Service Data Objects (SDO). If you have existing XML maps, or XSL
stylesheets, you might be able to reuse them with the XSL Transformation
primitive; and if you have existing business object maps you might be able to
reuse them with the Business Object Map primitive. Some kinds of transformation
are easier to perform in XSL, and others using a business object map.
An XPath 1.0 expression that specifies the root of the transformation. This property
is used for both the input message and the transformed message. When you create
a new XML map, you can specify the following message roots: /, /headers,
/context or /body.
If you select /, /headers or /context as the root, you need to explicitly map all the
SMO sections, using the XML mapping editor. Otherwise, you might get errors at
run time. If you do not need to change any information in the headers or contexts
sections of the SMO, you can use /body as the mapping starting point.
XSLTransform:
Specifies the name of the XML mapping file, or the XSL stylesheet, that the
mediation primitive uses. You can choose either an XML mapping file, with an
associated XSL stylesheet, or an XSL stylesheet on its own.
You can browse existing XML mapping files or create a new mapping file. An XML
mapping file has a generated XSL stylesheet that performs the transformation at
run time.
You can browse existing XSL stylesheets, if they exist in the same project as the
mediation module.
If you want to override the Mapping file value dynamically, at run time, you must
promote the Mapping file property. (However, you must still provide a default
value for the Mapping file property.) The override value must resolve to an XSL
stylesheet. The stylesheet can be specified as a path to a resource in the project; for
example, xslt/TransformCustomer.xsl. Alternatively, the stylesheet can be
specified as a URL; for example, http://myserver.com/customerstylesheet.xsl or
file://c:/customerstylesheet.xsl.
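The documented distinction between a project resource path and a URL override can be sketched with a simple classifier; this is not the product's actual resolution code, just an illustration of the two documented forms:

```python
# Hypothetical sketch: a promoted Mapping file override may be a
# project resource path or a URL; this classifier mirrors the
# documented examples only.
def classify_stylesheet_ref(value: str) -> str:
    if value.startswith(("http://", "https://", "file://")):
        return "url"
    return "project-resource"

print(classify_stylesheet_ref("xslt/TransformCustomer.xsl"))
# -> project-resource
print(classify_stylesheet_ref("http://myserver.com/customerstylesheet.xsl"))
# -> url
```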
If true, the input message is validated at run time, before the mediation is
performed.
SMOVersion:
Valid values SMO60
Default SMO60
Considerations:
v If the Mapping file property is not valid, it causes an exception at run time.
v If the Validate input property is true, the input message is validated against its
schema, at run time. If the input message does not match its schema, an
exception occurs.
Imports and exports have associated bindings that define the communication
mechanism (for example, Web service bindings [SOAP/HTTP or SOAP/JMS]) and
configuration that provides the details of the transport connection and the format
of messages that flow on that connection.
Data bindings and data handlers are associated with import and export bindings to
allow the message format to be configured.
This section includes some specific topics regarding the use of import and export
bindings and data bindings and data handlers.
Messaging bindings
WebSphere ESB supports a number of messaging bindings, which are used for
interoperability with various messaging systems.
Note: When you install an SCA module containing either WebSphere MQ JMS
bindings or WebSphere MQ bindings, the run time is automatically configured to
Dynamic invocation
WebSphere ESB supports re-routing of messages, by dynamic override of
statically-defined endpoints or dynamic invocation using a target import.
WebSphere ESB also supports dynamic redirection of response messages.
For some applications, you can override or change some of these static values at
run time. You might do this dynamically by overriding a value specified for an
endpoint address. Alternatively, you can select a new target import. In each case,
the message flow changes according to the information in the message.
For example, you can use Integration Designer to create bindings that contain
endpoint information specifying the location of a remote service. This static
endpoint information can be overridden dynamically by information carried in the
message. The dynamic information might specify a different endpoint for the
message. You can access the endpoint using one of several supported bindings,
including Web service, HTTP, Java Message Service (JMS), and WebSphere MQ.
There are three main ways that dynamic invocation takes place:
v Dynamic override of a static endpoint, in which a service is invoked using any
supported import binding. The service is available at a different endpoint from
the endpoints originally specified when the mediation module was created and
deployed.
v Dynamic invocation with a target import, in which a service is invoked using
any supported import binding. The binding must already be available within the
mediation module. It is selected at run time according to information contained
in the message.
v Pure dynamic invocation, in which a service is invoked without needing an
import in the mediation module. No additional information is required apart
from the details in the message.
Any response to a dynamic invocation returns using the same route as the
dynamic invocation. It is not possible to dynamically override the routing for the
response to an outbound dynamic invocation. However, it is possible to dynamically
override the routing for response messages returned by way of the Web service
(JAX-WS) export binding.
Introduction
This capability is used when the service is available from a different endpoint than
originally specified when the mediation module was created and deployed.
You can use Integration Designer to create a mediation module that selects the
target service dynamically at run time. This module includes an import associated
with the reference used to make the dynamic invocation. The import has a binding
configuration suitable for a set of target services which all use the same protocol
and accept messages which use the same data format. The target service is selected
from the alternate targets by applying appropriate qualifiers to the reference and
import. For example, the target service could be selected from a number of MQ
queues, each providing a related service. The choice of target to invoke is made
dynamically using available metadata provided in the message.
Any response from the new service target returns through the normal response
flow process, back to the original caller of the export.
Service endpoint information can be stored in any easily accessed form, such as
WSRR or a database. For example, non-Web service or SCA endpoints could be
stored in WSRR as standalone endpoints, not requiring WSDL. The service
endpoints could be added to WSRR manually, or automatically published from
SCA exports in deployed modules. Within mediation flow components, endpoints
could be obtained using the Service Registry or Database Lookup mediation
primitives, or set using MFC capabilities in the SMO.
For SCA endpoints, the choice of which endpoint to use is set in the EPR used to
make the invocation. In this case it is not possible to specify alternate targets.
Examples
Figure 23. Illustration of dynamic invocation (Mediation Module 1, containing a POJO
wired to an Import, invoking Service Provider 1 or Service Provider 2)
Figure 23 shows how dynamic invocation works when the POJO in Mediation
Module 1 receives a message. The message is sent to Service Provider 1 through
the existing static endpoint configured when the mediation module was developed
and deployed. Optionally, the incoming message contains information that sends
the message to a different endpoint. In Figure 23, the message could be sent to
Service Provider 2 by using an optional Endpoint Reference object. The POJO
extracts the endpoint information from the message and identifies the endpoint at
Service Provider 2, rather than the endpoint at Service Provider 1 as specified in
the original deployment. The POJO uses the SCA Endpoint Reference API, and the
reference wired to the Import, to dynamically invoke the remote service identified
by the endpoint in the message.
A new target address is set for the message using the incoming message content
and routing information. The Mediation Flow Component sets the new address by
changing SMO values, using one or more of several possible mediation primitives,
such as Database Lookup, Message Element Setter, Business Object Map, or XSL
Transformation.
Details about the EPR can be obtained by retrieving information about the
endpoint from a suitable storage location, such as WSRR or a database. This
information is used to set EPR details for the message using the SCA Endpoint
Reference API. For example, a POJO can update the endpoint address using code
similar to the following:
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setBindingType(EndpointReference.BINDING_TYPE_JMS);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
A target import can be set directly using the SCA Endpoint Reference API. If the
dynamic override requires a target import, it can be specified by including code
that provides the name of the module import. For example, a POJO can set a target
import using code similar to the following:
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setImport("this/is/the/name/of/the/import");
epr.setAddress(uri);
epr.setBindingType(EndpointReference.BINDING_TYPE_JMS);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
An Endpoint Reference binding type value ensures that the correct dynamic
invocation takes place. Possible values for the binding type include:
EndpointReference.BINDING_TYPE_NOT_SET
EndpointReference.BINDING_TYPE_JMS
EndpointReference.BINDING_TYPE_MQJMS
EndpointReference.BINDING_TYPE_GENERIC_JMS
EndpointReference.BINDING_TYPE_MQ
EndpointReference.BINDING_TYPE_WEB_SERVICE
EndpointReference.BINDING_TYPE_HTTP
EndpointReference.BINDING_TYPE_SCA
EndpointReference.BINDING_TYPE_EIS
EndpointReference.BINDING_TYPE_WEB_SERVICE_SOAP_1_1
EndpointReference.BINDING_TYPE_WEB_SERVICE_SOAP_1_2
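As an illustration only, a helper might choose one of these constant names from the scheme of a target URI. The scheme-to-constant mapping below is an assumption made for this sketch (the product itself requires you to set the binding type explicitly, for example to distinguish HTTP from SOAP/HTTP), not product behavior:

```java
// Illustrative sketch only: map a target URI scheme to the name of an
// EndpointReference binding-type constant. The mapping is an assumption
// made for this example and is not defined by the product.
public class BindingTypeChooser {

    public static String constantNameFor(String uri) {
        if (uri == null) return "BINDING_TYPE_NOT_SET";
        if (uri.startsWith("jms:")) return "BINDING_TYPE_JMS";
        if (uri.startsWith("jca:")) return "BINDING_TYPE_EIS";
        if (uri.startsWith("sca:")) return "BINDING_TYPE_SCA";
        if (uri.startsWith("http:") || uri.startsWith("https:")) {
            return "BINDING_TYPE_HTTP";
        }
        return "BINDING_TYPE_NOT_SET";
    }
}
```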
Introduction
You can invoke services using EIS import bindings with endpoints that are
different from those specified in the import. For EIS bindings where the import is
connected to JDBC adapters, you can specify a dynamic endpoint using a URI that
represents the JNDI name of a connection factory.
Figure 25. Illustration of endpoint override by dynamic invocation, with wired import
You can create a mediation module that includes the dynamic endpoint using
Integration Designer.
You can use the SCA public API to override the endpoint address. In the following
example code, the uri value must conform to the JCA URI standard.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setBindingType(bindingType);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
An SCA endpoint reference is created. The endpoint reference stores the JNDI for
the connection factory that generates the dynamic endpoint. The SCA endpoint
reference is stored in the Endpoint Reference in the SCA message. When an SCA
message is received, the EIS Import handler identifies the Endpoint Reference in
the message and uses this to find the JNDI of the connection factory that generates
a target address. The EIS binding uses the connection factory to obtain the target
address of the dynamic endpoint. If a target address is found, the message is sent
there. If no target address is found, the message is sent to the original endpoint.
scheme
The scheme for JCA URI is always jca.
jca-variant
The jca-variant provides more information about the JCA connection and is
always jndi.
jndiName
This identifies the JCA connection factory providing the dynamic target
address for a message.
jca:jndi:dynamicTestJNDI
This URI tells the EIS Import handler to look for a connection factory defined in
JNDI as dynamicTestJNDI.
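The jca:jndi:<jndiName> form can be made concrete with a small parser. The class below is hypothetical, written only to illustrate the URI structure; it is not part of the product API.

```java
// Hypothetical parser for the jca:jndi:<jndiName> URI form described above;
// not part of the product API.
public class JcaUri {
    private static final String PREFIX = "jca:jndi:";

    // Returns the JNDI name of the connection factory providing the dynamic
    // target address, or null if the value does not have the jca:jndi: form.
    public static String jndiName(String uri) {
        if (uri == null || !uri.startsWith(PREFIX)) return null;
        String name = uri.substring(PREFIX.length());
        return name.isEmpty() ? null : name;
    }
}
```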
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing a POJO wiring to Import.
2. Check that the Import is an SCA-JCA import, statically configured with an
interaction specification and a connection specification.
3. Configure the Import to route messages to Service Provider 1, which, in this
case, would be an Enterprise Information System (EIS).
4. Implement the POJO component to use the Endpoint Reference API to specify a
URL identifying Service Provider 2 (a second EIS).
5. Deploy the module to the server.
Dynamic invocation for JDBC adapters takes place when the POJO is invoked with
JNDI names for the connection factory, the new interaction specification, and the
new connection specification. The POJO extracts the endpoint information and
connection attributes from the message. The POJO uses the SCA Endpoint
Reference API and the reference wired to the Import to invoke the remote
service. Any response is returned by the response flow to the POJO.
Instead of using JDBC, the adapters might be for CICS, IMS, or SAP. The adapters
that are connected to the import must have the same portType. For example, it is
an error to use JDBC for Service Provider 1 (the first EIS) at the same time as using
CICS for Service Provider 2 (the second EIS).
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using EIS import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For EIS bindings where the import is connected to JDBC adapters, you
can specify a dynamic endpoint using a URL that represents the JNDI name of a
connection factory.
Figure 26. Illustration of endpoint override by dynamic invocation, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
You can use the SCA public API to override the import name and endpoint
address. In the following example code, the uri value must conform to the JCA
URI standard. The import name identifies an import with a JCA binding in the
same SCA module.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setBindingType(bindingType);
epr.setImport("Import1");
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
Identify the correct endpoint type by adding a binding type attribute to the
endpoint reference.
An SCA endpoint reference is created. The endpoint reference stores the JNDI for
the connection factory that generates the dynamic endpoint. The SCA endpoint
reference is stored in the Endpoint Reference in the SCA message. When an SCA
message is received, the EIS Import handler identifies the Endpoint Reference in
the message and uses this to find the JNDI of the connection factory that generates
a target address. The EIS binding uses the connection factory to obtain the target
address of the dynamic endpoint. If a target address is found, the message is sent
there. If no target address is found, the message is sent to the original endpoint.
scheme
The scheme for JCA URI is always jca.
jca-variant
The jca-variant provides more information about the JCA connection and is
always jndi.
jndiName
Identifies the connection factory providing the dynamic target address for
a message.
jca:jndi:dynamicTestJNDIUnwired
This URI tells the EIS Import handler to look for a connection factory defined in
JNDI as dynamicTestJNDIUnwired.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing a POJO and an unwired Import.
2. Check that the Import is an SCA-JCA import, statically configured with an
interaction specification and a connection specification, but without any
connection factory settings.
Dynamic invocation for JDBC adapters takes place when the POJO is invoked with
JNDI names for the connection factory, the new interaction specification, and the
new connection specification. The POJO extracts the endpoint information and the
connection attributes from the message. The POJO uses the SCA EPR API and the
reference wired to the Import to invoke the remote service. The interaction
specification and connection specification information is retained in the EPR
properties table for later re-use. Any response is returned by the response flow to
the POJO.
Instead of using JDBC, the adapters might be for CICS, IMS, or SAP. The adapters
that are connected to the import must have the same portType. For example, it is
an error to use JDBC for Service Provider 1 (which, in this case, is an EIS) at the
same time as using CICS for Service Provider 2 (a second EIS system).
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using endpoints that are different from those specified in
the import. For HTTP bindings, you can specify a dynamic endpoint using a URI
that conforms to the HTTP URI standard.
Figure 27. Illustration of endpoint override by dynamic invocation, with wired import
You can create a mediation module that includes the dynamic endpoint by
performing tasks in Integration Designer.
You can use the SCA public API to override the endpoint address. In the following
example code, the uri value must conform to the HTTP URI standard.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setBindingType(bindingType);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
DataObject customer = createCustomer(refname, "twoway", uri);
The HTTP URI has the same prefix as a Web service SOAP/HTTP endpoint
address. Identify the correct endpoint type by adding a binding type attribute to
the endpoint reference. If you do not specify the binding type attribute for the
HTTP URI, the address is interpreted as a SOAP/HTTP endpoint, even when the
endpoint reference is wired to an HTTP import.
The HTTP endpoint used in the dynamic invocation is structured according to the
HTTP URI standard.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing a POJO wiring to Import.
2. Configure the Import to route messages to Export 2.
3. Implement the POJO component to use the Endpoint Reference API to specify a
URL identifying Service Provider 2.
4. Deploy the module to the server.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using HTTP import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For HTTP bindings, you can specify a dynamic endpoint using a URI
that conforms to the HTTP URI standard.
Figure 28. Illustration of endpoint override by dynamic invocation, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
You can use the SCA public API to override the import name and endpoint
address. In the following example code, the uri value must conform to the HTTP
URI standard. The import name identifies an import with an HTTP binding in the
same SCA module.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setImport(importName);
The HTTP URI has the same prefix as a Web service SOAP/HTTP endpoint
address. Identify the correct endpoint type by adding a binding type attribute to
the endpoint reference. If you do not specify the binding type attribute for the
HTTP URI, the address is interpreted as a SOAP/HTTP endpoint, even when the
endpoint reference is wired to an HTTP import.
The HTTP endpoint used in the dynamic invocation is structured according to the
HTTP URI standard.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing a POJO and Import that are not wired
together.
2. Configure the Import to route messages to Export 2.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For JMS bindings, you can specify a dynamic endpoint using a URL
that conforms to the JMS URI standard.
Figure 29. Illustration of endpoint override by dynamic invocation, with wired import
You create a mediation module that includes the dynamic endpoint by performing
tasks in Integration Designer.
You can use the SCA public API to override the endpoint address. In the following
example code, the uri value must conform to the JMS URI standard.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setBindingType(bindingType);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
The JMS URI has the same prefix as a Web service SOAP/JMS endpoint address.
Identify the correct endpoint type by adding a binding type attribute to the
endpoint reference. If you do not specify the binding type attribute for the JMS
URI, the address is interpreted as a SOAP/JMS endpoint, unless the endpoint
reference is wired to a JMS import.
The JMS endpoint used in the dynamic invocation is structured according to the
JMS URI standard.
In summary, the standard requires that JMS URIs have the form:
jms:<jms-variant>:<jms-dest>?<parameter>
scheme
The scheme for a JMS URI is always jms.
jms-variant
The jms-variant provides more information about the JMS connection (for
example, by using the variant jndi).
jms-dest
This identifies the JMS destination object and should correspond to the
jms-variant.
parameter
Parameter is a key-value pair separated by "=". The only key supported is
jndiConnectionFactoryName.
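The structure can be made concrete with a small parser for the jms:jndi form used elsewhere in this section (for example, jms:jndi:MyTargetQueueName?jndiConnectionFactoryName=MyConnectionFactoryName). The class below is hypothetical and written only for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical parser for a jms:<jms-variant>:<jms-dest>?key=value URI;
// written only to illustrate the structure described above, not a product API.
public class JmsUri {
    public final String variant;      // for example, jndi
    public final String destination;  // the JMS destination object
    public final Map<String, String> parameters = new HashMap<>();

    public JmsUri(String uri) {
        if (uri == null || !uri.startsWith("jms:")) {
            throw new IllegalArgumentException("not a JMS URI: " + uri);
        }
        String rest = uri.substring("jms:".length());
        int colon = rest.indexOf(':');
        if (colon < 0) {
            throw new IllegalArgumentException("missing jms-variant: " + uri);
        }
        variant = rest.substring(0, colon);
        rest = rest.substring(colon + 1);
        int query = rest.indexOf('?');
        destination = (query < 0) ? rest : rest.substring(0, query);
        if (query >= 0) {
            for (String pair : rest.substring(query + 1).split("&")) {
                int eq = pair.indexOf('=');
                parameters.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
    }
}
```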
Managing security
The JMS Connection Factory uses application-managed security. It does not use
container-managed security. This means that you must set the component-managed
authentication alias.
The input name for the send destination and the connection factory must already
be defined in the server.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Module 1, containing a POJO wiring to Import.
2. Configure the Import to route messages to Export 2.
Dynamic invocation takes place when the POJO is invoked with Export 3
identified as the endpoint in the message. The POJO extracts the endpoint from the
message and identifies Export 3 as the endpoint, rather than the Export 2 endpoint
specified in the original deployment. The POJO uses the SCA Endpoint Reference
API, and the reference wired to a JMS Import, to invoke the remote service
specified by the endpoint in the message. The remote service is invoked using the
wired JMS Import. After the service is invoked, a response is returned to the POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using JMS import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For JMS bindings, a dynamic endpoint can be specified using a URI that
conforms to the JMS URI standard.
Figure 30. Illustration of endpoint override by dynamic invocation, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
All the endpoints must use the same import binding configuration. The POJO
identifies the required import in the endpoint reference (EPR) and then uses SCA
to wire to a compatible import.
You can use the SCA public API to override the import name and endpoint
address. In the following example code, the uri value must conform to the JMS
URI standard. The import name identifies an import with a JMS binding in the
same SCA module.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setImport(importName);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
The JMS URI has the same prefix as a Web service SOAP/JMS endpoint address.
Identify the correct endpoint type by adding a binding type attribute to the
endpoint reference. If you do not specify the binding type attribute for the JMS
URI, the address is interpreted as a SOAP/JMS endpoint, unless the endpoint
reference is wired to a JMS import.
The JMS endpoint used in the dynamic invocation is structured according to the
JMS URI standard.
In summary, the standard requires that JMS URIs have the form:
jms:<jms-variant>:<jms-dest>?<parameter>
scheme
The scheme for a JMS URI is always jms.
jms-variant
The jms-variant provides more information about the JMS connection (for
example, by using the variant jndi).
Managing security
The JMS Connection Factory uses application-managed security. It does not use
container-managed security. This means that you must set the component-managed
authentication alias.
The input name for the send destination and the connection factory must already
be defined in the server.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Module 1, containing a POJO and an unwired Import.
2. Check that the Import is configured to route messages to Export 2.
Dynamic invocation takes place when the POJO is invoked with the target import
and Export 3 identified as the endpoint in the message. The POJO uses SCA EPR
API to resolve the Import. The POJO extracts the endpoint from the message and
identifies Export 3 as the endpoint, rather than the Export 2 endpoint specified in
the original Import. The POJO invokes the remote service specified by the
endpoint in the message through the target import specified in the message. After
the service is invoked, a response is returned to the POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For MQ bindings, you can specify a dynamic endpoint by using a
URI that conforms to the MQ URI standard.
Figure 31. Illustration of endpoint override by dynamic invocation, with wired import
You can create a mediation module that includes the dynamic endpoint by
performing tasks in Integration Designer.
You can use the SCA public API to override the endpoint address. In the following
example code, the uri value must conform to the MQ URI standard.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
For dynamic invocation, the MQ endpoint must have a URI with the form:
In each URI, the queueName and optional destination queue manager qmgr override
the destination queue specified on the import binding. The hostname, port, and
connectQueueManager override the connection information on the import binding.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Module 1, containing a POJO wiring to Import.
2. Configure the Import to route messages to Export 2.
Dynamic invocation takes place when the POJO is invoked with Export 3
identified as the endpoint in the message. The POJO extracts the endpoint from the
message and identifies Export 3 as the endpoint, rather than the Export 2 endpoint
specified in the original deployment. The POJO uses the SCA EPR API, and the
reference wired to the Import, to invoke the remote service specified by the
endpoint in the message. After the service is invoked, a response is returned to the
POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
If the POJO is invoked with an empty or missing argument for the endpoint, the
default invocation is used, calling Export 2 and POJO 2.
Introduction
You can invoke services using MQ import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For MQ bindings, you can specify a dynamic endpoint by using a URI
that conforms to an MQ URI standard.
Figure 32. Illustration of endpoint override by dynamic invocation, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
All the endpoints must use the same import binding configuration. The POJO
identifies the required import in the EPR, and then uses SCA to wire to a compatible
import.
You can use the SCA public API to override the import name and endpoint
address. In the following example code, the uri value must conform to the MQ
URI standard. The import name identifies an import with an MQ binding in the
same SCA module.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setImport(importName);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
For dynamic invocation, the MQ endpoint must have a URI with the form:
In each URI, the queueName and optional destination queue manager qmgr override
the destination queue specified on the import binding. The hostname, port, and
connectQueueManager override the connection information on the import binding.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Module 1, containing a POJO and an unwired Import.
2. Configure the Import to route messages to Export 2.
Dynamic invocation takes place when the POJO is invoked with the target import
and Export 3 identified as the endpoint in the message. The POJO uses SCA EPR
API to resolve the Import. The POJO extracts the endpoint from the message and
identifies Export 3 as the endpoint, rather than the Export 2 endpoint specified in
the original Import. The POJO invokes the remote service specified by the
endpoint in the message through the target import specified in the message. After
the service is invoked, a response is returned to the POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using endpoints that are different from those specified in
the import. For Web Service bindings, you can specify a dynamic endpoint using a
URL that conforms to the Web services URI standard.
Figure 33. Illustration of endpoint override by dynamic invocation, with wired import
You can create a mediation module that includes the dynamic endpoint by
performing tasks in Integration Designer.
You can use the SCA public API to override the endpoint address. In the following
example code, the uri value must be in a valid Web services format.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
Dynamic invocation takes place when the POJO is invoked with Export 3
identified as the endpoint in the message. The POJO extracts the endpoint from the
message and identifies Export 3 as the endpoint, rather than the Export 2 endpoint
specified in the original deployment. The POJO uses the SCA Endpoint Reference
API, and the reference wired to the Import, to invoke the remote service specified
by the endpoint in the message. After the service is invoked, a response is returned
to the POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using Web service import bindings that are not directly
wired to your component, using endpoints that are not those specified in the
import bindings. For Web Service bindings, you can specify a dynamic endpoint
using a URL that conforms to the Web services URI standard.
Figure 34. Illustration of endpoint override by dynamic invocation, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
You can use the SCA public API to override the import name and endpoint
address. In the following example code, the uri value must be in a valid Web
services format. The import name identifies an import with a Web service binding
in the same SCA module.
EndpointReference epr = EndpointReferenceFactory.INSTANCE.createEndpointReference();
epr.setAddress(uri);
epr.setImport(importName);
Service dynamicService = (Service) ServiceManager.INSTANCE.getService(refname, epr);
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing a POJO and Import that are not wired
together
2. Configure the Import to route messages to Export 2.
Dynamic invocation takes place when the POJO is invoked with Export 3
identified as the endpoint in the message. The POJO extracts the endpoint from the
message and identifies Export 3 as the endpoint, rather than the Export 2 endpoint
specified in the original deployment. The POJO uses the SCA Endpoint Reference
API, and a reference that is not wired to the Import, to invoke the remote
service specified by the endpoint in the message. After the service is invoked, a response
is returned to the POJO.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
The SMO structure enables you to override the endpoint address, or to use a
target import, by using the following fields:
/headers/SMOHeader/Target/address
/headers/SMOHeader/Target/@bindingType
/headers/SMOHeader/Target/@import
/headers/SMOHeader/AlternateTarget/address
/headers/SMOHeader/AlternateTarget/@bindingType
/headers/SMOHeader/AlternateTarget/@import
address
The address field contains the dynamic invocation target service URI for
requests, or the dynamic reply address URI for responses if a Web service
(JAX-WS) binding is being used.
@bindingType
When requests are being routed the bindingType field provides more
details about the URI, indicating the type of binding used during a
dynamic invocation.
@import
When requests are being routed the import field provides the name of a
target import to be used for dynamic invocation.
You can set each of these fields or leave them unchanged, depending on the
dynamic invocation you require. The fields can be inspected and set using
appropriate mediation primitives. For example, you can dynamically override the
endpoint address of a message with a service identified by a JMS-based URI, by
setting the following values for the Target/address and Target/@bindingType
fields:
Target/address
jms:jndi:MyTargetQueueName?jndiConnectionFactoryName=MyConnectionFactoryName
Target/@bindingType
JMS
Set the Target/@import field to determine which import instance to use for the
service invocation. If you use the field /headers/SMOHeader/Target/@import, you
do not have to use the Custom mediation primitive, or create multiple references
and wires for each of the import bindings.
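As a hedged sketch, the Target/address value shown above might be assembled as follows. The class and method names here are illustrative only and are not part of the product API:

```java
// Illustrative helper only: builds a jms:jndi URI of the form used for the
// Target/address field. Not part of the WebSphere ESB API.
public class JmsTargetAddress {
    public static String buildJmsTargetAddress(String queueJndiName,
                                               String connectionFactoryJndiName) {
        StringBuilder uri = new StringBuilder("jms:jndi:").append(queueJndiName);
        if (connectionFactoryJndiName != null) {
            // Optional parameter naming the JNDI connection factory
            uri.append("?jndiConnectionFactoryName=").append(connectionFactoryJndiName);
        }
        return uri.toString();
    }
}
```

Setting this string in Target/address, together with Target/@bindingType set to JMS, directs the dynamic invocation to the JMS destination.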
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For EIS bindings, you can specify a dynamic endpoint by using a
URL that represents the JNDI name of a connection factory.
Figure 35. Illustration of endpoint override by dynamic invocation using SMO, with wired import
You create a mediation module that includes the dynamic endpoint using
Integration Designer.
scheme
The scheme for JCA URI is always jca.
jca-variant
The jca-variant provides more information about the JCA connection and is
always jndi.
jca-connectionFactory
This identifies the JCA connection factory providing the dynamic target
address for a message. Parameters can be added to help specify the target
address returned by the connection factory.
jca:jndi:SAPConn
This URI tells the EIS Import handler to look for a connection factory defined in
JNDI as SAPConn.
The EIS binding uses the connection factory to obtain the target address of the
dynamic endpoint. If a target address is found, the message is sent there. If no
target address is found, the message is sent to the original endpoint.
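As a hedged sketch, a jca:jndi URI such as jca:jndi:SAPConn can be decomposed into the three components described above. The parser class is illustrative, not the product API, and it omits the optional parameters mentioned above:

```java
// Illustrative parser only: splits a URI such as jca:jndi:SAPConn into
// scheme, jca-variant, and jca-connectionFactory. Optional parameters
// are not handled in this sketch.
public class JcaUriParser {
    public static String[] parse(String uri) {
        String[] parts = uri.split(":", 3); // scheme : jca-variant : jca-connectionFactory
        if (parts.length != 3 || !parts[0].equals("jca") || !parts[1].equals("jndi")) {
            throw new IllegalArgumentException("Not a jca:jndi URI: " + uri);
        }
        return parts;
    }
}
```

For jca:jndi:SAPConn, the third component, SAPConn, is the JNDI name of the connection factory that the EIS Import handler looks up.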
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an EIS JDBC Import. The export is of any type.
Dynamic invocation takes place when the export is invoked with a message
containing routing criteria that resolve to Service Provider 2. The mediation flow
component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter primitive to set the new target address
in the SMO header, using the incoming message content and routing criteria.
Alternatively, the mediation flow component might use the Business Object Map or
XSL Transformation primitives to set the new target address.
The callout uses information from the SMO to invoke Service Provider 2. Any
response is returned by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
A runtime exception occurs if an invalid target address is set in the SMO header.
The exception is thrown by the import and returned in the response flow.
Introduction
You can invoke services using EIS import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For EIS bindings, you can specify a dynamic endpoint using a URI that
represents the JNDI name of a connection factory.
Figure 36. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
scheme
The scheme for JCA URI is always jca.
jca-variant
The jca-variant provides more information about the JCA connection and is
always jndi.
jca-connectionFactory
This identifies the JCA connection factory providing the dynamic target
address for a message. Parameters can be added to help specify the target
address returned by the connection factory.
jca:jndi:SAPConn
This URI tells the EIS Import handler to look for a connection factory defined in
JNDI as SAPConn.
The EIS binding uses the connection factory to obtain the target address of the
dynamic endpoint. If a target address is found, the message is sent there. If no
target address is found, the message is sent to the original endpoint.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an unwired EIS JDBC Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the Import is configured to route messages to Service Provider 1.
6. Check that the callout node has dynamic endpoint invocation override enabled.
7. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing a target import and routing criteria that resolve to Service Provider 2.
The mediation flow component extracts the endpoint and target import from the
message and puts them into the SMO, using the Message Element Setter primitive.
The callout uses information from the SMO to invoke Service Provider 2. Any
response is returned by the response flow to the caller of the export.
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For HTTP bindings, you can specify a dynamic endpoint using a
URL that conforms to the HTTP URI standard.
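As a hedged sketch, a dynamic HTTP endpoint can be checked for well-formedness before it is set in the SMO. The helper below is illustrative only; the runtime performs its own validation and throws an exception for an invalid target address:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative check only: verifies that a candidate dynamic endpoint is a
// well-formed http or https URL with a host. Not part of the product API.
public class HttpEndpointCheck {
    public static boolean isValidHttpEndpoint(String url) {
        try {
            URI uri = new URI(url);
            String scheme = uri.getScheme();
            return ("http".equals(scheme) || "https".equals(scheme))
                    && uri.getHost() != null;
        } catch (URISyntaxException e) {
            return false; // malformed URL
        }
    }
}
```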
Figure 37. Illustration of endpoint override by dynamic invocation using SMO, with wired import
You create a mediation module that includes the dynamic endpoint by performing
tasks in Integration Designer.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow component,
and an HTTP Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the callout node has dynamic endpoint invocation override enabled.
6. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing routing criteria that resolve to Service Provider 2. The mediation flow
component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter primitive to set the new target address
in the SMO, using the incoming message content and routing criteria. Alternatively,
the mediation flow component might use the Business Object Map or XSL
Transformation primitives to set the new target address.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
A runtime exception occurs if an invalid target address is set in the SMO header.
The exception is thrown by the import and returned in the response flow.
Introduction
You can invoke services using HTTP import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For HTTP bindings, you can specify a dynamic endpoint using a URL
that conforms to the HTTP URI standard.
Figure 38. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export of any type, a mediation
flow component, and an unwired HTTP Import.
2. Connect the Mediation Module import to static Service Provider 1.
Dynamic invocation takes place when the export is invoked with a message
containing a target import and routing criteria that resolve to Service Provider 2.
The mediation flow component extracts the endpoint from the message and puts it
into the SMO, using a Message Element Setter primitive. The mediation flow
component extracts the target import from the message and puts it into the SMO,
using a Message Element Setter primitive. The callout uses information from the
SMO to invoke Service Provider 2. Any response is returned by the response flow
to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For JMS bindings, you can specify a dynamic endpoint using a URL
that conforms to the JMS URI standard.
Figure 39. Illustration of endpoint override by dynamic invocation using SMO, with wired import
You create a mediation module that includes the dynamic endpoint by performing
tasks in Integration Designer.
The JMS endpoint used in the dynamic invocation is structured according to the
JMS URI standard.
In summary, the standard requires that JMS URIs have the form:
jms:<jms-variant>:<jms-dest>[?<parameter>]
scheme
The scheme for a JMS URI will always be jms.
jms-variant
The jms-variant provides more information about the JMS connection (for
example by using the variant jndi).
jms-dest
This identifies the JMS destination object and should correspond to the
jms-variant.
parameter
A parameter is a key-value pair separated by "=". The only supported key
is "jndiConnectionFactoryName"; its value should be the JNDI name of the
connection factory. This parameter is optional.
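As a hedged sketch, a JMS URI with these components might be decomposed as follows. The parser class is illustrative, not the product API, and it assumes a jms:variant:dest?key=value layout built from the components described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative parser only: splits a JMS URI into scheme, jms-variant,
// jms-dest, and the optional key=value parameter.
public class JmsUriParser {
    public static Map<String, String> parse(String uri) {
        String[] parts = uri.split(":", 3); // scheme : jms-variant : jms-dest[?parameter]
        if (parts.length != 3 || !parts[0].equals("jms")) {
            throw new IllegalArgumentException("Not a JMS URI: " + uri);
        }
        Map<String, String> result = new LinkedHashMap<>();
        result.put("scheme", parts[0]);
        result.put("jms-variant", parts[1]);
        String dest = parts[2];
        int q = dest.indexOf('?');
        if (q >= 0) {
            String[] kv = dest.substring(q + 1).split("=", 2); // key=value pair
            result.put(kv[0], kv.length > 1 ? kv[1] : "");
            dest = dest.substring(0, q);
        }
        result.put("jms-dest", dest);
        return result;
    }
}
```

For jms:jndi:MyQueue?jndiConnectionFactoryName=MyCF, the variant is jndi, the destination is MyQueue, and MyCF names the connection factory in JNDI.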
Managing security
The JMS Connection Factory uses application-managed security. It does not use
container-managed security. This means that you must set the component-managed
authentication alias.
The input name for the send destination, and the connection factory, must already
be defined in the server.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create Mediation Module 1, containing an Export, a mediation flow
component, and a JMS Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the callout node has dynamic endpoint invocation override enabled.
6. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing routing criteria that resolve to Service Provider 2. The mediation flow
component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter primitive to set the new target address
in the SMO, using the incoming message content and routing criteria. Alternatively,
the mediation flow component might use the Business Object Map or XSL
Transformation primitives to set the new target address. The callout uses
information from the SMO to invoke Service Provider 2. Any response is returned
by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
A runtime exception occurs if an invalid target address is set in the SMO header.
The exception is thrown by the import and returned in the response flow.
Introduction
You can invoke services using JMS import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For JMS bindings, you can specify a dynamic endpoint using a URI that
conforms to the JMS URI standard.
Figure 40. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
The JMS endpoint used in the dynamic invocation is structured according to the
JMS URI standard.
In summary, the standard requires that JMS URIs have the form:
jms:<jms-variant>:<jms-dest>[?<parameter>]
scheme
The scheme for a JMS URI will always be jms.
jms-variant
The jms-variant provides more information about the JMS connection (for
example by using the variant jndi).
jms-dest
This identifies the JMS destination object and should correspond to the
jms-variant.
parameter
A parameter is a key-value pair separated by "=". The only supported key
is "jndiConnectionFactoryName"; its value should be the JNDI name of the
connection factory. This parameter is optional.
Managing security
The JMS Connection Factory uses application-managed security. It does not use
container-managed security. This means that you must set the component-managed
authentication alias.
The input name for the send destination, and the connection factory, must already
be defined in the server.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an unwired JMS Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the Import is configured to route messages to Service Provider 1.
6. Check that the callout node has dynamic endpoint invocation override enabled.
7. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing a target import and routing criteria that resolve to Service Provider 2.
The mediation flow component extracts the endpoint from the message and puts it
into the SMO, using a Message Element Setter primitive. The mediation flow
component extracts the target import from the message and puts it into the SMO,
using a Message Element Setter primitive. The callout uses information from the
SMO to invoke Service Provider 2. Any response is returned by the response flow
to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using endpoints that are different from those specified in
the import. For MQ bindings, you can specify a dynamic endpoint using a URI
that conforms to an MQ URI standard.
Figure 41. Illustration of endpoint override by dynamic invocation using SMO, with wired import
You create a mediation module that includes the dynamic endpoint by performing
tasks in Integration Designer.
For dynamic invocation, the MQ endpoint must have a URI with the form:
In each URI, the queueName and optional destination queue manager qmgr override
the destination queue specified on the import binding. The hostname, port, and
connectQueueManager override the connection information on the import binding.
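As a hedged sketch, an MQ endpoint URI might be composed from the components named above. Note that the exact URI grammar is not shown in this excerpt; the wmq:/msg/queue/ form used here is an assumption based on the common WebSphere MQ URI style, and the builder class is illustrative, not the product API:

```java
// Illustrative builder only. The wmq:/msg/queue/... form is an assumption
// about the MQ URI syntax; consult the product reference for the exact grammar.
public class MqUriBuilder {
    public static String build(String queueName, String qmgr, String connectQueueManager) {
        StringBuilder uri = new StringBuilder("wmq:/msg/queue/").append(queueName);
        if (qmgr != null) {
            uri.append('@').append(qmgr); // optional destination queue manager
        }
        if (connectQueueManager != null) {
            // connection information overriding the import binding
            uri.append("?connectQueueManager=").append(connectQueueManager);
        }
        return uri.toString();
    }
}
```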
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an MQ Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the callout node has dynamic endpoint invocation override enabled.
6. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing routing criteria that resolve to Service Provider 2. The mediation flow
component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter primitive to set the new target address
in the SMO, using the incoming message content and routing criteria. Alternatively,
the mediation flow component might use the Business Object Map or XSL
Transformation primitives to set the new target address.
The callout uses information from the SMO to invoke Service Provider 2. Any
response is returned by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
A runtime exception occurs if an invalid target address is set in the SMO header.
The exception is thrown by the import and returned in the response flow.
Introduction
You can invoke services using MQ import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings. For MQ bindings, you can specify a dynamic endpoint using a URI that
conforms to an MQ URI standard.
Figure 42. Illustration of endpoint override by dynamic invocation using SMO, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
In each URI, the queueName and optional destination queue manager qmgr override
the destination queue specified on the import binding. The hostname, port, and
connectQueueManager override the connection information on the import binding.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an unwired MQ Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the Import is configured to route messages to Service Provider 1.
6. Check that the callout node has dynamic endpoint invocation override enabled.
7. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing a target import and routing criteria that resolve to Service Provider 2.
The mediation flow component extracts the endpoint and the target import from
the message, and puts them into the SMO, using the Message Element Setter
primitive.
The callout uses information from the SMO to invoke Service Provider 2. Any
response is returned by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
You can invoke services using SCA import bindings that are not directly wired to
your component, using endpoints that are not those specified in the import
bindings.
Figure 43. Illustration of endpoint override by dynamic invocation using SMO, with unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create the Mediation Module, containing an Export, a mediation flow
component, and an unwired SCA Import. The export is of any type.
2. Connect the Mediation Module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the Import is configured to route messages to Service Provider 1.
6. Check that the callout node has dynamic endpoint invocation override enabled.
7. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing a target import and routing criteria that resolve to Service Provider 2.
The mediation flow component extracts the endpoint and the target import from
the message and puts them into the SMO, using the Message Element Setter
primitive.
The callout uses information from the SMO to invoke Service Provider 2. Any
response is returned by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
How to enable dynamic invocation of an endpoint through a wired import with a
Web service binding, using SMO.
Introduction
You can invoke services by using endpoints that are different from those specified
in the import. For Web service bindings, a dynamic endpoint can be specified
using a URL that conforms to the Web services URI standard.
Figure 44. Illustration of endpoint override by dynamic invocation using SMO, with wired import
You create a mediation module that includes the dynamic endpoint by performing
tasks in Integration Designer.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create mediation module 1, containing an export, a mediation flow component,
and a Web service import. The export is of any type.
2. Connect the mediation module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the callout node has dynamic endpoint invocation override enabled.
6. Deploy the modules to the server.
Dynamic invocation takes place when the export is invoked with a message
containing routing criteria that resolve to Service Provider 2. The mediation flow
component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter primitive to set the new target address
in the SMO, using the incoming message content and routing criteria. Alternatively,
the mediation flow component might use the Business Object Map or XSL
Transformation primitives to set the new target address. The callout uses
information from the SMO to invoke Service Provider 2. Any response is returned
by the response flow to the caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
Introduction
You can invoke services using Web service import bindings that are not directly
wired to your component, using endpoints that are not those specified in the
import bindings. For Web Service bindings, you can specify a dynamic endpoint
using a URL that conforms to the Web services URI standard. The incoming
message is examined to find the name of a target import to use.
Figure 45. Illustration of endpoint override by dynamic invocation using SMO, with an unwired import
You create a mediation module that includes the import selection and dynamic
endpoint using Integration Designer.
To create a mediation module that includes the dynamic endpoint, perform the
following tasks:
1. Create mediation module 1, containing an export of any type, a mediation flow
component, and an unwired import.
2. Connect the mediation module import to static Service Provider 1.
3. Create Service Provider 2.
4. Check that Service Provider 1 and Service Provider 2 have the same port type.
5. Check that the import is configured to route messages to Service Provider 1.
6. Check that the callout node has dynamic endpoint invocation override enabled.
7. Deploy the modules to the server.
For Web service invocation, the endpoint URL is identified as normal. For example,
an http: or jms: URL prefix indicates that a Web service will be invoked. Where
the import is unwired, the message contains a target import and routing criteria
that resolve to the intended export. The mediation flow component extracts the
endpoint and target import from the message, and puts them into the SMO, using
the Message Element Setter primitive.
For example, in Figure 45 on page 258, the callout uses information from the SMO
to invoke Service Provider 2. Any response is returned by the response flow to the
caller of the export.
A one-way invocation message works the same way as a two-way message, except
that no response message is returned.
How to enable dynamic response redirection from an export with a Web service
(JAX-WS) binding, using SMO.
Introduction
You create a mediation module that includes the dynamic redirected response
endpoint, by performing tasks in Integration Designer.
Dynamic response redirection takes place when the mediation module determines,
as part of request or response processing, that the original reply address is no
longer valid and the response must be sent to a different endpoint. The mediation
flow component identifies the routing criteria in the message. The mediation flow
component uses the Message Element Setter mediation primitive to set the new
redirected response target address in the SMO in the response flow. Alternatively
the mediation flow component might use the Business Object Map or XSL
Transformation primitives to set the new redirected response target address. The
input response node uses the target address from the SMO to route the response to
client application 2.
Introduction
Dynamic invocation with a target import allows the configuration and protocol
used to invoke a service to be selected dynamically at run time by identifying a
supported import binding. The import and its binding must already be available
within the module, and are selected at run time according to information contained
in the message.
You can use Integration Designer to create a mediation module that selects the
target service dynamically at run time. The target services might use different
protocols, different formats or different quality-of-service values. Each combination
must be known at the time the mediation module is developed. This means that
for each combination of protocol, format, and quality-of-service value, Integration
Designer includes an import in the mediation module with an appropriate
configuration.
For example, one MQ queue accesses one target service and a JMS queue accesses
another target service. A quality-of-service example might be where one import
uses security, but another import does not. The choice between the target service
destinations is made dynamically at run time using available metadata. Imports for
both the services would have to be included in the mediation module by
Integration Designer.
[Figure: Import A and Import B, contrasting invocation using a dynamic endpoint with invocation using a static endpoint over a wire]
A target import name always takes precedence over an existing wired import
within the mediation module, even if the target URI is compatible with the wired
import binding but not the target import binding.
You can use the target import to store additional metadata, such as the
combination of protocol, format, and quality-of-service settings, that you require
to make a remote invocation. You must declare a target import and package it
with the mediation module. The name of the import can be stored in any easily
accessed form, such as WSRR or a database.
Introduction
Web service and SCA URLs are supported, but other types are not, because they
need additional configuration that is provided by an import binding.
The default for pure dynamic invocation using a Web service binding is SOAP 1.1,
using HTTP or JMS as specified by the target URL. Pure dynamic invocation is
also supported for SOAP 1.2 with HTTP.
v For pure dynamic invocation using SOAP 1.1 and HTTP or JMS, the
EndpointReference binding type should either be unset or it should be set to one
of the following values:
– EndpointReference.BINDING_TYPE_WEB_SERVICE
– EndpointReference.BINDING_TYPE_WEB_SERVICE_SOAP_1_1.
v For pure dynamic invocation using SOAP 1.2 and HTTP, the EndpointReference
binding type should be set to
EndpointReference.BINDING_TYPE_WEB_SERVICE_SOAP_1_2.
Pure dynamic invocation is not supported for SOAP 1.2 with JMS.
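The support rules above can be expressed as a simple check. This sketch is illustrative only; it models the stated SOAP version and transport combinations with plain strings and is not part of the product API:

```java
// Illustrative only: models which SOAP version / transport combinations
// support pure dynamic invocation, per the rules stated above.
public class PureDynamicInvocationRules {
    public static boolean isSupported(String soapVersion, String transport) {
        if ("1.1".equals(soapVersion)) {
            // SOAP 1.1 is supported over both HTTP and JMS
            return "HTTP".equals(transport) || "JMS".equals(transport);
        }
        if ("1.2".equals(soapVersion)) {
            // SOAP 1.2 is supported over HTTP only; SOAP 1.2 with JMS is not supported
            return "HTTP".equals(transport);
        }
        return false;
    }
}
```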
You can use Integration Designer to create a mediation module that selects the
target service dynamically at run time. The possible target services might use
different protocols, different formats, or different quality-of-service values. The
problem is that some or all of the possible combinations are not known at the
time the module is developed. Alternatively, the set of possible targets might be
known during initial development and deployment of the mediation module, but
additional endpoints are added over time.
Using pure dynamic invocation, a mediation module can invoke service endpoints
that were not known when the module was developed and deployed. Service
endpoint information can be stored in any easily accessed form, such as WSRR or
a database. The endpoints can optionally include metadata that describes how the
service is to be invoked. This means that the mediation module can retrieve all the
information necessary to invoke the service endpoint.
Some applications must send messages to services that might not send a reply. For
example, some long-running services might not return a response quickly, or at all.
In these scenarios, an application cannot use the typical request and response
pattern within a mediation module, because not every message will get a reply. The
application must be able to model this type of service interaction. When messages
are sent and received using JMS, it is easier to model the service interaction as a
pair of one-way messages, rather than as a two-way message operation.
Using Integration Designer, you can set the JMSReplyTo field on a method binding
for JMS imports, including Generic JMS and MQ JMS. This applies only to
one-way JMS methods; you cannot edit the JMSReplyTo field in Integration
Designer for two-way methods.
When setting the JMSReplyTo field, make sure the content is the JNDI name of a
JMS destination. At run time, when a one-way JMS message is received, the
intended JMS import is inspected to determine whether a JMSReplyTo value is
provided. If a value is found, a JNDI lookup is performed, and the resulting JMS
destination is set in the JMSReplyTo field of the message. The message is then sent
on to its destination through the JMS import.
This flow of events applies for both Generic JMS bindings and MQ JMS bindings.
Note: This topic describes how to use JMS data bindings. When planning for data
transformation, consider the use of data handlers, which are protocol-neutral and
can be used across bindings. For information on data handlers, see Data handlers.
There are six predefined JMS data bindings, supplied as Java classes, to support
the JMS Message class and its five subtypes:
v JMSBaseDataBinding
v JMSTextDataBinding
v JMSBytesDataBinding
v JMSObjectDataBinding
v JMSStreamDataBinding
v JMSMapDataBinding
These data bindings are general purpose and support any message body. For Text
and Bytes messages, the bindings treat the payload as unstructured data and
transfer it as a whole into the corresponding SDO.
If the data is structured and you want to parse the data and map elements of it
into a structure within an SDO, you must code your own JMS data bindings and
SDO definitions. You also need to do this for an Object Message if you want to
map elements of the Object, rather than the whole Object, into an SDO. A
user-defined, custom JMS data binding can be used to both read and write JMS
messages, and must implement the
com.ibm.websphere.sca.jms.data.JMSDataBinding interface.
Introduction
Creating a custom JMS data binding involves creating a library and a Java project.
The library contains a Business Object representing the data to be mapped, and
the Java project contains a custom JMS data binding class. If you create a
Creating a library
1. Create a library. The library is to contain a Business Object and associated
interface.
2. Within the library, create a Business Object representing the data to be mapped
from, or to, the JMS message.
3. Within the library, create a one-way or two-way interface containing the
Business Object.
Note: You must ensure that the name of the complex type of Business
Object is the same in your program as in Integration Designer. You must
also ensure that the namespace used in your program is the same as the
namespace displayed by Integration Designer.
v getDataObject()
import com.ibm.websphere.sca.jms.data.JMSDataBinding;
import com.ibm.websphere.sca.sdo.DataFactory;
import commonj.connector.runtime.DataBindingException;
import commonj.sdo.DataObject;
import javax.jms.MapMessage;
/*
* MapMessage Format:
* Symbol (string)
* CompanyName (string)
* StockValue (double)
*
*/
/*
* Store the passed in DataObject, and retrieve the values for use when creating the message.
*/
public void setDataObject(DataObject jmsMapData) throws DataBindingException {
jmsData = jmsMapData;
symbol = (String) jmsData.get("Symbol");
companyName = (String) jmsData.get("CompanyName");
stockValue = jmsData.getDouble("StockValue");
}
/*
* Construct a message from the values previously set.
*/
public void write(javax.jms.Message message) throws javax.jms.JMSException {
MapMessage mm = (MapMessage) message;
mm.setString("Symbol", symbol);
mm.setString("CompanyName", companyName);
mm.setDouble("StockValue",stockValue);
mm.setBooleanProperty("IsBusinessException",businessException);
}
/*
* The method will be called when the message is received, and needs to be
* converted to a DataObject. The getDataObject method will be called to retrieve the
* business object.
*/
public void read(javax.jms.Message message) throws javax.jms.JMSException {
//Handle business exception
if(message.propertyExists("IsBusinessException")){
businessException = message.getBooleanProperty("IsBusinessException");
//If this is a business exception, then likely payload will be fault.
//Load fault data from message , set in jmsData and return
}
symbol = ((MapMessage) message).getString("Symbol");
companyName = ((MapMessage) message).getString("CompanyName");
stockValue = ((MapMessage) message).getDouble("StockValue");
/*
* Create the data object from the DataFactory. The Business object can be
* determined from the export details view. The export specifies the Interface
* and operation, and from the definition of the operation, the expected input/output
* type can be seen.
*
* The first parameter is the namespace of the operation input type, and the
* second parameter is the name of the type.
*/
jmsData = DataFactory.INSTANCE.create("http://TradingDeskLibrary","TradingDeskBO");
/*
* The DataObject in this case has been defined with 2 string fields, and 1 double
* field.
*
* These fields can now be populated using the set methods.
*/
jmsData.setString("Symbol", symbol);
jmsData.setString("CompanyName", companyName);
jmsData.setDouble("StockValue", stockValue);
}
Example XSD
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="http://TradingDeskLibrary">
<xsd:complexType name="TradingDeskBO">
<xsd:sequence>
<xsd:element minOccurs="0" name="Symbol" type="xsd:string"/>
<xsd:element minOccurs="0" name="CompanyName" type="xsd:string"/>
<xsd:element minOccurs="0" name="StockValue" type="xsd:double"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
Example WSDL
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions xmlns:bons1="http://TradingDeskLibrary"
xmlns:tns="http://TradingDeskLibrary/TradingDeskInterface"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
name="TradingDeskInterface"
targetNamespace="http://TradingDeskLibrary/TradingDeskInterface">
<wsdl:types>
<xsd:schema targetNamespace="http://TradingDeskLibrary/TradingDeskInterface"
xmlns:bons1="http://TradingDeskLibrary"
xmlns:tns="http://TradingDeskLibrary/TradingDeskInterface"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<xsd:import namespace="http://TradingDeskLibrary" schemaLocation="TradingDeskBO.xsd"/>
<xsd:element name="TradingDeskOperation">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="input1" nillable="true" type="bons1:TradingDeskBO"/>
The exports and imports at the edges of your modules are responsible for
converting native data to data objects and vice versa. Exports and imports contain
data handlers and data bindings for this purpose.
To transform native data in your imports and exports to a variety of data formats,
you can use a prepackaged data handler or data binding, you can write your own
data binding, or you can use the WebSphere Transformation Extender data handler
to convert native data to data object, and vice versa, at the edges of your modules.
For WebSphere ESB to work with WebSphere Transformation Extender, you must
use the WebSphere Transformation Extender installer for WebSphere ESB.
Figure 48. An export configured to use the WTX data handler
The client delivers the data to the export in some native format. The export then
passes this data in the same native format to WebSphere Transformation Extender
by way of the WebSphere Transformation Extender Data Handler. WebSphere
Transformation Extender converts the data to data object format and returns it to
the export. The export passes the data object on to the relevant SCA component.
Figure: An import configured to use the WTX data handler
The SCA component sends a data object to the import. The import passes this data
object to WebSphere Transformation Extender by way of the WebSphere
Transformation Extender Data Handler. WebSphere Transformation Extender
converts the information to the native format of the client, and this information is
returned to the import. The import sends the native data to the client.
Note: For more information about using WebSphere Transformation Extender, see
the WebSphere Transformation Extender product library.
Supported platforms
The following imports and exports support the use of WebSphere Transformation
Extender for data transformation:
v JMS
v Generic JMS
v WebSphere MQ JMS
v Native MQ (body data binding only)
v EIS Flat File
v EIS FTP
v EIS Email
v HTTP
The WebSphere Transformation Extender data handler is applicable to all imports
and exports listed above. These data handlers can be customized. See the
Integration Designer documentation for more information.
On the export side, the WebSphere Transformation Extender data handler invokes
a WebSphere Transformation Extender map to transform native data to XML.
On the import end, the data handler serializes a data object to XML and feeds it to
a WebSphere Transformation Extender map. The map converts it to native data,
which is then published. An export also uses the data handler in this way on the
response message.
For more information about map naming conventions, the usage of maps in SCA
modules, and configuring the WebSphere Transformation Extender data handler,
refer to the Integration Designer documentation.
You must install WebSphere ESB before you install the WebSphere Transformation
Extender for WebSphere ESB. You must have a valid license for WebSphere
Transformation Extender. WebSphere Transformation Extender is an independent
product and has to be installed separately.
For your server to work with WebSphere Transformation Extender, you must use
the WebSphere Transformation Extender installer for WebSphere ESB. In addition
to installing WebSphere Transformation Extender, this process also installs the
WebSphere Transformation Extender Java client libraries as an OSGi bundle in the
WebSphere ESB product so that it is accessible to WebSphere ESB.
Note: You must perform the first step on every node where you will utilize the
WebSphere Transformation Extender data handler. The second step is required only
on nodes where you want to create and edit maps.
The WebSphere Transformation Extender data handler has the following memory
requirements:
274 IBM WebSphere ESB: Reference
v For transforming from native data to business object, the memory required is at
least twice the size of the native data plus twice the size of the serialized
business object.
v For transforming from business object to native data, the memory required is at
least twice the size of the serialized business object plus twice the size of native
data.
The WebSphere Transformation Extender data handler is an ideal choice when you
have non-XML data entering, or leaving, your WebSphere ESB environment. For
XML data, you should use the XML data handler for JMS, WebSphere MQ, and
HTTP imports and exports, and use the XMLDataHandler for EIS bindings.
However, you might want to invoke a different map depending on the incoming
customer object. Such a scenario cannot be configured in the properties of the data
handler and you must instead use the data binding descriptor.
You must put the WebSphere Transformation Extender data binding descriptor in
the following locations, depending on what type of import or export you are
employing:
However, you might want to invoke a different map depending on the customer.
In this case the operation is invoked with customer and uses a different map for
every invocation. Such a scenario cannot be configured in the properties of the
data binding and you must instead use the data binding descriptor.
Syntax
The WebSphere Transformation Extender data binding descriptor has the following
syntax:
databinding://domain/property?queryParameters
domain is WTX for this data binding.
property has the value map in this case.
queryParameters is either name=mapname where mapname is the name of the map
required, or businessObject=Customer&contentType=format, where Customer is
the name of the Business Object and format is the format of the data stream (for
example, COBOL or EDI).
Sample
Note: A directory called WTX must be in the top level of the module and must
contain any maps that are required by this binding.
databinding://WTX/map?businessObject=Customer&contentType=COBOL
databinding://WTX/map?contentType=COBOL
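Because the descriptor is a plain URI-style string, its parts can be recovered with ordinary string handling. The following sketch is illustrative only (the class and method names are not part of the product, and a well-formed descriptor containing both the '/' and '?' separators is assumed):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative parser for the databinding://domain/property?queryParameters
// syntax described above. Assumes the '/' and '?' separators are present.
class DescriptorParser {
    static Map<String, String> parse(String descriptor) {
        Map<String, String> parts = new LinkedHashMap<>();
        String rest = descriptor.substring("databinding://".length());
        int slash = rest.indexOf('/');
        int qmark = rest.indexOf('?');
        parts.put("domain", rest.substring(0, slash));           // e.g. WTX
        parts.put("property", rest.substring(slash + 1, qmark)); // e.g. map
        for (String pair : rest.substring(qmark + 1).split("&")) {
            int eq = pair.indexOf('=');
            parts.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return parts;
    }
}
```

Parsing the first sample descriptor yields domain WTX, property map, businessObject Customer, and contentType COBOL.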
Use the WebSphere Transformation Extender JMS data binding for JMS, Generic
JMS, and MQ JMS imports.
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for a JMS import.
Set the data binding descriptor on the response message for a JMS import when
you want to configure the messages individually.
Set the data binding descriptor on the request message using the mediation flow
component for a JMS import when you want to configure the messages
individually.
Procedure
1. Configure the request message using the mediation flow component.
2. Configure the response message. There are two ways to configure the response:
Option Example
Set the JMSType in the incoming message:
jmsMessage.setJMSType("databinding://WTX/map?businessObject=Customer&contentType=COBOL");
These properties are set when the client sends a message to WebSphere ESB.
Use the WebSphere Transformation Extender JMS data binding for JMS, Generic
JMS, and MQ JMS exports.
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for a JMS export.
Set the data binding descriptor on the response message for a JMS export when
you want to configure the messages individually.
Set the data binding descriptor on the request message using the mediation flow
component for a JMS export when you want to configure the messages
individually.
Procedure
1. Configure the request message. There are two ways to configure the request
message:
Option Example
Set the JMSType in the incoming message:
jmsMessage.setJMSType("databinding://WTX/map?businessObject=Customer&contentType=COBOL");
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for a native MQ Import.
Set the data binding descriptor on the response message for a native MQ import
when you want to configure the messages individually.
Set the data binding descriptor on the request message using the mediation flow
component for a native MQ import when you want to configure the messages
individually.
Results
These properties are set when the MQ client sends a message to WebSphere ESB.
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for a native MQ export.
Set the data binding descriptor on the request message for a native MQ export
when you want to configure the messages individually.
Set the data binding descriptor on the response message using the mediation flow
component for a native MQ export when you want to configure the messages
individually.
Procedure
1. Configure the request message. Set the DataBindingDescriptor in the MQRFH2
header of the incoming message.
2. Configure the response message using a mediation flow component.
a. Create a mediation flow component.
b. Set the RFH2 header in the Message Element Setter primitive.
Results
These properties are set when the MQ client sends a message to WebSphere ESB.
Use the WebSphere Transformation Extender HTTP data binding for HTTP
imports.
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for an HTTP import.
Set the data binding descriptor on the response message for an HTTP import when
you want to configure the messages individually.
Set the data binding descriptor on the request message using the mediation flow
component for an HTTP import when you want to configure the messages
individually.
Procedure
1. Configure the request message. Set a custom header called
DataBindingDescriptor for this import in the mediation flow component.
2. Configure the response message.
Create a custom HTTP header called DataBindingDescriptor in the HTTP
import. For example:
databinding://WTX/map?businessObject=Customer&contentType=EDI
Use the WebSphere Transformation Extender data binding for HTTP exports.
You can set the WebSphere Transformation Extender data binding descriptor on
both the response and request messages for an HTTP export.
The HTTP client sending requests to the HTTP export sets the data binding
descriptor in the request message header or on the URL.
Set the data binding descriptor on the request message for an HTTP export when
you want to configure the messages individually.
Set the data binding descriptor on the response message using the mediation flow
component for an HTTP export when you want to configure the messages
individually.
Procedure
1. Configure the request message. There are two ways to set the data binding
descriptor in the request message:
Option Example
Create a custom HTTP header called DataBindingDescriptor in the HTTP export:
databinding://WTX/map?businessObject=Customer&contentType=EDI
Add additional query parameters on the URL:
http://www.ibm.com/Export1/map?businessObject=Customer&contentType=EDI
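For the query-parameter option, a client can build the request URL with plain string concatenation. A minimal sketch, assuming the parameter values need no URL encoding (the class and method names are illustrative):

```java
// Appends the data binding descriptor parameters to an HTTP export URL,
// matching the "additional query parameters on the URL" option above.
class DescriptorUrl {
    static String withDescriptor(String exportUrl, String businessObject, String contentType) {
        return exportUrl + "?businessObject=" + businessObject
                         + "&contentType=" + contentType;
    }
}
```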
These commands add to those provided for the underlying WebSphere Application
Server and its service integration technologies.
To read the syntax diagrams, start at the top leftmost line and read from left to
right and from top to bottom. The horizontal line represents the main path of
parameters that you use when you enter the commands. Parameters that are off
the main path are optional and may have a default value that applies if you omit them.
Symbol Description
>> Marks the beginning of the command syntax.
> Indicates that the command syntax is continued.
| Marks the beginning and end of a fragment or part of the
command syntax.
>< Marks the end of the command syntax.
You must include all punctuation such as colons, semicolons, commas, quotation
marks, and minus signs that are shown in the syntax diagram.
Parameters
Single-line command
In the following example, the USER command is a keyword. The required variable
parameter is user_id, and the optional variable parameter is password. Replace the
variable parameters with your own values.
USER user_id
password
If a diagram is longer than one line, the first line ends with a single arrowhead
and the second line begins with a single arrowhead.
The first line of a syntax diagram that is longer than one line
Choices
Choice_1
Choice_2
Option_1
Option_2
Non-alphanumeric characters
You must enter all non-alphanumeric characters as shown in the syntax diagram.
For example, if you must enter OPERAND=(001,0.001) the syntax diagram would
display:
You must enter all blank spaces shown in syntax diagrams. For example, to enter
OPERAND=(001 FIXED) the syntax diagram would display:
Default parameters
Syntax diagrams display default parameters and their values above the main path.
The system uses the displayed value if you omit the parameter.
Parameter=default
Syntax fragments
Some diagrams contain syntax fragments, which serve to break up diagrams that
are too long, too complex, or too repetitious. Syntax fragment names are in mixed
case and are shown in the diagram and in the heading of the fragment. The
fragment is placed below the main diagram.
Command Operands
Operands:
The following table lists, by component, the administrative console actions that
have command assistance.
Table 33. Available command assistance
Component Action
Business Rules v Install the business rules manager
– configBusinessRulesManager
The first profile that you create within one installation of WebSphere ESB is the
default profile. The default profile is the default target for commands issued from
the bin directory in the directory where WebSphere ESB is installed. If only one
profile exists on a system, every command operates on that profile.
To target a command to a profile other than the default, you must issue the
command as follows:
v If you want to issue the command from any directory, follow the command with
the -profileName attribute and the fully qualified path to the profile to address.
For example:
<Install_DIR>/bin/startServer server1 -profileName <Install_DIR>/profiles/<profilename>
v To avoid having to specify the -profileName attribute for a command, use
the version of the command that exists in the bin directory of the profile to
address. The directory is one of the following based on platform:
– Linux UNIX profile_root/bin
– Windows profile_root\bin
Where profile_root is <Install_DIR>/profiles/<profilename>
Command-line utilities
Look up a command by its name to find detailed syntax and usage of the
command.
To open the information center table of contents to the location of this reference
information, click the Show in Table of Contents button on your information
center border.
Synopsis
Description
Examples
Problem determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMCreateDatabaseUpgradeUtilities and ends with .error.
Synopsis
BPMCreateRemoteMigrationUtilities archive_file_name
Description
The BPMCreateRemoteMigrationUtilities command creates the archive file
specified by the archive_file_name parameter containing all of the commands and
their prerequisites that need to be invoked on the system containing the source
profile to be migrated. The BPMCreateRemoteMigrationUtilities command is
located in the INSTALL_ROOT/bin directory.
Examples
Problem determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMCreateRemoteMigrationUtilities and ends with .error.
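The migration commands share this error-file convention: the file sits in the snapshot directory, starts with the command name, and ends with .error. A sketch of locating such files with a standard filter (the class name is illustrative, not part of the product):

```java
import java.io.File;
import java.io.FilenameFilter;

// Matches the error files described above: names beginning with the
// command name and ending with ".error". Illustrative helper only.
class ErrorLogFilter implements FilenameFilter {
    private final String commandName;
    ErrorLogFilter(String commandName) { this.commandName = commandName; }
    @Override
    public boolean accept(File dir, String name) {
        return name.startsWith(commandName) && name.endsWith(".error");
    }
}
```

Passing an instance to File.listFiles() on the snapshot directory returns only the matching error files.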
Synopsis
Description
Restriction: If you are migrating from WebSphere ESB V6.0.2.x, you must use the
Profile Management tool to create the target profile.
The target profile is created with the same cell name, node name and host name as
specified in the source profile.
A newly created migration target profile is not ready for use until the rest of the
migration procedure is complete. This includes migrating the source profile to the
target, migrating the cluster in a clustered managed node scenario, and completing
the schema and data upgrades.
Options
v -targetProfileName target_profile_name: Use this option to specify a target
profile name that is different than the source profile name.
v -targetProfileDirectory target_profile_directory: Use this option to
override the default location of the target profile directory.
v -remoteMigration true: Use this option if it is a remote migration scenario.
v -hostName hostname: Use this option to specify a specific hostname; used only
in a remote migration scenario.
v -help: Use this option to see command usage.
Examples
Create a migration target profile using the profile named profile1 from the source
snapshot directory /MigrationSnapshots/ProcServer620:
v Linux UNIX BPMCreateTargetProfile.sh /MigrationSnapshots/
ProcServer620 profile1
v Windows BPMCreateTargetProfile.bat "c:\MigrationSnapshots\ProcServer620"
profile1
Create a migration target profile using the profile named profile1 from the source
snapshot directory /MigrationSnapshots/ProcServer620 while specifying a
non-default target profile name and non-default target profile directory:
Problem determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMCreateTargetProfile and ends with .error.
If the BPMCreateTargetProfile command fails, check the logs, correct the issues,
delete the target profile using the manageprofiles command-line utility, and
re-invoke BPMCreateTargetProfile.
Synopsis
Description
The command generates SQL scripts and upgradeSchema scripts in the snapshot
directory with the following structure:
v Linux UNIX DBType/database_name.schema_name
v Windows DBType\database_name.schema_name
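As a sketch of how that directory name is composed for the Linux and UNIX form (the method name is illustrative, and the DB2 value in the comment is only an example):

```java
// Builds the per-database script directory name described above,
// DBType/database_name.schema_name, e.g. "DB2/WPRCSDB.COMMONDB".
// On Windows the separator is '\' instead of '/'.
class ScriptDir {
    static String unixForm(String dbType, String databaseName, String schemaName) {
        return dbType + "/" + databaseName + "." + schemaName;
    }
}
```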
If the operating system on which the .zip file was created with the
BPMCreateDatabaseUpgradeUtilities command differs from the operating system
of the database server machine where the BPMGenerateUpgradeSchemaScripts
command is to be run, install JRE 1.6 and edit the JAVA_HOME parameter in the
BPMGenerateUpgradeSchemaScripts.bat or .sh shell script to point to the JRE
installation.
Once the SQL scripts have been generated, you can run them using one of these
methods:
v Use the upgradeSchema command that is located in the following directory:
– Linux UNIX DBType/database_name.schema_name
– Windows DBType\database_name.schema_name
v Open an SQL session and run the SQL scripts. For information about how to run
SQL scripts using an SQL session, refer to the topic "Executing SQL upgrade
scripts".
Options
-help
Examples
Generate SQL scripts and upgradeSchema scripts for the Common database:
v Linux UNIX BPMGenerateUpgradeSchemaScripts.sh WPRCSDB.COMMONDB
/MigrationSnapshots/ProcServer620
v Windows BPMGenerateUpgradeSchemaScripts.bat WPRCSDB.COMMONDB
c:\MigrationSnapshots\ProcServer620
Problem determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMGenerateUpgradeSchemaScripts and ends with .error.
Synopsis
BPMMigrate [options]
Options
-trace [fine|finer|finest|all]
Use the -trace option to generate detailed trace information. The granularity of
the trace generated must be specified as fine, finer, finest, or all. By default, the
trace information is written to the target_install_root/migrate/logs/
BPMMigrate.timestamp.log file.
-traceFile file_name
Specify the file where the trace information is written.
The -trace option and the -traceFile option must be used together.
Examples
To run the wizard with the most fine-grained trace activated: Linux UNIX
Windows
Synopsis
Description
Examples
Problem Determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMMigrateCluster and ends with .error.
The BPMMigrateCluster command creates a log when it runs that is written to the
snapshot_directory/logs/BPMMigrateCluster-profilename.time_stamp.log file,
where snapshot_directory is the value specified for the snapshot_directory parameter.
You can view the BPMMigrateCluster-profilename.time_stamp.log file with a text
editor.
Check the log file for any errors and rerun the BPMMigrateCluster command if
needed.
Synopsis
Description
The BPMMigrateProfile command migrates the configuration files that were copied
from a source profile specified by the snapshot_directory parameter and the
source_profile_name parameter to a target profile created with the
BPMCreateTargetProfile command.
Options
-username source_profile_user_name
Use this option to pass the source profile's administrative user name if the
source profile has security configured.
-password source_profile_password
Use this option to pass the source profile's administrative password if the
source profile has security configured.
-targetProfileName target_profile_name
Use this option to identify the target profile name created by the
BPMCreateTargetProfile command if the name of the target profile is different
from that of the source profile.
-javaoption JVM_Heap_Size
Use this option to set the JVM heap size configuration for the migration
process. If the BPMMigrateProfile command fails with an OutOfMemory
exception because of an unusually large number of applications, you can use
this parameter to adjust the JVM heap size. The default value used by the
command is 768 MB. The minimum value that can be specified is 256.
-requestTimeout SOAP_timeout_value
Use this option to adjust the timeout configuration if the default value fails
and the migration fails with SOAP timeout errors. The default value is 80
seconds. This option is applicable only for federated profiles that connect to
the deployment manager during migration.
-basePort basePort
Use this option to define the base port number to be used for assigning new
ports during the migration process. The initial request for a port will use the
base port number, and each subsequent request will be assigned the next
consecutive port number.
-replacePorts [true | false]
Use this option to specify how to map port values for virtual hosts and Web
container transport ports.
By default, do not replace the version 6.2.0, 6.1.2, or 6.1.0 port definitions
during migration. The source version's configuration is left alone and no
channels are deleted. The following four named channels are set to values that
are equivalent to the values set for the source version:
v WC_adminhost
v WC_defaulthost
v WC_adminhost_secure
v WC_defaulthost_secure
This option is supported for stand-alone profile migration only.
Note: This is temporary until all of the nodes in the environment are at the
version 7.5 level. When they are all at the 7.5 level, perform the following
actions:
1. Modify your administration scripts to use all of the Version 7.5 settings.
2. Use the convertScriptCompatibility command to convert your
configurations to match all of the Version 7.5 settings. Refer to the
convertScriptCompatibility command for more information.
-keepSourceDMgrEnabled [true | false]
This option is pertinent only to deployment manager profiles.
By default, when a source deployment manager profile is migrated to a target
deployment manager profile, the source deployment manager profile is
disabled. Disabling the source deployment manager by default ensures that the
source and target deployment managers are not started at the same time.
Caution: Use this parameter as true with care. WebSphere Application Server
Version 6.x deployment manager configurations normally are stopped and
disabled to prevent multiple deployment managers from managing the same
nodes. If you do not stop the Version 6.x or Version 7.0.x deployment manager
before you start using the Version 7.5 deployment manager, port conflicts
might occur when the second instance of the deployment manager is started.
Specifying true for this parameter means that any configuration changes made
in the old configuration during migration might not be migrated.
-keepAppDirectory
This is an optional parameter that specifies whether to install all applications
to the same directories in which they are currently located. The default is
false. If this parameter is specified as true, each individual application retains
its location.
Restriction: If this parameter is specified as true, the location is shared by the
existing WebSphere ESB and the new installation. If you keep the migrated
applications in the same locations as those of the previous version, the
following restrictions apply:
v The version 7.5 mixed-node support limitations must be followed. This
means that the following support cannot be used when invoking the
wsadmin command:
– Precompile JSP
– Use Binary Configuration
– Deploy EJB
v You risk losing the migrated applications unintentionally if you later delete
applications from these locations when administering your previously
existing installation (for example, uninstalling it).
-appInstallDirectory user_specified_directory
Use this optional parameter to pass the directory name to use when installing
all applications during migration. The default of profile_name\installedApps is
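The -basePort assignment rule above can be pictured as a simple counter: the first request receives the base port and every later request the next consecutive number. This sketch is illustrative only, not part of the command implementation:

```java
// Models the -basePort behavior: the first request receives the base
// port and each subsequent request the next consecutive port number.
class PortAssigner {
    private int next;
    PortAssigner(int basePort) { this.next = basePort; }
    int nextPort() { return next++; }
}
```

For example, starting from base port 9080, successive requests receive 9080, 9081, 9082, and so on.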
Examples
Migrate the configuration for the source profile named profile1 copied to the
/MigrationSnapshots/ProcServer700 directory
v Linux UNIX BPMMigrateProfile.sh /MigrationSnapshots/ProcServer700
profile1
v Windows BPMMigrateProfile.bat c:\MigrationSnapshots\ProcServer700
profile1
Migrate the configuration for the source profile named profile1 that has security
turned on that is copied to the /MigrationSnapshots/ProcServer700 directory
v Linux UNIX BPMMigrateProfile.sh -username admin -password pword
/MigrationSnapshots/ProcServer700 profile1
v Windows BPMMigrateProfile.bat -username admin -password pword
c:\MigrationSnapshots\ProcServer700 profile1
Migrate the configuration for the source profile named profile1 that is copied to
the /MigrationSnapshots/ProcServer700 directory to a target profile that was
created with a different name of profile2.
v Linux UNIX BPMMigrateProfile.sh -targetProfileName profile2
/MigrationSnapshots/ProcServer700 profile1
v Windows BPMMigrateProfile.bat -targetProfileName profile2
c:\MigrationSnapshots\ProcServer700 profile1
Migrate the configuration for the source profile named profile1 copied to the
/MigrationSnapshots/ProcServer700 directory with trace turned on to the finest
level and specify a non-default trace log file.
v Linux UNIX BPMMigrateProfile.sh -trace finest -traceFile
migrateProfileTrace.log /MigrationSnapshots/ProcServer700
v Windows BPMMigrateProfile.bat -trace finest -traceFile
migrateProfileTrace.log c:\MigrationSnapshots\ProcServer700
Errors are logged in the snapshot directory in a file whose name begins with
BPMMigrateProfile and ends with .error.
To turn on trace, use the -trace option. By default, the trace information is written
to the snapshot_directory/logs/BPMMigrateProfile.timestamp.log file. To specify an
alternative trace file, use the -traceFile parameter.
The BPMMigrateProfile command displays status to the screen while running. This
command also saves a more extensive set of logging information in the
BPMMigrateProfile.profilename.timestamp.log file located in the
snapshot_directory/logs directory. You can view the
BPMMigrateProfile.profilename.timestamp.log file with a text editor.
If the profile migration fails, check the logs to identify the problem and fix it. You
can then rerun the BPMMigrateProfile command.
Synopsis
BPMMigrationStatus [options]
Description
The BPMMigrationStatus command displays the status of all the migrations that are
in progress or complete on the current system. Each migration is uniquely
identified by the source install root, the source profile name, the snapshot directory
of the source profile, the target install root, and the target profile name. The
BPMMigrationStatus command is located in the INSTALL_ROOT/bin directory.
Options
-clean
Remove all the log file traces of migrations from the current system.
Examples
Clean up and remove all prior migration status information from the current
system:
v Linux UNIX BPMMigrationStatus.sh -clean
v Windows BPMMigrationStatus.bat -clean
Edit the generated XML file to mark which WebSphere Adapter instances to
update to version 7.0.x during runtime migration.
Edit the value in <update> from “false” to “true” to update a specific WebSphere
Adapter instance to version 7.5. Additionally, copy the version 7.5 RAR file of the
WebSphere Adapter being marked for update into the following directory:
INSTALL_ROOT/installableApps.
Note: You should set <update> to "true" for any application that embeds
WebSphere Adapters version 6.0.2 or previous versions of WebSphere Adapter for
SAP, including any such adapters deployed at Node or Cluster scope.
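As a sketch, the edited entry might look like the following fragment. Only the <update> element is confirmed by the text above; the surrounding element and attribute names are assumptions about the generated file's layout.

```xml
<!-- Hypothetical fragment of the generated XML file. Only <update> is
     described in the text; the surrounding layout is an assumption. -->
<adapter name="ExampleSAPAdapter">
  <!-- Changed from "false" to "true" to mark this instance for update -->
  <update>true</update>
</adapter>
```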
Synopsis
Description
All servers and node agents associated with the profile being extracted should be
stopped before invoking the BPMQueryDeploymentConfiguration command.
Options
-help
Provides the command usage.
Examples
The following example extracts the deployment configuration for profile 1 located
in the BPM620 source installation root directory to the directory
/MigrationSnapshots/ProcServer620/profiles/profile1.
v Linux UNIX BPMQueryDeploymentConfiguration.sh /opt/ibm/WebSphere/
ProcServer620 profile1 /MigrationSnapshots/ProcServer620
v Windows BPMQueryDeploymentConfiguration.bat "C:\Program
Files\IBM\WebSphere\ProcServer620" profile1 c:\MigrationSnapshots\
ProcServer620
Problem determination
Errors are logged in the snapshot directory in a file whose name begins with
BPMQueryDeploymentConfiguration and ends with .error.
Description
All servers and node agents associated with the profile being copied should be
stopped before invoking the BPMSnapshotSourceProfile command.
Options
-trace [fine|finer|finest|all]
Use the -trace option to generate detailed trace information. The granularity
of the trace generated must be specified as fine, finer, finest, or all. By
default, the trace information is written to the snapshot_directory/logs/
BPMSnapshotSourceProfile.timestamp.log file.
-traceFile file_name
Specify the file where the trace information is written.
-remoteMigration true
Use this option if it is a remote migration scenario.
-help
Provides the command usage.
Examples
Copy the configuration files for the profile named profile1 located in the BPM620
source installation root directory to the /MigrationSnapshots/ProcServer620
snapshot directory.
v Linux UNIX BPMSnapshotSourceProfile.sh /opt/ibm/WebSphere/
ProcServer620 profile1 /MigrationSnapshots/ProcServer620
v Windows BPMSnapshotSourceProfile.bat "C:\Program Files\IBM\WebSphere\
ProcServer620" profile1 c:\MigrationSnapshots\ProcServer620
Copy the configuration files for the profile named profile1 located in the
ProcServer620 source installation root directory to the /MigrationSnapshots/
ProcServer620 snapshot directory with trace turned on.
v Linux UNIX BPMSnapshotSourceProfile.sh -trace finest -traceFile
/snapshotTrace.log /opt/ibm/WebSphere/ProcServer620 profile1
/MigrationSnapshots/ProcServer620
v Windows BPMSnapshotSourceProfile.bat -trace finest -traceFile
/snapshotTrace.log "C:\Program Files\IBM\WebSphere\ProcServer620"
profile1 c:\MigrationSnapshots\ProcServer620
Errors are logged in the snapshot directory in a file whose name begins with
BPMSnapshotSourceProfile and ends with .error.
To turn on trace, use the -trace option. By default, the trace information is written
to the snapshot_directory/logs/BPMSnapshotSourceProfile.timestamp.trace file. To
specify an alternative trace file, use the -traceFile parameter.
Change the ulimit setting for the kernel in the bash shell profile script, which is
loaded at login time for the session. Set the ulimit on your Linux command shells
by adding the command to your shell profile script. The shell profile script is
usually found under your home directory. For example, to set the ulimit to 8192,
open the script in an editor:
cd ~
vi .bashrc
Then add the following line to the file:
ulimit -n 8192
Note: In order to run the ulimit command, you must have root privileges.
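As a minimal sketch of the steps above, the setting can also be appended to the profile script from the command line. The file name used here is a stand-in for your actual shell profile script.

```shell
# Sketch: persist the ulimit setting by appending it to the shell profile
# script. The file name below is a stand-in; use ~/.bashrc (or your shell's
# actual profile script) on a real system.
profile=./bashrc.example
echo 'ulimit -n 8192' >> "$profile"
# Display the line that was added
tail -n 1 "$profile"    # prints: ulimit -n 8192
```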
If the profile copying fails, check the logs to identify the problem. Fix the problem
and rerun the BPMSnapshotSourceProfile command.
The esAdmin command can list and delete all locks currently managed by the lock
manager. When listing locks, you can list all locks or a small subset that is filtered
based on the module, component, or method. This command can also be used to
release an active lock in a deadlock situation; after the lock is released, it is granted
to the next queued request.
Syntax
esAdmin -help
esAdmin listAll
esAdmin listLocks moduleName [componentName [methodName]]
esAdmin deleteLocks moduleName [componentName [methodName]]
esAdmin unlock lockId
Notes:
1 If security is enabled, you must supply a user ID (and its associated
password) with sufficient authority to perform the changes. Enter the user ID
and password using the -username and -password parameters.
Required parameters
hostName
Specifies the name of the server where the lock manager is running. The value
must be a string. If no value is given, the default value localhost is used.
soapPortNumber
Specifies the port used for the connection to the server. The value must be an
integer. If no value is given, the default value 8880 is used.
username
Required if administrative security is enabled. Specifies the user ID of a user
with sufficient authority to process the changes. If no value is given and
security is enabled, you are prompted to provide a user ID and password.
password
Required if administrative security is enabled. Specifies the password
associated with the user ID specified in the -username variable. If no value is
given and security is enabled, you are prompted to provide a user ID and
password.
moduleName
Specifies the name of the module that contains the component using event
sequencing.
componentName
Specifies the name of the component that is using event sequencing.
methodName
Specifies the name of the method on which event sequencing qualifiers have
been set.
lockId
Specifies the numeric ID of the lock you want to release. The value for this
parameter must be an integer.
Example
The following command returns a list of active and queued locks for the Order
module:
esAdmin listLocks Order
The following command releases lock 754830988. This command assumes that
security is enabled and that the port number is 9060 (instead of the default 8880).
esAdmin -username administrator1 -password adminpassword -p 9060 unlock 754830988
Purpose
Description
Parameters
-status
Displays information about the current bucket configuration, including the
active bucket setting and the bucket check interval (the frequency with which
the data store plug-in checks to determine which bucket is active).
-change
Swaps the buckets so the active bucket becomes inactive and the inactive
bucket becomes active. The inactive bucket must be empty before you can use
this option.
Examples
Purpose
Description
The eventpurge command deletes events from the event database. You can delete
all events from the event database, or you can limit the deletion to events meeting
certain criteria.
Parameters
-seconds seconds
The minimum age of events you want deleted. The seconds value must be an
integer. Only events older than the specified number of seconds are deleted.
This parameter is required if you do not specify the -end parameter.
-end end_time
The end time of the group of events you want to delete. Only events generated
before the specified time are deleted. The end_time value must be specified in
the XML dateTime format (CCYY-MM-DDThh:mm:ss). For example, noon on 1
January 2006 in Eastern Standard Time would be 2006-01-01T12:00:00-05:00.
For more information about the dateTime data type, refer to the XML schema
at www.w3.org.
This parameter is required if you do not specify the -seconds parameter.
-group event_group
The event group from which to delete events. The event_group value must be
the name of an event group defined in the Common Event Infrastructure
configuration. This parameter is optional.
-severity severity
The severity of the events you want deleted. The severity value must be an
integer; only events whose severity is equal to the value you specify are
deleted. This parameter is optional.
-extensionname extension_name
The extension name of the events you want included in the deletion. Use this
parameter to restrict the deletion to events of a specific type. Only events
whose extensionName property is equal to extension_name are deleted. This
parameter is optional.
-start start_time
The beginning time of the group of events you want to delete. Only events
generated after the specified time are deleted. The start_time value must be
specified in the XML dateTime format (CCYY-MM-DDThh:mm:ss). This parameter
is optional.
Example
The following example deletes all events in the All events group whose severity is
20 (harmless) and that were generated more than 10 minutes ago.
eventpurge -group "All events" -severity 20 -seconds 600
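The -end value must be a valid XML dateTime. As a hedged sketch, a UTC timestamp in that form can be generated with the standard date command; the eventpurge invocation is only echoed for illustration, since eventpurge is a product utility.

```shell
# Sketch: build an XML dateTime (CCYY-MM-DDThh:mm:ss) value in UTC for the
# -end parameter. The trailing Z denotes UTC; an offset such as -05:00 is
# equally valid XML dateTime syntax.
end_time=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Echo the command rather than running it; eventpurge is a product utility.
echo "eventpurge -end $end_time -severity 20"
```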
Purpose
Syntax
genMapper -javaOutput java_output_dir -scdlOutput sca_output_dir -name java_class_file
Parameters
-javaOutput java_output_dir
The name of the directory where the system places the generated Java classes.
-scdlOutput sca_output_dir
The name of the directory where the system places the generated bridge
component.
-name java_class_file
The fully qualified file name that contains the Java class or interface for which
the bridge component is generated.
Restrictions
The input can contain a Java class, a stateless session bean, or a Java interface.
If the input is a Java class, the class can contain only one interface and the
interface cannot extend other interfaces.
Example
After the command completes, listing the directory c:\customer shows the
following files:
Customermapper.component
Customermapper.wsdl
Copy the files from both generated directories to the directory of any module that
requires the bridge component. You can then either import that bridge component
into IBM Integration Designer and wire the reference and interface to the bridge
component or use the serviceDeploy command to create an enterprise archive
(EAR) file to deploy to the server.
Purpose
The default log file is the install_root/logs/installver.log file. You can redirect the
output using the -log parameter and an argument. Use the -log parameter
without the file argument to generate the default log file.
The utility parses the bill-of-materials file for each component to find the correct
checksum value for each file in the component. Each product file has an entry in
some bill-of-materials file. The entry for a product file lists the product file path
and the correct checksum value.
<componentfiles componentname="activity">
<file>
<relativepath>properties/version/activity.component</relativepath>
<checksum>1a20dc54694e81fccd16c80f7c1bb6b46bba8768</checksum>
<permissions>644</permissions>
<installoperation>remove</installoperation>
</file>
<file>
<relativepath>lib/activity.jar</relativepath>
<checksum>2f056cc01be7ff42bb343e962d26328d5332c88c</checksum>
<permissions>644</permissions>
<installoperation>remove</installoperation>
</file>
</componentfiles>
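The 40-hexadecimal-digit <checksum> values above look like SHA-1 digests. Assuming that algorithm (our assumption, not a documented detail of the utility), a file's digest can be computed for manual comparison with a standard tool:

```shell
# Sketch: compute a SHA-1 digest like the 40-hex-digit <checksum> values
# shown above (the SHA-1 assumption is ours, not the manual's).
printf 'abc' > checksum.example
sha1sum checksum.example | cut -d' ' -f1
# prints: a9993e364706816aba3e25717850c26c9cd0d89d
rm -f checksum.example
```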
The installver_bpm command file is located in the bin directory of the installation
root directory:
v Linux UNIX install_root/bin/installver_bpm.sh
v Windows install_root\bin\installver_bpm.bat
Change directories to the bin directory to start the installver_bpm utility from the
command line. The utility runs on any supported operating system except for
z/OS. For example, use the following command to start the utility on a Linux
system or a UNIX system:
./installver_bpm.sh
Use the following command syntax to automatically check the bill of materials
against the installed file system.
v Linux UNIX install_root/bin/installver_bpm.sh
v Windows install_root\bin\installver_bpm.bat
Use the following syntax to create and compare an inventory of configured files to
the currently installed files.
Create an inventory list of the files that are currently installed in the installation
root directory:
v Linux UNIX ./installver_bpm.sh -createinventory [path/file_name],
such as ./installver_bpm.sh -createinventory /tmp/system.inv
v Windows installver_bpm.bat -createinventory [path\file_name], such as
installver_bpm.bat -createinventory C:\temp\system.inv
Compare the inventory list to files that are currently installed in the installation
root directory:
v Linux UNIX ./installver_bpm.sh -compare /path/file_name
v Windows installver_bpm.bat -compare path\file_name
After creating an inventory list, use the -compare parameter to compare the list
to the actual files that exist in the system at the time of the comparison.
-exclude file1;file2;file3;...
Excludes files from the comparison.
Use a semicolon (;) or a colon (:) to delimit file names.
-help
Displays usage information.
-include file1;file2;file3; ...
Includes files in the comparison and excludes all other files.
Use a semicolon (;) or a colon (:) to delimit file names.
-installroot directory_name
Overrides the default installation root directory.
-log [file_path_and_file_name_of_log_file]
Redirects output to the specified log file. If you use the -log parameter
without an argument, output goes to the default install_root/logs/
installver.log file.
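Because the -exclude and -include file lists are semicolon-delimited, quote them on Linux and UNIX shells so that the shell does not treat ';' as a command separator. A minimal sketch:

```shell
# Sketch: quote a semicolon-delimited list so the shell passes it through
# as a single argument instead of splitting it into separate commands.
excludes='file1;file2;file3'
echo "$excludes"    # prints: file1;file2;file3
# An unquoted file1;file2;file3 would instead be parsed as three commands.
```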
Example
The following examples show issues that might occur when you run the
installver_bpm command-line utility to compare checksums.
Ignore entries for checksum mismatches that you introduce on purpose, such as
might occur when you extend a component
Some messages indicate deviations from the normally expected result, but are not
indicators of a serious issue:
I CWNVU0360I: [ivu] The following bill of materials issue is found for component
nullvaluesample:
Hash must not be null or an empty string.
Overlapped files are either a potential product issue or potential tampering with
the IBM provided bill of materials
I CWNVU0470I: [ivu] Starting to analyze: overlapbinarycomponentsample
W CWNVU0422W: [ivu] The following file is overlapped: lib/binaryTest.jar
W CWNVU0425W: [ivu] The overlap is caused by: _binarycomponentsample
I CWNVU0390I: [ivu] Component issues found : 1
I CWNVU0480I: [ivu] Done analyzing: overlapbinarycomponentsample
If you see any messages with the following format, contact IBM support:
W CWNVU0280W: [ivu] Component mismatch: expected ... but found ...
For current information available from IBM Support on known problems and their
resolution, see this IBM Support page.
IBM Support has documents that can save you time gathering information needed
to resolve this problem. Before opening a PMR, see this IBM Support page.
If you do not see a known installation problem that resembles yours, or if the
information provided does not solve your problem, contact IBM support for
further assistance.
After verifying your installation, you can create profiles or deploy an application
on an existing profile.
The installver_bpm command file is located in the bin directory of the installation
root directory:
v Linux UNIX install_root/bin/installver_bpm.sh
v Windows install_root\bin\installver_bpm.bat
Change directories to the bin directory to start the installver_bpm utility from the
command line.
To check the bill of materials against the installed file system, perform the
following steps.
Procedure
v To compare the checksum of product files to the correct checksum in the
bill-of-material files, type the following command:
– Linux UNIX install_root/bin/installver_bpm.sh
– Windows install_root\bin\installver_bpm.bat
v To compare checksums and display trace results, type the following command:
– Linux UNIX ./installver_bpm.sh -trace
– Windows installver_bpm.bat -trace
v To display information about how to use the installver_bpm command-line
utility, type the following command:
– Linux UNIX ./installver_bpm.sh -help
– Windows installver_bpm.bat -help
v To compare checksums and ignore the list of files to exclude, type the following
command:
Results
When you issue one of the checksum commands from the install_root/bin
directory, the status of the command is displayed on the terminal console.
The messages report the total number of issues found. If the issue count is zero, all
of the components exist and no problems exist. The installver_bpm utility logs the
results of the command to the install_root/logs/installver.log file if you use
the -log parameter without specifying a file name for the log.
You can redirect the output using the -log parameter and an argument. The
directory that you specify must already exist. For example: ./installver_bpm.sh
-log /tmp/waslogs/my_installver.log
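Since the directory given to -log must already exist, create it before running the utility. A hedged sketch follows; the installver_bpm invocation is echoed rather than executed because it is a product command.

```shell
# Sketch: the target directory for -log must exist before the utility runs.
logdir=/tmp/waslogs.example
mkdir -p "$logdir"
# Product command shown for illustration only:
echo "./installver_bpm.sh -log $logdir/my_installver.log"
```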
Example
The following command produces this example, which shows the results of
comparing the installed product against the product bill of materials.
v Linux UNIX ./installver_bpm.sh
v Windows installver_bpm.bat
Install the product before comparing checksums and using exclusion properties.
The installver_bpm command file is located in the bin directory of the installation
root directory:
v Linux UNIX On Linux and UNIX platforms: install_root/bin/
installver_bpm.sh
v Windows On Windows platforms: install_root\bin\installver_bpm.bat
Change directories to the bin directory to start the installver_bpm utility from the
command line.
Procedure
v To exclude all the files within one or more components from the comparison,
type the following command:
– Linux UNIXOn Linux and UNIX platforms: ./installver_bpm.sh
-excludecomponent comp1;comp2;comp3;...
– Windows On Windows platforms: installver_bpm.bat -excludecomponent
comp1;comp2;comp3;...
Linux UNIX For example, you might exclude the prereq.wccm
component to avoid known but acceptable issues in the component:
./installver_bpm.sh -log -excludecomponent prereq.wccm
The resulting messages show the exclusion:
I CWNVU0160I: [ivu] Verifying.
I CWNVU0170I: [ivu] The installation root directory is E:\WPS61\
I CWNVU0300I: [ivu] The total number of user excluded files found is 38.
I CWNVU0300I: [ivu] The total number of IBM excluded files found is 82.
I CWNVU0185I: [ivu] Searching component directory for file listing: files.list
I CWNVU0460I: [ivu] The utility is running.
I CWNVU0260I: [ivu] The total number of components found is: 441
I CWNVU0270I: [ivu] Gathering installation root data.
I CWNVU0290I: [ivu] Starting the verification for 439 components.
...
I CWNVU0400I: [ivu] Total issues found : 0
I CWNVU0340I: [ivu] Done.
v To exclude certain files from the comparison, type the following command:
– Linux UNIX On Linux and UNIX platforms: install_root/bin/
installver_bpm.sh -exclude fn1;fn2;fn3
– Windows On Windows platforms: install_root\bin\installver_bpm.bat
-exclude fn1;fn2;fn3
If the two files were in the comparison, they would be in the list and the count
would be 625, as in the previous example.
Tip: The highlighted line in the example is reserved for excluded files listed in
the user template file, as described in the next step. The highlighted line does
not count files that you list in the installver_bpm command line with the
-exclude parameter.
v Edit the ivu_user.template file to compare checksums and exclude certain files
from the comparison.
1. List files to exclude in the template file.
The ivu_user.template file is located in the properties directory of the
default profile, which, in this case, is a deployment manager profile.
Tip: Do not use quotation marks or double quotation marks to delimit a file
name.
3. Use the template file to exclude files from the comparison:
For example:
installver_bpm.bat -log
...
I CWNVU0430I: [ivu] The following file is missing:
web/configDocs/wssecurity/generator-binding.html
Results
When you run one of the checksum commands from the install_root/bin
directory, the status of the command is displayed on the terminal console or in a
log file.
You can use inclusion properties to specify individual files and components.
By default, IBM includes all files in the checksum comparison except for the IBM
excluded files. The displayed output will be similar to the following:
I CWNVU0160I: [ivu] Verifying.
I CWNVU0170I: [ivu] The installation root directory is E:\WPS61\
I CWNVU0300I: [ivu] The total number of user excluded files found are 0.
I CWNVU0300I: [ivu] The total number of IBM excluded files found are 82.
I CWNVU0185I: [ivu] Searching component directory for file listing: files.list
I CWNVU0460I: [ivu] The utility is running.
I CWNVU0260I: [ivu] The total number of components found is: 441
I CWNVU0270I: [ivu] Gathering installation root data.
I CWNVU0460I: [ivu] The utility is running.
I CWNVU0290I: [ivu] Starting the verification for 441 components.
...
Several methods are provided to include only certain files in the comparison.
The installver_bpm command file is located in the bin directory of the installation
root directory:
v Linux UNIX On Linux and UNIX platforms: install_root/bin/
installver_bpm.sh
v Windows On Windows platforms: install_root\bin\installver_bpm.bat
Change directories to the bin directory to start the installver_bpm utility from the
command line.
To compare specific file and component checksums, perform the following steps.
Results
When you issue one of the checksum commands from the install_root/bin
directory, the status of the command is displayed on the terminal console or in a
log file.
The profile defines the runtime environment and includes all of the files that the
server processes can change during runtime.
The command file is located in the install_root/bin directory. The command file
is a script named manageprofiles.sh for Linux and UNIX platforms or
manageprofiles.bat for Windows platforms.
The manageprofiles command-line utility creates a log for every profile that it
creates, deletes, or augments. The logs are in the following directory, depending on
platform:
v Linux UNIX install_root/logs/manageprofiles
v Windows install_root\logs\manageprofiles
Syntax
Note: Using profiles that have been unaugmented (-unaugment parameter) is not
supported.
v Deleting a profile (-delete parameter).
Follow the instructions in Deleting profiles using the manageprofiles
command-line utility.
v Deleting all profiles (-deleteAll parameter)
v Listing all profiles (-listProfiles parameter)
v Getting the name of an existing profile from its path (-getName parameter)
v Getting the path of an existing profile from its name (-getPath parameter)
For detailed help including the required parameters for each of the tasks
accomplished with the manageprofiles command-line utility, use the -help
parameter. The following is an example of using the help parameter with the
manageprofiles command-line utility -augment parameter on Windows operating
systems: manageprofiles.bat -augment -help. The output specifies which
parameters are required and which are optional.
Parameters
Depending on the operation that you want to perform with the manageprofiles
command-line utility, you might need to provide one or more of the parameters
described in manageprofiles parameters. The Profile Management Tool validates
that the required parameters are provided and the values entered for those
parameters are valid. Be sure to type the name of the parameters with the correct
case, because the command line does not validate the case of the parameter name.
Incorrect results can occur when the parameter case is not typed correctly.
Command output
manageprofiles parameters
Use the following parameters with the manageprofiles command-line utility for
WebSphere ESB.
Before you begin using the manageprofiles command-line utility, make sure that
you understand all prerequisites for creating and augmenting profiles. For more
information about prerequisites, see Prerequisites for creating or augmenting
profiles. For more information about creating and augmenting profiles, see
Attention: When creating a WebSphere ESB profile, use only the parameters that
are documented in the information center for WebSphere ESB.
Linux -enableService true | false
Enables the creation of a Linux service. Valid values include true or false. The
default value for this parameter is false.
No. Use this parameter when creating profiles only. Do not supply this
parameter when augmenting an existing profile.
The -importPersonalCertKS
parameter is mutually exclusive with
the -personalCertDN parameter. If
you do not specifically create or
import a personal certificate, one is
created by default.
-importPersonalCertKSType keystore_type
Specifies the type of the keystore file that you specify on the
-importPersonalCertKS parameter. Values might be JCEKS, CMSKS, PKCS12,
PKCS11, and JKS. However, this list can change based on the provider in the
java.security file.
When you specify any of the parameters that begin with -importPersonal, you
must specify them all.
-importPersonalCertKSPassword keystore_password
Specifies the password of the keystore file that you specify on the
-importPersonalCertKS parameter.
When you specify any of the parameters that begin with -importPersonal, you
must specify them all.
For example:
-profilePath profile_root
where
WS_WSPROFILE_DEFAULT_PROFILE_HOME
is defined in the
wasprofile.properties file in the
install_root/properties directory.
-signingCertDN distinguished_name
Specifies the distinguished name of the root signing certificate that you create
when you create the profile. Specify the distinguished name in quotation
marks. This default personal certificate is located in the server keystore file. If
you do not specifically create or import a root signing certificate, one is
created by default. See the -signingCertValidityPeriod parameter and the
-keyStorePassword parameter.
No. The -signingCertDN parameter is mutually exclusive with the
-importSigningCertKS parameter.
-signingCertValidityPeriod validity_period
An optional parameter that specifies the amount of time in years that the root
signing certificate is valid. If you do not specify this parameter with the
-signingCertDN parameter, the root signing certificate is valid for 20 years.
No.
Linux
webServerType=IHS: webServerInstallPath defaulted to /opt/IBM/HTTPServer
webServerType=IIS: webServerInstallPath defaulted to n\a
webServerType=SUNJAVASYSTEM: webServerInstallPath defaulted to /opt/sun/webserver
webServerType=DOMINO: webServerInstallPath defaulted to
webServerType=APACHE: webServerInstallPath defaulted to
webServerType=HTTPSERVER_ZOS: webServerInstallPath defaulted to n/a
Solaris
webServerType=IHS: webServerInstallPath defaulted to /opt/IBM/HTTPServer
webServerType=IIS: webServerInstallPath defaulted to n\a
webServerType=SUNJAVASYSTEM: webServerInstallPath defaulted to /opt/sun/webserver
webServerType=DOMINO: webServerInstallPath defaulted to
webServerType=APACHE: webServerInstallPath defaulted to
webServerType=HTTPSERVER_ZOS: webServerInstallPath defaulted to n/a
Windows
webServerType=IHS: webServerInstallPath defaulted to C:\Program Files\IBM\HTTPServer
webServerType=IIS: webServerInstallPath defaulted to C:\
webServerType=SUNJAVASYSTEM: webServerInstallPath defaulted to C:\
webServerType=DOMINO: webServerInstallPath defaulted to
webServerType=APACHE: webServerInstallPath defaulted to
webServerType=HTTPSERVER_ZOS: webServerInstallPath defaulted to n/a
-webServerName webserver_name
The name of the Web server. The default value for this parameter is
webserver1.
No. Use this parameter when creating profiles only. Do not supply this
parameter when augmenting an existing profile.
-webServerOS webserver_operating_system
The operating system where the Web server resides. Valid values include:
windows, linux, solaris, aix, hpux, os390, and os400. Use this parameter with
the webServerType parameter.
No. Use this parameter when creating profiles only. Do not supply this
parameter when augmenting an existing profile.
-webServerPluginPath webserver_pluginpath
The path to the plug-ins that the Web server uses. The default value for this
parameter is install_root/plugins.
No. Use this parameter when creating profiles only. Do not supply this
parameter when augmenting an existing profile.
Synopsis
Description
The migrateBSpaceData command-line utility migrates the Business Space data for
an installation with the Business Space server host name specified by hostname,
Business Space server port number specified by SOAP Port Number, Business Space
administrative user ID specified by username, and password specified by password.
Note: In a stand-alone environment, SOAP Port Number refers to the SOAP port
number of your Business Space server, while in a network deployment
environment, SOAP Port Number refers to the SOAP port number of any cluster
member in the cluster.
Required parameters
-host host_name
Specifies the host name.
-port SOAP_Port_Number
Specifies the SOAP port number.
-user user_name
Specifies the administrator user name.
-password password
Specifies the administrator password.
Examples
If you are migrating from version 6.x to version 7.5, use the following examples.
v In a stand-alone environment, to migrate Business Space data with a Business
Space server host 'localhost', port '8880', Business Space administrative user ID
'admin' and password 'admin', use one of the following commands:
– Linux UNIX
migrateBSpaceData.sh -host localhost -port 8880 -user admin -password admin -server server1
-node leoNode01
– Windows
migrateBSpaceData.bat -host localhost -port 8880 -user admin -password admin -server server1
-node leoNode01
v In an ND environment, to migrate Business Space data with a Business Space
server host 'localhost', port '8880', Business Space administrative user ID 'admin'
and password 'admin', use one of the following commands:
– Linux UNIX
migrateBSpaceData.sh -host localhost -port 8880 -user admin -password admin -cluster cluster1
– Windows
migrateBSpaceData.bat -host localhost -port 8880 -user admin -password admin -cluster cluster1
If you are migrating from version 7.0.x to version 7.5, use the following examples,
for both stand-alone and ND environments.
v Use the script for your operating system to copy the Business Space data from
V7.0.x to v7.5:
– Windows migrateBSpaceData.bat -host localhost -port 8880 -user admin
-password admin -dbcopy
– Linux UNIX migrateBSpaceData.sh -host localhost -port 8880
-user admin -password admin -dbcopy
v Use the script for your operating system to upgrade the Business Space data
from V7.0.x to v7.5:
Purpose
The serviceDeploy command builds an .ear file from a .jar or .zip file that contains
service components.
Roles
Syntax
serviceDeploy inputarchive
[-workingDirectory temppath]
[-outputApplication outputpathname.ear]
[-classpath jarpathname;rarpathname;...]
[-cleanStagingModules] [-freeform] [-help] [-ignoreErrors]
[-keep] [-novalidate] [-uniqueCellID]
If you do not specify -outputApplication, the output application defaults to
inputarchiveApp.ear.
Parameters
-inputarchive
A required, positional parameter that specifies the .jar, .zip or .ear file that
contains the application to be deployed. If the command is not issued from the
path in which the file resides, this must be the full path for the file. The .zip
file can be either a nested archive or an Eclipse Project Interchange format file.
-classpath
An optional parameter that specifies the locations of required resource (.jar
and .rar) files. The path to each file should be a fully qualified path,
separated by semicolons (;) with no spaces.
-cleanStagingModules
An optional parameter that specifies whether to delete staging modules within
Inputs
The following file types can be used as input to the serviceDeploy command:
jar The most useful file type for the simplest applications. The resulting .ear
file contains a single .jar file and any needed generated staging modules.
The .jar file must contain the service.module file.
zip (Project Interchange)
You can export from Integration Designer an archive file in project
interchange format. This format is unique to the Eclipse development. The
exported zip file must contain exactly one project with the service.module
file. The resulting .ear file contains any number of modules, depending
upon exactly what is in the project interchange.
zip You can create a zip file containing .jar files, .war files, and .rar files.
Output
Exceptions
N/A
Synopsis
The usage differs depending on the database type and whether the command is
run in interactive mode where it prompts for its parameters or non-interactive
mode where the parameters are provided on the command line:
upgradeSchema
Database Types
v DB2: DB2 Universal Database (for all operating systems except z/OS and i5/OS)
v Oracle: Oracle Database
v SQLServer: Microsoft SQL Server
The database_user_id parameter specifies the database user for the database.
The database_password parameter specifies the database password for the database.
The database_host_name parameter specifies the host name of the system where the
database exists.
For the database that is used, the database user that is configured for the data
source must be authorized to create and alter tables and create and drop indexes
and views. For more information, refer to Databases.
Any database that is accessed by a migrated server needs to have its schema
updated before you start the server. In the case of a cluster, any database that is
accessed by any of the migrated cluster members needs to have its schema
updated before you start any of the cluster members.
Make sure that the server or, if applicable, all servers in the cluster remain
stopped while you run this script. Do not start them after the migration wizard
or scripts have run until the database upgrade is complete.
If this script fails, there is no rollback possibility, so you must back up your
database before running the script. However, if the script is restarted, it will
attempt to continue migrating the data from the point at which it failed.
If this step is not completed before you start the target server, and if the
configuration supports automatic schema updates, the update will be performed as
part of server startup.
Examples
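The example bodies did not survive extraction here. In interactive mode, as described above, the script is started without parameters and prompts for each value in turn; running it from the profile_root/bin directory is an assumption in this sketch:

```
cd profile_root/bin
upgradeSchema
```

In non-interactive mode, the database type, user ID, password, and host name described above are supplied on the command line instead.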
The IVT program scans the SystemOut.log file for errors and verifies core
functionality of the profile.
Important: For stand-alone profiles, the IVT also performs a System Health check
and generates a snapshot report of the overall health of your system. This report is
included in the IVT log file. You can view this report to check the status of the
application servers, nodes, deployment environments, messaging engines and their
queues, databases, system applications, and failed events on your system. The
status can be running, stopped, or unavailable. Ensure that for your stand-alone
profile, all components have the status of running.
You can start the IVT program from the command line or from the First steps
console for the profile.
The location of the installation verification test script for a profile is the
profile_root/bin directory. The script file name is:
v AIX Linux Solaris wbi_ivt.sh
v Windows wbi_ivt.bat
Parameters
Important: The -username and -password parameters are optional, but you are
prompted for them if security has been enabled.
Logging
Example
The following examples test the server1 process in the profile01 profile on the
myhost system using default_host on port 9081.
Windows
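The command lines themselves did not survive extraction. A sketch of what the Windows invocation would look like, assuming the script accepts the server, profile, host, and port values as arguments in the style of the standard WebSphere ivt script (verify the exact argument order against your installation):

```
wbi_ivt.bat server1 profile01 -p 9081 -host myhost
```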
To open the information center table of contents to the location of this reference
information, click the Show in Table of Contents button on your information
center border.
Many of the command reference pages contain examples. Because these are
examples, they often show the use of explicit values for things like user names,
passwords, and server names. Be sure to use the appropriate values for your
environment when running these commands.
addNodeToDeploymentEnvDef command
Use the addNodeToDeploymentEnvDef command to add a node to an existing
deployment environment definition.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment that you are adding the
node to.
-nodeRuntime name_of_node_runtime_capabilities
Note: If you are adding the node to a single cluster topology pattern, you
must set the value for -topologyRole to ADT. Failure to do so will result in an
exception error. Deployment environment topology patterns are specified when
you create the deployment environment using either the
createDeploymentEnvDef command or the Deployment Environment
Configuration wizard.
-nodeName name_of_node
Specifies the name of the node you are adding.
-serverCount number_of_servers_on_node
Optional parameters
None.
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
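A sketch of an invocation using the parameters described above. The deployment environment and node names are illustrative only, and the -nodeRuntime parameter is omitted here because its value depends on the runtime capabilities installed on your node:

```
AdminTask.addNodeToDeploymentEnvDef('[-topologyName myDepEnv
-nodeName node01 -serverCount 1]')
AdminConfig.save()
```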
addSCMConnectivityProvider command
Use the addSCMConnectivityProvider command to add a Service Connectivity
Management (SCM) connectivity provider.
Use the following command to list all the Service Connectivity Management
administrative commands.
v Using JACL:
$AdminTask help SCMAdminCommands
v Using Jython:
print AdminTask.help('SCMAdminCommands')
Syntax
$AdminTask addSCMConnectivityProvider {-name name
[-node nodeName -server serverName | -cluster clusterName]
[-proxyHostHTTP httpHost] [-proxyPortHTTP httpPort]
[-proxyHostHTTPS httpsHost] [-proxyPortHTTPS httpsPort]
[-description description] [-contact contact] [-organization organization]
[-location location] [-authAlias authAlias] [-repertoire repertoire]}
Purpose
Parameters
-name connectivityProviderName
The name of the connectivity provider, as a string. This must be unique within
the cell. An exception is thrown if the name already exists. The name,
description, contact, organization and location will be visible to users of the
Service Federation Management console.
-node nodeName
A parameter that specifies the name of the node hosting the server to which
the proxy gateway module for a group proxy should be deployed. To add a
server as a connectivity provider, you must specify both a server and a node.
-server serverName
A parameter that specifies the name of the server to which the proxy gateway
module for a group proxy should be deployed. To add a server as a
connectivity provider, you must specify both a server and a node.
-cluster clusterName
A parameter that specifies the name of the cluster to which the proxy gateway
module for a group proxy should be deployed. To add a cluster as a
connectivity provider, you must specify just the cluster name.
-proxyHostHTTP httpHost
A parameter that specifies the host name that will be returned for the endpoint
of an insecure proxy target. This should be the host that web service clients in
another domain will use to access the proxy, taking into account web servers
and other network components.
-proxyPortHTTP httpPort
A parameter that specifies the port that will be returned for the endpoint of an
insecure proxy target. This should be the port that web service clients in
another domain will use to access the proxy, taking into account web servers
and other network components.
-proxyHostHTTPS httpsHost
A parameter that specifies the host name that will be returned for the endpoint
Examples
Using JACL:
$AdminTask addSCMConnectivityProvider {-name myProvider -node cpNode
-server server1 -proxyHostHTTP server1.example.com
-proxyPortHTTP 9080 -proxyHostHTTPS server1.example.com
-proxyPortHTTPS 9443}
Using Jython:
AdminTask.addSCMConnectivityProvider('[-name myProvider -node cpNode
-server server1 -proxyHostHTTP server1.example.com
-proxyPortHTTP 9080 -proxyHostHTTPS server1.example.com
-proxyPortHTTPS 9443]')
Using JACL:
$AdminTask addSCMConnectivityProvider {-name myScalableProvider
-cluster cpCluster -proxyHostHTTP webserver.example.com
-proxyPortHTTP 80 -proxyHostHTTPS webserver.example.com
-proxyPortHTTPS 443}
Using Jython:
AdminTask.addSCMConnectivityProvider('[-name myScalableProvider
-cluster cpCluster -proxyHostHTTP webserver.example.com
-proxyPortHTTP 80 -proxyHostHTTPS webserver.example.com
-proxyPortHTTPS 443]')
The following example demonstrates the use of the optional parameters to specify
additional information that will appear in the Service Federation Management
console and for connecting to the service registry securely:
Using JACL:
$AdminTask addSCMConnectivityProvider {-name myScalableProvider
-cluster cpCluster -proxyHostHTTP webserver.example.com
-proxyPortHTTP 80 -proxyHostHTTPS webserver.example.com
-proxyPortHTTPS 443 -description "My Connectivity Provider"
-contact "Contact Name" -organization "Owning Organization"
-location "ESB location" -authAlias REGISTRY_AUTH_ALIAS
-repertoire REGISTRY_SSL_CONFIG}
Using Jython:
AdminTask.addSCMConnectivityProvider('[-name myScalableProvider
-cluster cpCluster -proxyHostHTTP webserver.example.com
-proxyPortHTTP 80 -proxyHostHTTPS webserver.example.com
-proxyPortHTTPS 443 -description "My Connectivity Provider"
-contact "Contact Name" -organization "Owning Organization"
-location "ESB location" -authAlias REGISTRY_AUTH_ALIAS
-repertoire REGISTRY_SSL_CONFIG]')
BPMExport command
This command exports a process application snapshot from Process Center.
Purpose
Use the BPMExport command in connected mode from a Process Center server to
export a process application snapshot. The exported snapshot is saved as a .twx
file; you can import it into another Process Center server.
Example
v Jacl example
wsadmin -conntype SOAP -port 4080 -host ProcessCenterServer01.mycompany.com -user admin -password admin
$AdminTask BPMExport {-containerAcronym BILLDISP -containerSnapshotAcronym SS2.0.1 -containerTrackAcronym Main -outputFile C:\processApps\BILLDISP201.twx}
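A Jython form of the same export, mirroring the Jacl example above:

```
wsadmin -conntype SOAP -port 4080 -host ProcessCenterServer01.mycompany.com -user admin -password admin
AdminTask.BPMExport('[-containerAcronym BILLDISP -containerSnapshotAcronym SS2.0.1 -containerTrackAcronym Main -outputFile C:\processApps\BILLDISP201.twx]')
```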
BPMImport command
This command imports a process application into the Process Center.
Purpose
Use the BPMImport command in connected mode from a Process Center server to
import a process application that was exported from a different Process Center
server.
Note: In a network deployment environment, the input file is read from the
machine on which the connected server is running. If you want to access the file
from another machine, establish a remote wsadmin session from the current
machine to the server on the machine where the file is stored.
Parameters
-inputFile inputFilePath
A required parameter that identifies the absolute path for the exported file (a
.twx file) you are importing.
The following example illustrates how to import the BILLDISP.twx file into the
Process Center Server. In the example, the user establishes a SOAP connection to
the Process Center server.
v Jython example
wsadmin -conntype SOAP -port 4080 -host ProcessCenterServer01.mycompany.com -user admin -password admin
AdminTask.BPMImport('[-inputFile C:\processApps\BILLDISP.twx]')
v Jacl example
wsadmin -conntype SOAP -port 4080 -host ProcessCenterServer01.mycompany.com -user admin -password admin
$AdminTask BPMImport {-inputFile C:\processApps\BILLDISP.twx}
configEventServiceDB2DB command
Use the configEventServiceDB2DB command to configure the Common Event
Infrastructure using a DB2 database.
Purpose
Parameters
- createDB
The command generates the DDL database scripts and creates the database
when this parameter is set to true. The command only generates the DDL
database scripts when this parameter is set to false. To create the database, the
current server must be already configured to run the database commands. The
default value is false if not specified.
- outputScriptDir
Optional database script output directory. When this parameter is specified, the
command generates the event service database scripts in the specified
directory. If the specified directory does not contain a full path, the command
creates the specified directory in profile_root/bin. The default database script
output directory is profile_root/databases/event/node/server/dbscripts/dbtype if
this parameter is not specified.
- nodeName
The name of the node that contains the server where the event service data
source should be created. If this parameter is specified, then the serverName
parameter must be set. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
The name of the server where the event service data source should be created.
If this parameter is specified without the nodeName parameter, the command
will use the node name of the current WebSphere profile. You must not specify
this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service data source should be created.
Sample
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
AdminTask.configEventServiceDB2DB('[-interactive]')
v Using Jython list:
AdminTask.configEventServiceDB2DB(['-interactive'])
configEventServiceDB2ZOSDB command
Use the configEventServiceDB2ZOSDB command to configure the Common Event
Infrastructure using a DB2 for z/OS database.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- createDB
The command generates the DDL database scripts and creates the database
when this parameter is set to true. The command only generates the DDL
database scripts when this parameter is set to false. To create the database, the
current server must be already configured to run the database commands. The
default value is false if not specified.
- overrideDataSource
When this parameter is set to true, the command removes any existing event
service data source at the specified scope before creating a new one. When this
Sample
configEventServiceOracleDB command
Use the configEventServiceOracleDB command to configure the Common Event
Infrastructure using an Oracle database.
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- createDB
The command generates the DDL database scripts and creates the database
when this parameter is set to true. The command only generates the DDL
database scripts when this parameter is set to false. To create the database, the
current server must be already configured to run the database commands. The
default value is false if not specified.
- outputScriptDir
Optional database script output directory. When this parameter is specified, the
command generates the event service database scripts in the specified
directory. If the specified directory does not contain a full path, the command
creates the specified directory in profile_root/bin. The default database script
output directory is profile_root/databases/event/node/server/dbscripts/dbtype if
this parameter is not specified.
- nodeName
The name of the node that contains the server where the event service data
source should be created. If this parameter is specified, then the serverName
parameter must be set. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
The name of the server where the event service data source should be created.
If this parameter is specified without the nodeName parameter, the command
will use the node name of the current WebSphere profile. You must not specify
this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service data source should be created.
If this parameter is specified, then the serverName and nodeName parameters
must not be set. You must not specify this parameter if the serverName and
nodeName parameters are specified.
- jdbcClassPath
The path to the JDBC driver. Specify only the path to the driver file; do not
include the file name in the path. This parameter is required.
- oracleHome
The ORACLE_HOME directory. This parameter must be set when the
parameter createDB is set to true.
- dbHostName
The host name of the server where the Oracle database is installed. The default
value is localhost if not specified.
Sample
configEventServiceSQLServerDB command
Use the configEventServiceSQLServerDB command to configure the Common
Event Infrastructure (CEI) using a SQL Server database.
Purpose
For more information about the AdminTask object, see the WebSphere Application
Server documentation.
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- createDB
The command generates the DDL database scripts and creates the database
Sample
configRecoveryForCluster command
Use the configRecoveryForCluster command to specify a cluster that you have
configured to support Service Component Architecture to manage failed events.
Purpose
$AdminTask configRecoveryForCluster {-clusterName clusterName
[-remoteMELocation remoteMESpec]}
remoteMESpec is one of:
WebSphere:cluster=clusterName
WebSphere:node=nodeName,server=serverName
Parameters
-clusterName clusterName
A required parameter that specifies the cluster you are configuring.
-remoteMELocation locationSpecification
Specifies the location of a remote messaging engine. locationSpecification can be
either of:
v WebSphere:cluster=clusterName
v WebSphere:node=nodeName,server=serverName
Examples
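No example survived extraction here. A sketch using the parameters documented above, with illustrative cluster names:

```
AdminTask.configRecoveryForCluster('[-clusterName myAppCluster
-remoteMELocation WebSphere:cluster=myMECluster]')
AdminConfig.save()
```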
configRecoveryForServer command
Use the configRecoveryForServer command to specify a server that you have
configured to support Service Component Architecture to manage failed events.
Purpose
$AdminTask configRecoveryForServer {-serverName serverName
-nodeName nodeName [-remoteMELocation remoteMESpec]}
remoteMESpec is one of:
WebSphere:cluster=clusterName
WebSphere:node=nodeName,server=serverName
Parameters
-serverName serverName
A required parameter that specifies the server you are configuring.
-nodeName nodeName
A required parameter that specifies the node to which the server belongs.
-remoteMELocation locationSpecification
Specifies the location of a remote messaging engine. locationSpecification can be
either of
v WebSphere:cluster=clusterName
v WebSphere:node=nodeName,server=serverName
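The two remoteMELocation forms above follow a simple pattern. This small ordinary-Python check (illustrative only, not part of the product) makes the accepted shapes concrete:

```python
import re

# The two documented remoteMELocation forms:
#   WebSphere:cluster=clusterName
#   WebSphere:node=nodeName,server=serverName
_CLUSTER = re.compile(r"^WebSphere:cluster=[^,=\s]+$")
_SERVER = re.compile(r"^WebSphere:node=[^,=\s]+,server=[^,=\s]+$")

def is_valid_me_location(spec):
    """Return True if spec matches either documented form."""
    return bool(_CLUSTER.match(spec) or _SERVER.match(spec))

print(is_valid_me_location("WebSphere:cluster=MECluster"))          # True
print(is_valid_me_location("WebSphere:node=node01,server=server1")) # True
print(is_valid_me_location("WebSphere:server=server1"))             # False
```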
configSCAAsyncForCluster command
Use this command instead of the administrative console to configure a cluster to
run Service Component Architecture (SCA) applications. You can specify a number
of commands in a file to batch a large number of configurations without having to
navigate the administrative console panels.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-clusterName nameofCluster
A required parameter that identifies the cluster that you are configuring.
-meAuthAlias myAlias
An existing authentication alias used to access the messaging engine.
-remoteMELocation remoteMELocation
The location of a remote messaging engine. Specify remoteMELocation in one of
the following ways:
v WebSphere:cluster=clustername
v WebSphere:node=nodeName,server=serverName
-systemBusDataSource nameOfSystemBusSource
An existing data source you are using for the SCA system bus.
-systemBusSchemaName nameOfSystemBusSchema
The schema name for the system bus messaging engine. The default for this
parameter is IBMWSSIB.
Examples
The following example illustrates how to configure the cluster mySCAAppCluster for
SCA using the remote messaging engine NJMECluster:
v Jython example:
AdminTask.configSCAAsyncForCluster('[-clusterName mySCAAppCluster
-remoteMELocation WebSphere:cluster=NJMECluster -meAuthAlias myAlias
-systemBusSchemaName NYSysSchema]')
v Jacl example:
$AdminTask configSCAAsyncForCluster {-clusterName mySCAAppCluster
-remoteMELocation WebSphere:cluster=NJMECluster -meAuthAlias myAlias
-systemBusSchemaName NYSysSchema}
configSCAAsyncForServer command
Use the configSCAAsyncForServer command instead of the administrative console
to configure a specific server to run Service Component Architecture (SCA)
applications. You can specify a number of commands in a file to batch a large
number of configurations without having to navigate the administrative console
panels.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-serverName nameofserver
A required parameter that identifies the server you are configuring.
Optional parameters
-systemBusDataSource nameOfSystemBusSource
An existing data source you are using for the SCA system bus.
-remoteMELocation remoteMELocation
The location of a remote messaging engine. Specify remoteMELocation in one of
the following ways:
v WebSphere:cluster=clustername
v WebSphere:node=nodeName,server=serverName
-meAuthAlias myAlias
An existing authentication alias used to access the messaging engine.
-systemBusSchemaName nameOfSystemBusSchema
The schema name for the system bus messaging engine. The default for this
parameter is IBMWSSIB.
-createTables true | false
An optional parameter that specifies whether to create tables for the messaging
engine data store. The default value for this parameter is true.
-systemBusId cellname | nameForSystemBus
An optional parameter that specifies the ID to be used for the SCA system bus
name. The default is cellname.
Examples
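No example survived extraction here. A sketch using the parameters documented above, with illustrative server and alias names:

```
AdminTask.configSCAAsyncForServer('[-serverName server1
-meAuthAlias myAlias -createTables true]')
AdminConfig.save()
```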
configSCAJMSForCluster command
Use the configSCAJMSForCluster command instead of the administrative console to
configure a cluster to run Service Component Architecture (SCA) applications that
use JMS resources.
Use the following command to list all the SCA administrative commands.
v Jython example:
AdminTask.help('[SCAAdminCommands]')
v Jacl example:
$AdminTask help SCAAdminCommands
Use the following command to get detailed help on a particular command.
wsadmin> $AdminTask help command_name
v Jython example:
AdminTask.help('[command_name]')
v Jacl example:
$AdminTask help command_name
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required Parameters
-clusterName nameofCluster
A required parameter that identifies the cluster that you are configuring.
Optional parameters
-appBusDataSource SCAApplicationBusSource
An existing data source that you are using for the SCA.APPLICATION bus.
-appBusSchemaName appMEBusSchema
The schema name for the SCA.APPLICATION bus messaging engine. The
default for this parameter is IBMWSSIB.
-createTables true | false
An optional parameter that specifies whether to create tables for the messaging
engine data store. The default value for this parameter is true.
-meAuthAlias userid
An existing authentication alias used to access the messaging engine.
-remoteMELocation remoteMELocation
The location of a remote messaging engine. Specify this parameter if the SCA
modules deployed on this server are to use their queue destinations hosted on
a messaging engine in another server or cluster.
Examples
The following example illustrates how to configure the cluster mySCAAppCluster for
SCA using the remote messaging engine NJMECluster:
v Jython example:
AdminTask.configSCAJMSForCluster('[-clusterName mySCAAppCluster
-remoteMELocation WebSphere:cluster=NJMECluster -meAuthAlias mySCAAlias
-appBusSchemaName NYSysSchema]')
v Jacl example:
$AdminTask configSCAJMSForCluster {-clusterName mySCAAppCluster
-remoteMELocation WebSphere:cluster=NJMECluster -meAuthAlias mySCAAlias
-appBusSchemaName NYSysSchema}
configSCAJMSForServer command
You can use the configSCAJMSForServer instead of the administrative console to
configure a specific server to run Service Component Architecture (SCA)
applications that use JMS resources. You can specify a number of commands in a
file to batch a large number of configurations without having to navigate the
administrative console panels.
Use the following command to list all the SCA administrative commands.
v Jython example:
AdminTask.help('[SCAAdminCommands]')
v Jacl example:
$AdminTask help SCAAdminCommands
Use the following command to get detailed help on a particular command.
wsadmin> $AdminTask help command_name
v Jython example:
AdminTask.help('[command_name]')
v Jacl example:
$AdminTask help command_name
Required Parameters
-nodeName nameofnode
A required parameter that identifies the node to which the server belongs.
-serverName nameofserver
A required parameter that identifies the server you are configuring.
Optional Parameters
-appBusDataSource SCAApplicationBusSource
An existing data source that you are using for the SCA.APPLICATION bus.
-appBusSchemaName appMEBusSchema
The schema name for the SCA.APPLICATION bus messaging engine. The
default for this parameter is IBMWSSIB.
-createTables true | false
An optional parameter that specifies whether to create tables for the messaging
engine data store. The default value for this parameter is true.
-meAuthAlias userid
An existing authentication alias used to access the messaging engine.
-remoteMELocation remoteMELocation
The location of a remote messaging engine. Specify this parameter if the SCA
modules deployed on this server are to use their queue destinations hosted on
a messaging engine in another server or cluster.
Specify remoteMELocation in one of the following ways:
v WebSphere:cluster=clustername
v WebSphere:node=nodeName,server=serverName
Examples
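No example survived extraction here. A sketch using the required and optional parameters documented above, with illustrative node, server, and schema names:

```
AdminTask.configSCAJMSForServer('[-nodeName node01 -serverName server1
-meAuthAlias mySCAAlias -appBusSchemaName MyAppSchema]')
AdminConfig.save()
```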
createDeploymentEnvDef command
Use the createDeploymentEnvDef command to specify a new deployment
environment definition (with a specific name) for a particular feature and pattern.
The XML document that results from running this command provides the
definition of the deployment environment.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment you are creating.
-topologyPattern patternName
Specifies the topology pattern for your deployment manager.
Patterns have a direct relationship to the products supported by the configured
deployment manager. WebSphere ESB supports a specific set of patterns.
Remote messaging and remote support is the pattern to employ for a network
deployment production environment. If your deployment manager supports
other products in addition to WebSphere ESB, the patterns for those products
may apply. Consult product-specific documentation for information about
patterns as they apply to the products. For more information about patterns,
see the Planning documentation.
Valid values include the following:
v SingleCluster
This pattern combines the application deployment target, the messaging
support, and additional support functions into a single server or cluster.
This pattern is supported in a multiproduct installation of IBM Business
Monitor + WebSphere ESB.
v RemoteMessaging
This pattern separates the application deployment target cluster from the
cluster providing the messaging support and additional support functions.
Optional parameters
-dbDesign Comma separated database design files
Specifies the path to a database design document, which holds the database
configuration for the topology you are creating.
If you include this parameter, the value must be the complete path name for
the database design file. If you are importing more than one database design
file, separate them with commas (,). For an installation that includes
WebSphere ESB only, a single database design file would suffice. If your
deployment manager supports multiple products (for example, WebSphere ESB
and IBM Business Monitor) there would be a database design document for
each product.
-propFile properties_file
Specifies the path to a properties file.
Examples
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
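A sketch of an invocation using the required parameters and one of the documented pattern values. The topology name is illustrative, and the optional -dbDesign and -propFile parameters are omitted:

```
AdminTask.createDeploymentEnvDef('[-topologyName myDepEnv
-topologyPattern RemoteMessaging]')
AdminConfig.save()
```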
createVersionedSCAModule command
This command creates a unique instance of an SCA module EAR file. The SCA
module must have the sca.module.attributes file as part of its content. The
sca.module.attributes file exists for all versioned SCA modules prior to
WebSphere Enterprise Service Bus Version 7.0 and for all modules created with
Integration Designer Version 7.0.
Purpose
Syntax
Command output
Example
createWSRRDefinition command
Use this command to create a WSRR definition.
Use the following command to list all the WSRR administrative commands.
Syntax
$AdminTask createWSRRDefinition {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition, as a string.
-description defDescription
Brief description of the definition. This is optional, for your own reference.
-connectionType WEBSERVICE
Connection type. Currently the only connection type is WEBSERVICE.
-defaultCacheExpiryTimeout timeout
Timeout of the cache, in seconds. A value of 0 indicates that query results are
never cached. Default is 300.
Steps
To set properties for a Web service connection associated with the WSRR
definition, you can specify values for the registry URL, the authentication alias and
the SSL configuration as follows:
-WSConnection {{registryURL authAlias repertoire}}
To use the default registry URL (which is
http://localhost:9080/WSRRCoreSDO/services/WSRRCoreSDOPort), specify a pair
of double quotation marks ("") for the first value. To omit the authentication
alias, specify a pair of double quotation marks ("") for the second value. To
omit the repertoire (the SSL configuration), specify a pair of double quotation
marks ("") for the third value.
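The defaulting rule above can be made concrete with a small ordinary-Python sketch (illustrative only, not product code): an empty string selects the documented default URL or omits the optional value.

```python
# Documented default registry URL for the WSConnection triple.
DEFAULT_REGISTRY_URL = "http://localhost:9080/WSRRCoreSDO/services/WSRRCoreSDOPort"

def ws_connection(registry_url, auth_alias, repertoire):
    """Interpret the three WSConnection values; "" means default or omitted."""
    return {
        "registryURL": registry_url or DEFAULT_REGISTRY_URL,
        "authAlias": auth_alias or None,   # omitted when ""
        "repertoire": repertoire or None,  # omitted when ""
    }

print(ws_connection("", "AUTH_ALIAS1", ""))
```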
Examples
v Jacl example:
$AdminTask createWSRRDefinition {-name mydefName -description "my description"
-defaultCacheExpiryTimeout 300 -connectionType WEBSERVICE
-WSConnection {{ http://localhost:9080 AUTH_ALIAS1 SSL_CONFIG1 }}}
deleteDeploymentEnvDef command
Use the deleteDeploymentEnvDef command to delete an existing deployment
environment definition on the deployment manager.
You can delete the deployment environment definition from the deployment
manager when you no longer want the deployment manager to manage the objects
defined by the definition as a unit. This will not affect any existing servers or
clusters that are configured.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment definition that you are
deleting on the deployment manager.
Optional parameters
None.
Examples
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
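No example survived extraction here. A sketch using the single required parameter, with an illustrative deployment environment name:

```
AdminTask.deleteDeploymentEnvDef('[-topologyName myDepEnv]')
AdminConfig.save()
```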
deleteSCADestinations.jacl script
Use the deleteSCADestinations.jacl script to remove Service Component
Architecture (SCA) destinations associated with a particular module.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required Parameters
moduleName
A mandatory parameter that specifies the module for which you are deleting
the SIBus destinations.
Optional parameters
-conntype none
An optional parameter that specifies that wsadmin runs without a connection
to a running server.
-force
An optional parameter that specifies that the destinations associated with
moduleName should be removed even if the destination is currently active.
Example
This command deletes all of the inactive destinations associated with module,
MyModule.
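The command line itself did not survive extraction. Assuming the script is run through wsadmin as its .jacl name suggests, the invocation for the MyModule example would look something like:

```
wsadmin -f deleteSCADestinations.jacl MyModule
```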
deleteWSRRDefinition command
Use this command to delete a WSRR definition that has been named or supplied as
a target object.
If the definition cannot be found, or the name and target object are both supplied
but conflict, an exception will be thrown. This command will only delete the
default WSRR definition if it is the only definition in the cell. If there are other
definitions present in the cell, the command will fail and you will need to
change the default to another WSRR definition before the current one can be
deleted.
Use the following command to list all the WSRR administrative commands.
Syntax
$AdminTask deleteWSRRDefinition {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition to be deleted, as a string.
Example
v Jython example:
AdminTask.deleteWSRRDefinition('[-name MydefName]')
v Jacl example:
$AdminTask deleteWSRRDefinition {-name MydefName}
deployEventService command
Use the deployEventService command to deploy the event service application onto
your server.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- nodeName
The name of the node where the event service should be deployed. If this
parameter is specified, then the serverName parameter must be specified. You
must not specify this parameter if the clusterName parameter is specified.
- serverName
The name of the server where the event service should be deployed. You must
specify this parameter if the nodeName parameter is specified. You must not
specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service should be deployed. You must
not specify this parameter if the nodeName or serverName parameter are
specified.
- enable
Set this parameter to true if you want the event service to be started after the
next restart of the server. The default value is true.
Sample
AdminTask.deployEventService(’[-clusterName cluster_name
-enable false]’)
v Using Jython list:
AdminTask.deployEventService([’-nodeName’, ’node_name’,
’-serverName’, ’server_name’])
AdminTask.deployEventService([’-clusterName’, ’cluster_name’,
’-enable’, ’false’])
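The target-parameter constraints described above (clusterName excludes nodeName and serverName; nodeName requires serverName) can be sketched as a hypothetical validator, for illustration only:

```python
# Hypothetical check of the documented deployment-target constraints.
def valid_target(nodeName=None, serverName=None, clusterName=None):
    if clusterName is not None:
        # clusterName must not be combined with nodeName or serverName.
        return nodeName is None and serverName is None
    if nodeName is not None:
        # nodeName requires serverName.
        return serverName is not None
    return serverName is not None
```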
deployEventServiceMdb command
Use the deployEventServiceMdb command to deploy the event service message
driven bean onto your server.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- nodeName
The name of the node where the event service MDB should be deployed. If
this parameter is specified, then the serverName parameter must be specified.
You must not specify this parameter if the clusterName parameter is specified.
- serverName
The name of the server where the event service MDB should be deployed. You
must specify this parameter if the nodeName parameter is specified. You must
not specify this parameter if the clusterName parameter is specified.
Sample
disableEventService command
Use the disableEventService command to disable the event service on your server.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003 or Windows Vista operating systems.
Parameters
- nodeName
The name of the node where the event service should be disabled. If this
parameter is specified, then the serverName parameter must be specified. You
must not specify this parameter if the clusterName parameter is specified.
- serverName
The name of the server where the event service should be disabled. You must
specify this parameter if the nodeName parameter is specified. You must not
specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service should be disabled. You must
not specify this parameter if the nodeName and serverName parameters are
specified.
Sample
AdminTask.disableEventService(’[-clusterName clustername]’)
v Using Jython list:
AdminTask.disableEventService([’-nodeName’, ’nodename’,
’-serverName’, ’servername’])
AdminTask.disableEventService([’-clusterName’, ’clustername’])
enableEventService command
Use the enableEventService command to enable the event service on your server.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- nodeName
The name of the node where the event service should be enabled. If this
parameter is specified, then the serverName parameter must be specified. You
must not specify this parameter if the clusterName parameter is specified.
- serverName
The name of the server where the event service should be enabled. You must
specify this parameter if the nodeName parameter is specified. You must not
specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service should be enabled. You must
not specify this parameter if the nodeName and serverName parameters are
specified.
Sample
AdminTask.enableEventService(’[-clusterName clustername]’)
v Using Jython list:
AdminTask.enableEventService([’-clusterName’, ’clustername’])
exportDeploymentEnvDef command
Use the exportDeploymentEnvDef command to export topologies from the
deployment manager.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-filePath directory_location_of_deployment_environment_file
Specifies the name of the file that contains the deployment environment
definition you are exporting. The value set is the full path and file name.
-topologyName name_of_topology
Specifies the deployment environment on this deployment manager to export.
Optional parameters
None.
Examples
generateDeploymentEnv command
Use generateDeploymentEnv to configure deployment environments on a
deployment manager.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-topologyName myEnvName
Specifies the deployment environment to configure.
Optional parameters
None.
Example
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
getDefaultWSRRDefinition command
Use this command to return the current default WSRR definition. If no default is
set (which should happen only if there are no WSRR definitions), the command
returns null.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
Syntax
$AdminTask getDefaultWSRRDefinition
Required parameters
Not applicable.
Example
v Jython example:
AdminTask.getDefaultWSRRDefinition()
v Jacl example:
$AdminTask getDefaultWSRRDefinition
getBPMTargetSignificance command
Use the getBPMTargetSignificance command to discover the target significance on
all the resources for a given deployment target. The deployment target can be
either a server or cluster.
For more information on how targetSignificance values affect processing, see the
WebSphere Application Server Information Center.
Required parameters
clusterName name_of_cluster
The name of the cluster on which you are discovering target significance.
nodeName name_of_node
The name of the node on which you are discovering target significance.
If you specify this parameter, then you must also specify the serverName
parameter. Do not specify this parameter if you have specified the clusterName
parameter.
serverName serverName
The name of the server on which you are discovering target significance.
If you specify this parameter, then you must also specify the nodeName
parameter. Do not specify this parameter if you have specified the clusterName
parameter.
Optional parameters
printDetails true or false
The default value for this parameter is false. The value entered indicates
whether to print details for the getBPMTargetSignificance command.
If printDetails is set to true, the command prints the current setting for
targetSignificance used for all the activation specifications and connection
factories configured on the given cluster or server.
Examples
importDeploymentEnvDef command
Use the importDeploymentEnvDef command to import topologies into the
deployment manager.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-filePath directory_location_of_deployment_environment_file
Specifies the full path and file name of the file that contains the deployment
environment definition you are importing.
-topologyName name_of_topology
Renames the imported deployment environment on this deployment manager.
Optional parameters
None.
Examples
isDefaultWSRRDefinition command
Use this command to check whether a WSRR definition is the default definition.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
Syntax
$AdminTask isDefaultWSRRDefinition {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition, as a string.
Example
v Jython example:
AdminTask.isDefaultWSRRDefinition(’[-name MydefName]’)
v Jacl example:
$AdminTask isDefaultWSRRDefinition {-name MydefName}
When you use a WSRR command in interactive mode, you can ignore the prompt
for a Target WSRR definition, because the target WSRR definition is optional.
Press Enter and you will be prompted for the Name of the WSRR definition. This
allows you to enter the name. When you enter the name, you see the expected
output without the error message; for example, if you enter the definition name
when prompted for the target definition, you get a message similar to the
following:
wsadmin>AdminTask.isDefaultWSRRDefinition(’-interactive’)
Check if a WSRR definition is the default
listSCAExports command
Use the listSCAExports command to list the exports of a Service Component
Architecture (SCA) module.
Required parameters
-moduleName moduleName
SCA module name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
listSCAImports command
Use the listSCAImports command to list the imports of a Service Component
Architecture (SCA) module.
Required parameters
-moduleName moduleName
SCA module name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
listSCAModules command
Use the listSCAModules command to list the SCA modules deployed to a cell.
This command lists all the SCA modules that have been deployed to the cell and
the applications or process applications they are associated with.
Required parameters
None.
Optional parameters
None.
Example
The output looks similar to the following, depending on which modules you have
deployed.
"PADEMO-2.0-SCAM1:PADEMO-2.0-SCAM1App:2.0:SCAM1: :"
"PADEMO-2.0-SCAM2:PADEMO-2.0-SCAM2App:2.0:SCAM2: :"
"PADEMO-3.0-SCAM1:PADEMO-3.0-SCAM1App:3.0:SCAM1: :"
"PADEMO-3.0-SCAM2:PADEMO-3.0-SCAM2App:3.0:SCAM2: :"
"PADEMO-6.0DS-SCAM1:PADEMO-6.0DS-SCAM1App:6.0DS:SCAM1: :"
"PADEMO-6.0DS-SCAM2:PADEMO-6.0DS-SCAM2App:6.0DS:SCAM2: :"
"servicemonitor_v6_0_0_synthr:servicemonitor_v6_0_0_synthrApp:6.0.0:servicemonitor:synthr:"
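Each output line is colon-delimited, starting with the module name, application name, and version. As an illustration only, with hypothetical field labels and helper name, one line can be split like this:

```python
# Split one line of listSCAModules output; the field labels used here
# (module, application, version) are assumptions for illustration.
def parse_sca_module(line):
    fields = line.strip().strip('"').split(":")
    return {"module": fields[0], "application": fields[1], "version": fields[2]}

record = parse_sca_module('"PADEMO-2.0-SCAM1:PADEMO-2.0-SCAM1App:2.0:SCAM1: :"')
# record is {'module': 'PADEMO-2.0-SCAM1',
#            'application': 'PADEMO-2.0-SCAM1App', 'version': '2.0'}
```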
listSCMConnectivityProviders command
Use the listSCMConnectivityProviders command to return a list of all the
Service Connectivity Management (SCM) connectivity providers that exist in
the cell.
Use the following command to list all the Service Connectivity Management
administrative commands.
v Using JACL:
$AdminTask help SCMAdminCommands
v Using Jython:
print AdminTask.help(’SCMAdminCommands’)
Syntax
>>-wsadmin-- --listSCMConnectivityProviders-------------------><
Purpose
Parameters
Not applicable.
Example
Using JACL:
$AdminTask listSCMConnectivityProviders
Using Jython:
AdminTask.listSCMConnectivityProviders()
listWSRRDefinitions command
Use this command to return a list of all the WSRR definitions that exist in the cell.
Use the following command to list all the WSRR administrative commands.
Syntax
$AdminTask listWSRRDefinitions
Required parameters
Not applicable.
Example
v Jython example:
AdminTask.listWSRRDefinitions()
v Jacl example:
$AdminTask listWSRRDefinitions
modifySCAExportHttpBinding command
Use the modifySCAExportHttpBinding command to change the attributes of an
HTTP export binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the module associated with the export.
-export export
The name of the export.
Optional parameters
-applicationName applicationName
The name of the application.
Examples
modifySCAExportJMSBinding command
Use the modifySCAExportJMSBinding command to change the attributes of a JMS
export binding. This command applies to JMS bindings, WebSphere MQ JMS
bindings, or Generic JMS bindings.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the SCA module associated with the export.
-export export
The name of the export.
-type jmsType
The type of binding. Valid values are JMS, MQJMS, or generic.
Optional parameters
-applicationName applicationName
The name of the application.
-connectionFactory connectionFactoryName
The JNDI name of the connection factory.
Note: This parameter is valid only if the type parameter is set to JMS or MQJMS
and a Version 7 application has been deployed to the runtime environment.
-sendDestination sendDestinationName
The JNDI name of the send destination.
-activationSpec activationSpecName
The JNDI name of the activation specification.
Example
To change the send destination of a JMS export binding called Export1 in a module
called MyMod to MyDest:
v Jython example:
AdminTask.modifySCAExportJMSBinding(’[-moduleName MyMod
-export Export1 -type JMS -sendDestination MyDest]’)
v Jacl example:
$AdminTask modifySCAExportJMSBinding {-moduleName MyMod
-export Export1 -type JMS -sendDestination MyDest}
modifySCAExportMQBinding command
Use the modifySCAExportMQBinding command to change the attributes of a
WebSphere MQ export binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the SCA module associated with the export.
-export export
The name of the export.
Optional parameters
-applicationName applicationName
The name of the application.
-connectionFactory connectionFactoryName
The JNDI name of the connection factory.
-sendDestination sendDestinationName
The JNDI name of the send destination.
Note: This parameter is valid only for Version 7 applications that have been
deployed to the runtime environment.
-listenerPort listenerPortName
The JNDI name of the listener port.
Note: This parameter is valid only for Version 6 applications that have been
deployed to the runtime environment.
Example
modifySCAImportEJBBinding command
Use the modifySCAImportEJBBinding command to modify the attributes of an
Enterprise JavaBeans (EJB) import binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the module associated with the import.
-import import
The name of the import.
-jndiName jndiName
The new JNDI name of the import binding.
Optional parameters
-applicationName applicationName
The name of the application connecting to the import.
Example
modifySCAImportHttpBinding command
Use the modifySCAImportHttpBinding command to change the attributes of an
HTTP import binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application.
-endpointURL endpointURL
The endpoint URL. The default value is the URI originally specified when the
module was created in IBM Integration Designer.
-endpointHttpMethod methodName
The endpoint URL method. The default value is the method originally
specified when the module was created in IBM Integration Designer.
-endpointHttpVersion version
The endpoint HTTP version, which can be 1.1 or 1.0. The default is 1.1.
-authAlias authenticationAlias
The authentication alias to use with the HTTP server.
-sslConfiguration configuration
The Secure Sockets Layer (SSL) configuration to use for this binding.
Examples
modifySCAImportJMSBinding command
Use the modifySCAImportJMSBinding command to change the attributes of a JMS
import binding. This applies to JMS bindings, WebSphere MQ JMS bindings, or
Generic JMS bindings.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the module associated with the import.
-import import
The name of the import.
-type jmsType
The type of binding. Valid values are JMS, MQJMS, or generic.
Optional parameters
-applicationName applicationName
The name of the application.
-connectionFactory connectionFactoryName
The JNDI name of the connection factory.
-connectionFactoryFailedEventReplay connectionFactoryName
The JNDI name of the failed event replay connection factory.
Note: This parameter is valid only if the type parameter is set to JMS or MQJMS
and a Version 7 application has been deployed to the runtime environment.
-connectionFactoryResponse connectionFactoryName
The JNDI name of the response connection factory.
Example
modifySCAImportMQBinding command
Use the modifySCAImportMQBinding command to change the attributes of a
WebSphere MQ import binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Optional parameters
-applicationName applicationName
The name of the application.
-connectionFactory connectionFactoryName
The JNDI name of the connection factory.
-sendDestination sendDestinationName
The JNDI name of the send destination.
-activationSpecification activationSpecName
The JNDI name of the activation specification.
Note: This parameter is valid only for Version 7 applications that have been
deployed to the runtime environment.
-listenerPort listenerPortName
The JNDI name of the listener port.
Note: This parameter is valid only for Version 6 applications that have been
deployed to the runtime environment.
Example
modifySCAImportSCABinding command
Use the modifySCAImportSCABinding command to change attributes of Service
Component Architecture (SCA) import bindings.
Note: The default binding type is also referred to as an SCA binding. Therefore, an
SCA module can have an import with an SCA binding.
An SCA binding connects one SCA module to another SCA module.
This command changes the SCA import binding for a particular SCA module. A
warning is issued if you select an export whose interface does not match the
interface of your import. WebSphere ESB compares the WSDL port type names of
the import and export. If they are not the same, a warning is issued.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the SCA module that contains the import.
-import import
The name of the import.
-targetModule targetModuleName
The name of the target module.
-targetExport targetExportName
The name of the target export.
Optional parameters
-applicationName applicationName
The name of the application.
-targetApplicationName targetApplicationName
The name of the application associated with the target SCA module. Providing
a targetApplicationName improves performance.
Example
modifySCAImportWSBinding command
Use the modifySCAImportWSBinding command to change the attributes of a Web
service import binding.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
The name of the module associated with the import.
-import import
The name of the import.
-endpoint targetEndpointName
The name of the target endpoint, which must be a valid endpoint URL.
Optional parameters
-applicationName applicationName
The name of the application.
Example
modifySCAModuleProperty command
Use the modifySCAModuleProperty command to modify the property values for a
specified Service Component Architecture (SCA) module.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-moduleName moduleName
SCA module name.
-propertyName propertyName
SCA module property name.
If the property is a member of a group, you must include the name of the
group in the property name in the form:
-propertyName [groupName]propertyName
-newPropertyValue propertyValue
New value for the SCA module property.
Example
Note: This example uses the Jython list format for the command arguments to
escape the group delimiter characters [ and ] in the property name parameter
value:
AdminTask.modifySCAModuleProperty([’-moduleName’, ’MyModule’,
’-applicationName’, ’myApplication’, ’-propertyName’, ’[mygroupName]mypropName’,
’-newPropertyValue’, ’myNewPropValue’])
v Jacl example:
$AdminTask modifySCAModuleProperty {-moduleName MyModule
-applicationName myApplication -propertyName [mygroupName]mypropName
-newPropertyValue myNewPropValue}
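The [groupName]propertyName form used by the -propertyName parameter can be taken apart with a small helper (name hypothetical; shown for illustration of the documented format):

```python
# Split a group-qualified property name of the form [groupName]propertyName
# into its group and property parts; ungrouped names have no group.
def split_property_name(qualified):
    if qualified.startswith("["):
        group, _, prop = qualified[1:].partition("]")
        return group, prop
    return None, qualified
```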
modifySCMConnectivityProvider command
Use the modifySCMConnectivityProvider command to modify the details of a
Service Connectivity Management (SCM) connectivity provider.
Use the following command to list all the Service Connectivity Management
administrative commands.
v Using JACL:
$AdminTask help SCMAdminCommands
v Using Jython:
print AdminTask.help(’SCMAdminCommands’)
Syntax
$AdminTask modifySCMConnectivityProvider {-name name
[-proxyHostHTTP host] [-proxyPortHTTP port] [-proxyHostHTTPS host]
[-proxyPortHTTPS port] [-contact contact] [-organization organization]
[-location location] [-authAlias authAlias] [-repertoire repertoire]}
Purpose
Parameters
-name connectivityProviderName
The name of the connectivity provider to be modified, as a string.
target
The connectivity provider target object.
-proxyHostHTTP
A parameter that specifies the host name that will be returned for the endpoint
of an insecure proxy target. This should be the host that web service clients in
another domain will use to access the proxy, taking into account web servers
and other network components. If not specified, the current value is retained.
-proxyPortHTTP
A parameter that specifies the port that will be returned for the endpoint of an
insecure proxy target. This should be the port that web service clients in
another domain will use to access the proxy, taking into account web servers
and other network components. If not specified, the current value is retained.
-proxyHostHTTPS
A parameter that specifies the host name that will be returned for the endpoint
of a secure proxy target. This should be the host that web service clients in
another domain will use to access the proxy, taking into account web servers
and other network components. If not specified, the current value is retained.
-proxyPortHTTPS
A parameter that specifies the port that will be returned for the endpoint of a
secure proxy target. This should be the port that web service clients in another
domain will use to access the proxy, taking into account web servers and other
network components. If not specified, the current value is retained.
Example
The following example modifies the description and registry security settings for
the connectivity provider myScalableProvider:
Using JACL:
$AdminTask modifySCMConnectivityProvider {
-name myScalableProvider -description "New description"
-authAlias NEW_REGISTRY_AUTH_ALIAS
-repertoire NEW_REGISTRY_SSL_CONFIG }
Using Jython:
AdminTask.modifySCMConnectivityProvider(
’[-name myScalableProvider -description "New description"
-authAlias NEW_REGISTRY_AUTH_ALIAS
-repertoire NEW_REGISTRY_SSL_CONFIG]’)
modifyWSRRDefinition command
Use the modifyWSRRDefinition command to modify the details of a WSRR
definition, given its name.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
Syntax
$AdminTask modifyWSRRDefinition {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition, as a string.
-newName newdefName
The new name of the WSRR definition, as a string, if required.
-description defDescription
Brief description of the definition.
-defaultCacheExpiryTimeout timeout
Timeout of the cache, in seconds. A value of "0" indicates that query results are
never cached.
Steps
Only one step can be specified or an exception will be thrown. Select the step that
matches the definition's connection.
To modify the properties for a web service connection associated with the WSRR
definition, you can specify values for the registry URL and the authentication alias
as follows:
-WSConnection {{ registryURL authAlias repertoire }}
If you do not want to specify one of these values, use ”".
The default connection has a default registry URL, and no authentication alias or
repertoire (SSL configuration).
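As a sketch (helper name hypothetical), the step can be assembled with "" standing in for any value you do not want to set:

```python
# Build the -WSConnection step string for the web service connection.
# Pass '""' for any of registryURL, authAlias, or repertoire you do not
# want to specify, per the description above.
def ws_connection_step(registryURL='""', authAlias='""', repertoire='""'):
    return "-WSConnection {{ %s %s %s }}" % (registryURL, authAlias, repertoire)

ws_connection_step("http://localhost:9084", "NEW_AUTH_ALIAS", "NEW_SSL_CONFIG")
```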
Example
v Jython example:
AdminTask.modifyWSRRDefinition(’[-name MydefName -newName newdefName
-description "my new description" -defaultCacheExpiryTimeout 300
-WSConnection [[http://localhost:9084 NEW_AUTH_ALIAS NEW_SSL_CONFIG]]]’)
v Jacl example:
$AdminTask modifyWSRRDefinition {-name MydefName -newName newdefName
-description "my new description" -defaultCacheExpiryTimeout 300
-WSConnection {{ http://localhost:9084 NEW_AUTH_ALIAS NEW_SSL_CONFIG }}}
moveCEIServer command
Use the moveCEIServer command to move the Common Event Infrastructure (CEI)
server from one deployment target to another deployment target.
This command moves the CEI server from an existing deployment target to a new
deployment target. Use this command as part of a larger strategy to change your
configuration. For example, you could use this command to move the CEI server
from an application cluster to a support cluster.
Note: When running moveCEIServer individually to move the CEI server from an
application cluster to a support cluster, you must first make sure that the support
cluster to which you are moving the CEI server exists.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
fromClusterName name_of_cluster_on_which_CEI_is_installed
The name of the cluster on which CEI is currently installed.
Do not specify this parameter if the fromNodeName or fromServerName
parameters are specified.
fromNodeName name_of_node_on_which_CEI_is_installed
The name of the node on which CEI is currently installed.
If you specify this parameter, then you must also specify the fromServerName
parameter. Do not specify this parameter if you have specified the
fromClusterName parameter.
fromServerName name_of_server_on_which_CEI_is_installed
The name of the server on which CEI is currently installed.
If you specify this parameter, then you must also specify the fromNodeName
parameter. Do not specify this parameter if you have specified the
fromClusterName parameter.
toClusterName name_of_cluster_onto_which_you_are_moving_CEI
The name of the cluster onto which you are moving CEI.
Do not specify this parameter if you have specified the toNodeName or
toServerName parameters.
toNodeName name_of_node_onto_which_you_are_moving_CEI
The name of the node onto which you are moving CEI.
Optional parameters
None.
Examples
The following examples show how to use moveCEIServer.
Note: The examples show how to move a CEI server from server to server and
from cluster to cluster. However, it is also possible to move a CEI server from a
server to a cluster, or from a cluster to a server.
v Jython example:
The example uses moveCEIServer to move the CEI server from Server1 to Server2:
AdminTask.moveCEIServer(’[-fromServerName Server1 -toServerName Server2
-fromNodeName Node1 -toNodeName Node2]’)
v Jacl example:
The following example uses moveCEIServer to move the CEI server from an
application cluster (Cluster1) to a support cluster (Cluster2) in a multi-clustered
WebSphere ESB environment:
$AdminTask moveCEIServer { -fromClusterName Cluster1 -toClusterName Cluster2}
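The from/to parameter pairs can be assembled mechanically. A hypothetical sketch building the Jython argument string (helper name and input shape are assumptions, not product API):

```python
# Build the Jython argument string for moveCEIServer from source and
# target dicts such as {"clusterName": "Cluster1"} or
# {"nodeName": "Node1", "serverName": "Server1"} (values hypothetical).
def move_cei_args(source, target):
    parts = []
    for prefix, params in (("from", source), ("to", target)):
        for key in sorted(params):
            # clusterName -> -fromClusterName / -toClusterName, and so on.
            parts.append("-%s%s%s %s" % (prefix, key[0].upper(), key[1:], params[key]))
    return "[%s]" % " ".join(parts)
```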
removeEventService command
Use the removeEventService command to remove the event service application
from your server.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- nodeName
The name of the node from which the event service should be removed. If this
parameter is specified, then the serverName parameter must be specified. You
must not specify this parameter if the clusterName parameter is specified.
Sample
AdminTask.removeEventService(’[-clusterName clustername]’)
v Using Jython list:
AdminTask.removeEventService([’-nodeName’, ’nodename’,
’-serverName’, ’-servername’])
AdminTask.removeEventService([’-clusterName’, ’clustername’])
removeEventServiceDB2DB command
Use the removeEventServiceDB2DB command to remove the event service and,
optionally, the associated DB2 event database.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Sample
removeEventServiceDB2ZOSDB command
Use the removeEventServiceDB2ZOSDB command to remove the event service and,
optionally, the associated DB2 for z/OS event database.
Purpose
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- removeDB
The command removes the database when this parameter is set to true and
does not remove the database when set to false. To remove the database, the
current server must already be configured to run the database commands.
- nodeName
The name of the node that contains the server where the event service data
source should be removed. If this parameter is specified, then the serverName
parameter must be set. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
The name of the server where the event service data source should be
removed. If this parameter is specified without the nodeName parameter, the
command will use the node name of the current WebSphere profile. You must
not specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service data source should be
removed. If this parameter is specified, then the serverName and nodeName
parameters must not be set. You must not specify this parameter if the
serverName and nodeName parameters are specified.
Sample
removeEventServiceMdb command
Use the removeEventServiceMdb command to remove the event service message
driven bean from your server.
Purpose
The removeEventServiceMdb command is a Common Event Infrastructure
administrative command that is available for the AdminTask object. Use this command
to remove the event service MDB from a server or cluster. For more information
about the AdminTask object, see the WebSphere Application Server Network
Deployment, version 6.1 documentation.
Parameters
- nodeName
The name of the node where the event service MDB should be removed. If this
parameter is specified, then the serverName parameter must be specified. You
must not specify this parameter if the clusterName parameter is specified.
- serverName
The name of the server where the event service MDB should be removed. You
must specify this parameter if the nodeName parameter is specified.
You must not specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service MDB should be removed. You
must not specify this parameter if the nodeName and serverName parameters are
specified.
- applicationName
The name of the event service MDB application to be removed from a server
or cluster.
Sample
removeEventServiceOracleDB command
Use the removeEventServiceOracleDB command to remove the event service and,
optionally, the associated Oracle event database.
Note: The product uses a Jython version that does not support Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- removeDB
The command removes the event service tables when this parameter is set to
true and does not remove the tables when set to false.
- nodeName
The name of the node that contains the server where the event service data
source should be removed. If this parameter is specified, then the serverName
parameter must be set. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
The name of the server where the event service data source should be
removed. If this parameter is specified without the nodeName parameter, the
command will use the node name of the current WebSphere profile. You must
not specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service data source should be
removed. If this parameter is specified, then the serverName and nodeName
parameters must not be set. You must not specify this parameter if the
serverName and nodeName parameters are specified.
- sysUser
Oracle database sys user ID. The default value is sys if not specified.
- sysPassword
The password for the user specified by the sysUser parameter.
- dbScriptDir
The directory containing the database scripts generated by the event service
database configuration command. If specified, the command will run the
scripts in this directory to remove the event service database. The default
database script output directory is profile_root/databases/event/node/server/
dbscripts/oracle.
Sample
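The following invocation is an illustrative sketch only; the node name, server name, and password are placeholder values that you must replace with values from your own environment.
v Jython example:
AdminTask.removeEventServiceOracleDB('[-removeDB true -nodeName node1
-serverName server1 -sysUser sys -sysPassword mypassword]')
v Jacl example:
$AdminTask removeEventServiceOracleDB {-removeDB true -nodeName node1
-serverName server1 -sysUser sys -sysPassword mypassword}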
removeEventServiceSQLServerDB command
Use the removeEventServiceSQLServerDB command to remove the event service
and, optionally, the associated SQL Server event database.
Purpose
Note: The product uses a Jython version that does not support the Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- removeDB
The command removes the database when this parameter is set to true and
does not remove the database when set to false. To remove the database, the
current server must already be configured to run the database commands.
- nodeName
The name of the node that contains the server where the event service data
source should be removed. If this parameter is specified, then the serverName
parameter must be set. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
The name of the server where the event service data source should be
removed. If this parameter is specified without the nodeName parameter, the
command will use the node name of the current WebSphere profile. You must
not specify this parameter if the clusterName parameter is specified.
- clusterName
The name of the cluster where the event service data source should be
removed. If this parameter is specified, then the serverName and nodeName
parameters must not be set. You must not specify this parameter if the
serverName and nodeName parameters are specified.
Sample
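As an illustrative sketch (the cluster name is a placeholder):
v Jython example:
AdminTask.removeEventServiceSQLServerDB('[-removeDB true -clusterName cluster1]')
v Jacl example:
$AdminTask removeEventServiceSQLServerDB {-removeDB true -clusterName cluster1}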
removeNodeFromDeploymentEnvDef command
Use the removeNodeFromDeploymentEnvDef command to remove a node from the
existing deployment environment definition.
This command fails if the topology is already configured.
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment from which you are
removing the node.
-nodeName nodeName
Specifies the name of the node you are removing.
Optional parameters
-topologyRole role_performed
Specifies the role (such as a particular cluster) from which the node will be
removed. If you do not specify a role, the node is removed from all roles in the
environment definition.
Valid values are as follows:
v ADT for a deployment target role
v Messaging for a host messaging role
v Support for a supporting services role
v WebApp for a web application infrastructure
You can indicate one value or more than one value, each separated by a space,
for example ADT Messaging Support or Messaging or ADT Support.
Examples
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
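As an illustrative sketch, using placeholder topology and node names:
v Jython example:
AdminTask.removeNodeFromDeploymentEnvDef('[-topologyName myTopology -nodeName node1]')
v Jacl example:
$AdminTask removeNodeFromDeploymentEnvDef {-topologyName myTopology -nodeName node1}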
removeSCMConnectivityProvider command
Use the removeSCMConnectivityProvider command to remove a Service
Connectivity Management (SCM) connectivity provider.
Use the following command to list all the Service Connectivity Management
administrative commands.
v Using JACL:
$AdminTask help SCMAdminCommands
v Using Jython:
print AdminTask.help('SCMAdminCommands')
Syntax
>>-wsadmin-- --removeSCMConnectivityProvider--+ -name--+----------><
                                              '-target-'
Purpose
Parameters
-name connectivityProviderName
The name of the connectivity provider to be removed, as a string.
target
The connectivity provider target object.
Example
Using JACL:
$AdminTask removeSCMConnectivityProvider {-name myProvider}
Using Jython:
AdminTask.removeSCMConnectivityProvider('[-name myProvider]')
renameDeploymentEnvDef command
Use the renameDeploymentEnvDef command to rename a deployment environment
definition.
You would typically run this command after importing an existing deployment
environment definition.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-oldName current_name_of_deployment_environment_definition
Indicates the name of the deployment environment definition you are
renaming.
-newName new_name_of_deployment_environment_definition
Specifies the name for the deployment environment definition you are
renaming.
Optional parameters
None.
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
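As an illustrative sketch, using placeholder definition names:
v Jython example:
AdminTask.renameDeploymentEnvDef('[-oldName myOldTopology -newName myNewTopology]')
v Jacl example:
$AdminTask renameDeploymentEnvDef {-oldName myOldTopology -newName myNewTopology}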
renameNodeInDeploymentEnvDef command
Use the renameNodeInDeploymentEnvDef command to rename a node across all roles
in an existing deployment environment definition.
You would typically run this command after importing a topology from another
deployment environment.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment that contains the node you
are renaming.
-oldName name_of_node_to_be_renamed
Indicates the name of the node you are renaming.
-newName new_name_assigned_to_node
Specifies the new name you are assigning to the node.
Optional parameters
None.
Example
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
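As an illustrative sketch, using placeholder topology and node names:
v Jython example:
AdminTask.renameNodeInDeploymentEnvDef('[-topologyName myTopology -oldName oldNode -newName newNode]')
v Jacl example:
$AdminTask renameNodeInDeploymentEnvDef {-topologyName myTopology -oldName oldNode -newName newNode}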
setBPMTargetSignificance command
Use the setBPMTargetSignificance command to set the target significance for all
the outbound activation specification and outbound connection factories configured
on the given cluster or server.
The target significance value determines how the applications interact with bus
members defined to a messaging engine.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Required parameters
-clusterName clusterName
The name of the cluster on which you want to set target significance.
-nodeName nodeName
The name of the node on which you are setting target significance.
If you specify this parameter, then you must also specify the serverName
parameter. Do not specify this parameter if you have specified the clusterName
parameter.
-serverName serverName
The name of the server on which you are setting target significance.
If you specify this parameter, then you must also specify the nodeName
parameter. Do not specify this parameter if you have specified the clusterName
parameter.
-targetSignificance targetSignificanceValue
The value assigned to target significance. Valid values include:
v Required
For activation specifications, always set the significance to Required for best
efficiency.
A connection factory can be used to send or receive messages, and the
destinations it will use are not known at configuration time. If the
administrator knows that a particular connection factory will only be used to
consume messages, then the same logic applies as for activation
specifications, and a Required target should be set for the bus member that
localizes the destination.
Optional parameters
-printDetails true | false
The default value for this parameter is false. The value indicates
whether to print details for the setBPMTargetSignificance command.
If -printDetails = true, the command prints the current setting for
targetSignificance used for all the activation specification and connection
factories configured on the given cluster or server.
Examples
Note: In the examples, the -printDetails property is set to true. If you do not want
to print the target significance settings for all the activation specification and
connection factories configured on the given cluster or server, set -printDetails to
false.
v Jython example:
The example uses setBPMTargetSignificance to set the target significance value
to Required for all the application resources deployed on a server (Server1)
on a node (Node2).
AdminTask.setBPMTargetSignificance('[-serverName Server1 -nodeName Node2
-targetSignificance Required -printDetails true]')
v Jacl example:
The example uses setBPMTargetSignificance to set the target significance value
to Required for all the application resources deployed on a cluster (Cluster1):
$AdminTask setBPMTargetSignificance {-clusterName Cluster1
-targetSignificance Required -printDetails true}
setEventServiceJmsAuthAlias command
Use the setEventServiceJmsAuthAlias command to set or update the JMS
authentication alias associated with the event service on your server.
Purpose
Parameters
- nodeName
The name of the node where the event service JMS authentication alias should
be updated. If this parameter is specified, then the serverName parameter must
be specified. You must not specify this parameter if the clusterName parameter
is specified.
- serverName
The name of the server where the event service JMS authentication alias
should be updated. If this parameter is specified, then the nodeName
parameter must be specified. You must not specify this parameter if the
clusterName parameter is specified.
- clusterName
The name of the cluster where the event service JMS authentication alias
should be updated. You must not specify this parameter if the nodeName and
serverName parameters are specified.
- userName
The name of the user to be used in the update of the event service JMS
authentication alias on a server or cluster.
Important: You must specify a valid user ID; this field cannot be empty.
- password
The password of the user to be used in the update of the event service JMS
authentication alias on a server or cluster.
Important: You must specify a valid password; this field cannot be empty.
Sample
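The following invocation is an illustrative sketch; the node, server, user, and password values are placeholders.
v Jython example:
AdminTask.setEventServiceJmsAuthAlias('[-nodeName node1 -serverName server1
-userName myUser -password myPassword]')
v Jacl example:
$AdminTask setEventServiceJmsAuthAlias {-nodeName node1 -serverName server1
-userName myUser -password myPassword}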
setWSRRDefinitionAsDefault command
Use this command to set the named WSRR definition to be the default one.
Any WSRR definition in that cell that was previously set to be default will no
longer be the default. If no definition can be found with that name, or the target
object and name are both supplied but conflict, then an exception will be thrown.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
Syntax
$AdminTask setWSRRDefinitionAsDefault {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition, as a string.
Example
v Jython example:
AdminTask.setWSRRDefinitionAsDefault('[-name mydefName]')
v Jacl example:
$AdminTask setWSRRDefinitionAsDefault {-name mydefName}
showDeploymentEnvStatus command
Use the showDeploymentEnvStatus command to display the current status of the
deployment environment.
The deployment environment status is based on whether the environment has been
generated (exists) and whether components have been started.
Optional parameters
None.
Example
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
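The following is an illustrative sketch, assuming the deployment environment is identified by a -topologyName parameter as in the related deployment environment commands; the topology name is a placeholder.
v Jython example:
AdminTask.showDeploymentEnvStatus('[-topologyName myTopology]')
v Jacl example:
$AdminTask showDeploymentEnvStatus {-topologyName myTopology}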
showEventServiceStatus command
Use the showEventServiceStatus command to display the event service status on
your server.
Purpose
Note: The product uses a Jython version that does not support the Microsoft
Windows 2003, Windows Vista, or Windows 7 operating systems.
Parameters
- nodeName
Use this parameter to display only the status of the event services that belong
to the specified node. You must not specify this parameter if the clusterName
parameter is specified.
- serverName
Use this parameter to display only the status of the event services that belong
to the specified server. You can use this parameter with the nodeName
parameter to display the status of the event service belonging to the specified
node and server. You must not specify this parameter if the clusterName
parameter is specified.
Sample
AdminTask.showEventServiceStatus('[-clusterName clustername]')
v Using Jython list:
AdminTask.showEventServiceStatus(['-nodeName', 'nodename',
'-serverName', 'servername'])
AdminTask.showEventServiceStatus(['-clusterName', 'clustername'])
showSCAExport command
Use the showSCAExport command to display the attributes of a Service Component
Architecture (SCA) module export.
Required parameters
-moduleName moduleName
SCA module name.
-export exportName
SCA module export name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
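As an illustrative sketch, using placeholder module, export, and application names:
v Jython example:
AdminTask.showSCAExport('[-moduleName myModule -export myExport
-applicationName myApplication]')
v Jacl example:
$AdminTask showSCAExport {-moduleName myModule -export myExport
-applicationName myApplication}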
showSCAExportBinding command
Use the showSCAExportBinding command to display the attributes of Service
Component Architecture (SCA) module export bindings.
Required parameters
-moduleName moduleName
SCA module name.
-export exportName
SCA module export name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
To list the attributes of an SCA export binding called myExport in a module called
myModule:
v Jython example:
AdminTask.showSCAExportBinding('-moduleName myModule
-applicationName myApplication
-export myExport')
v Jacl example:
$AdminTask showSCAExportBinding {-moduleName myModule
-applicationName myApplication
-export myExport}
showSCAExportEJBBinding command
Use the showSCAExportEJBBinding command to show the attributes of an Enterprise
JavaBeans (EJB) export binding.
Optional parameters
-applicationName applicationName
The name of the application.
Example
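The following is an illustrative sketch, assuming the binding is identified by -moduleName and -export parameters as in the other export binding commands; the names are placeholders.
v Jython example:
AdminTask.showSCAExportEJBBinding('[-moduleName myModule -export myExport]')
v Jacl example:
$AdminTask showSCAExportEJBBinding {-moduleName myModule -export myExport}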
showSCAExportHttpBinding command
Use the showSCAExportHttpBinding command to show the attributes of an HTTP
export binding.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the export.
-export export
The name of the export.
Optional parameters
-applicationName applicationName
The name of the application.
-methodScope
The name of the method. If specified, the configuration properties for the
specified method scope are shown. If not, the binding scope is shown.
Example
showSCAExportJMSBinding command
Use the showSCAExportJMSBinding command to show the attributes of a JMS
export binding.
Note: The list of attributes will vary, depending on the application type and
application version.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the export.
-export export
The name of the export.
Optional parameters
-applicationName applicationName
The name of the application.
-javaFormat
The output format. Specify false for human-readable text or true for keys in
the following format:
v connection.factory
v response.connection.factory
v failed.event.replay.connection.factory
v send.destination
v receive.destination
v activation.specification
v listener.port
Note: The output varies, depending on whether the binding is JMS, Generic
JMS, or WebSphere MQ JMS. For example, for Version 7 applications,
information about listener ports is displayed only for Generic JMS bindings.
-showAdvanced
Specify true to display all the attributes, including the read-only ones.
Example
To list the attributes of a JMS export binding called Export1 in a module called
MyMod for use in another script:
v Jython example:
AdminTask.showSCAExportJMSBinding('[-moduleName MyMod
-export Export1 -javaFormat true]')
v Jacl example:
$AdminTask showSCAExportJMSBinding {-moduleName MyMod
-export Export1 -javaFormat true}
showSCAExportMQBinding command
Use the showSCAExportMQBinding command to show the attributes of a WebSphere
MQ export binding.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the export.
-export export
The name of the export.
Optional parameters
-applicationName applicationName
The name of the application.
-javaFormat
The output format. Specify false for human-readable text or true for keys in
the following format:
v connection.factory
v send.destination
v listener.port
v callback.destination
v receive.destination
v activation.specification
Note: The output varies, depending on the version of the application. For
example, for Version 6 applications deployed to a Version 7 runtime
environment, information about listener ports is displayed.
-showAdvanced
Specify true to display all the attributes, including the read-only ones.
Example
showSCAExportWSBinding command
Use the showSCAExportWSBinding command to show the attributes of a Web service
export binding.
Optional parameters
-applicationName applicationName
The name of the application connecting to the export.
Examples
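The following is an illustrative sketch, assuming the binding is identified by -moduleName and -export parameters as in the other export binding commands; the names are placeholders.
v Jython example:
AdminTask.showSCAExportWSBinding('[-moduleName myModule -export myExport]')
v Jacl example:
$AdminTask showSCAExportWSBinding {-moduleName myModule -export myExport}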
showSCAImport command
Use the showSCAImport command to display the attributes of a Service Component
Architecture (SCA) module import.
Required parameters
-moduleName moduleName
SCA module name.
-import importName
SCA module import name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
The output of this command depends upon the type of binding. For example, for
an adapter (EIS) import binding, the output would be in the following format:
importBinding:type=AdapterImportBinding
Required parameters
–moduleName moduleName
The name of the SCA module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application.
Example
To list the attributes of an SCA import binding called Import1 in a module called
MyMod for use in another script:
v Jython example:
AdminTask.showSCAImportBinding('-moduleName MyMod
-import Import1')
v Jacl example:
$AdminTask showSCAImportBinding {-moduleName MyMod
-import Import1}
showSCAImportEJBBinding command
Use the showSCAImportEJBBinding command to show the attributes of an Enterprise
JavaBeans (EJB) import binding.
Required parameters
–moduleName moduleName
The name of the module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application connecting to the import.
Example
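As an illustrative sketch, using placeholder module and import names:
v Jython example:
AdminTask.showSCAImportEJBBinding('[-moduleName myModule -import myImport]')
v Jacl example:
$AdminTask showSCAImportEJBBinding {-moduleName myModule -import myImport}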
showSCAImportHttpBinding command
Use the showSCAImportHttpBinding command to show the attributes of an HTTP
import binding.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application.
-methodScope
The name of the method. If specified, the configuration properties for the
specified method scope are shown. If not, the binding scope is shown.
Example
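As an illustrative sketch, using placeholder module and import names:
v Jython example:
AdminTask.showSCAImportHttpBinding('[-moduleName myModule -import myImport]')
v Jacl example:
$AdminTask showSCAImportHttpBinding {-moduleName myModule -import myImport}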
showSCAImportJMSBinding command
Use the showSCAImportJMSBinding command to show the attributes of a JMS import
binding. This applies to JMS bindings, WebSphere MQ JMS bindings, or Generic
JMS bindings.
Note: The list of attributes will vary, depending on the application type and
application version.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application.
-javaFormat
The output format. Specify false for human-readable text or true for keys in
the following format:
v connection.factory
v response.connection.factory
v failed.event.replay.connection.factory
v send.destination
v receive.destination
v activation.specification
v listener.port
Note: The output varies, depending on whether the binding is JMS, Generic
JMS, or WebSphere MQ JMS. For example, for Version 7 applications,
information about listener ports is displayed only for Generic JMS bindings.
-showAdvanced
Specify true to display all the attributes, including the read-only ones.
Example
To list the attributes of a JMS import binding called Import1 in a module called
MyMod for use in another script:
v Jython example:
AdminTask.showSCAImportJMSBinding('[-moduleName MyMod
-import Import1 -javaFormat true]')
v Jacl example:
$AdminTask showSCAImportJMSBinding {-moduleName MyMod
-import Import1 -javaFormat true}
showSCAImportMQBinding command
Use the showSCAImportMQBinding command to show the attributes of a WebSphere
MQ import binding.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the import.
-import import
The name of the import.
Note: The output varies, depending on the version of the application. For
example, for Version 6 applications deployed to a Version 7 runtime
environment, information about listener ports is displayed.
-showAdvanced
Specify true to display all the attributes, including the read-only ones.
Example
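As an illustrative sketch, using placeholder module and import names:
v Jython example:
AdminTask.showSCAImportMQBinding('[-moduleName MyMod -import Import1
-showAdvanced true]')
v Jacl example:
$AdminTask showSCAImportMQBinding {-moduleName MyMod -import Import1
-showAdvanced true}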
showSCAImportWSBinding command
Use the showSCAImportWSBinding command to show the attributes of a Web service
import binding.
Required parameters
–moduleName moduleName
The name of the SCA module associated with the import.
-import import
The name of the import.
Optional parameters
-applicationName applicationName
The name of the application connecting to the import.
Examples
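As an illustrative sketch, using placeholder module and import names:
v Jython example:
AdminTask.showSCAImportWSBinding('[-moduleName myModule -import myImport]')
v Jacl example:
$AdminTask showSCAImportWSBinding {-moduleName myModule -import myImport}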
showSCAModule command
Use the showSCAModule command to display the attributes of a Service Component
Architecture (SCA) module.
This command displays the name and description of an SCA module, including
any associated process application or toolkit context.
Required parameters
-moduleName moduleName
SCA module name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
Example
The following command displays the attributes of a version 7 SCA module called
PAORDER-6.0S-SCAM1. The module is part of a process application named
PAORDER, in a snapshot named 6.0S. The module was configured to run on the
business object framework version 7 with lazy business object loading.
v Jython example:
AdminTask.showSCAModule('-moduleName PAORDER-6.0S-SCAM1')
v Jacl example:
$AdminTask showSCAModule {-moduleName PAORDER-6.0S-SCAM1}
showSCAModuleProperties command
Use the showSCAModuleProperties command to display the properties of a Service
Component Architecture (SCA) module.
Required parameters
-moduleName moduleName
SCA module name.
Optional parameters
-applicationName applicationName
The name of the application associated with the SCA module. Providing an
applicationName improves performance.
-showPropertyTypes true/false
An indicator of whether to show module property data types. The default
value is false.
-groupName groupName
An indicator that only module properties that are members of the group
groupName should be displayed. The properties are displayed as a list of strings
in the form propertyName=value:type.
Example
This example displays the properties of MyModule. Property data types are not
shown, and the display is not restricted to any groups.
v Jython example:
AdminTask.showSCAModuleProperties('-moduleName MyModule
-applicationName myApplication')
v Jacl example:
$AdminTask showSCAModuleProperties {-moduleName MyModule
-applicationName myApplication}
showSCMConnectivityProvider command
Use the showSCMConnectivityProvider command to return a list of all the
parameters for a Service Connectivity Management (SCM) connectivity provider.
Syntax
>>-wsadmin-- --showSCMConnectivityProvider--+ -name--+---------><
                                            '-target-'
Purpose
An exception is thrown if a connectivity provider is not found with the name that
you specify, or if the target object does not represent a connectivity provider.
Command name: showSCMConnectivityProvider
Target: javax.management.ObjectName SCMConnectivityProvider (the connectivity
provider to be displayed)
Result: Hashtable (Property=Value)
Parameters
-name connectivityProviderName
The name of the connectivity provider, as a string.
target
The connectivity provider target object.
Example
Using JACL:
$AdminTask showSCMConnectivityProvider {-name myProvider}
Using Jython:
AdminTask.showSCMConnectivityProvider('[-name myProvider]')
showWSRRDefinition command
Use this command to return a list of all the parameters for a WSRR definition,
including the type of the connection and whether the definition is the default.
An exception is thrown if the name you specify does not have a definition, or if
the target object and the name do not match.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
Syntax
$AdminTask showWSRRDefinition {-paramName paramValue ...}
Required parameters
-name definitionName
The name of the WSRR definition, as a string.
Example
v Jython example:
AdminTask.showWSRRDefinition('[-name mydefName]')
v Jacl example:
$AdminTask showWSRRDefinition {-name mydefName}
startDeploymentEnv command
Use the startDeploymentEnv command to start the deployment environment.
This command starts all the clusters that are defined for the Deployment
Environment. For example, if you chose RemoteMessagingAndSupport as your
pattern, this command starts the three clusters (ApplicationTarget, Messaging, and
Support) that were created as part of that configuration.
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment that you are starting.
Optional parameters
None.
Example
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
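As an illustrative sketch, using a placeholder topology name:
v Jython example:
AdminTask.startDeploymentEnv('[-topologyName myTopology]')
v Jacl example:
$AdminTask startDeploymentEnv {-topologyName myTopology}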
stopDeploymentEnv command
Use the stopDeploymentEnv command to stop the deployment environment.
This command stops all the clusters that are defined for the Deployment
Environment. For example, if you chose RemoteMessagingAndSupport as your
pattern, this command stops the three clusters (ApplicationTarget, Messaging, and
Support) that were created as part of that configuration.
Required parameters
-topologyName name_of_topology
Specifies the name of the deployment environment that you are stopping.
Optional parameters
None.
Example
Note: The examples are for illustrative purposes only. They include variable values
and are not meant to be reused as snippets of code.
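As an illustrative sketch, using a placeholder topology name:
v Jython example:
AdminTask.stopDeploymentEnv('[-topologyName myTopology]')
v Jacl example:
$AdminTask stopDeploymentEnv {-topologyName myTopology}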
validateDeploymentEnvDef command
Use the validateDeploymentEnvDef command to determine whether you have met
all the constraints for a deployment environment.
This command determines whether you have assigned all required functions for
the selected deployment environment. Run this command before generating a
deployment environment to make sure that the definition is valid.
After using the command, save your changes to the master configuration using one
of the following commands:
v For Jython:
AdminConfig.save()
v For Jacl:
$AdminConfig save
Optional parameters
None.
Examples
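The following is an illustrative sketch, assuming the deployment environment is identified by a -topologyName parameter as in the related deployment environment commands; the topology name is a placeholder.
v Jython example:
AdminTask.validateDeploymentEnvDef('[-topologyName myTopology]')
v Jacl example:
$AdminTask validateDeploymentEnvDef {-topologyName myTopology}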
Deprecated commands
This section lists the commands that have been deprecated.
configSCAForCluster command
You can use this command instead of the administrative console to configure a
cluster to run Service Component Architecture (SCA) applications. You can specify
a number of commands in a file to batch a large number of configurations without
having to navigate the administrative console panels.
Purpose
Syntax
Parameters
-clusterName nameofCluster
A required parameter that identifies the cluster that you are configuring.
-systemBusDataSource SCASystemBusSource
An existing data source you are using for the SCA system bus.
-appBusDataSource SCAApplicationBusSource
An existing data source you are using for the SCA application bus.
Notes:
v When using a remote messaging engine, specify -remoteDestLocation; otherwise
specify -systemBusDataSource and -appBusDataSource.
v If you specify -systemBusDataSource and -appBusDataSource as the same data
store, then you must use different schemas for -systemBusSchemaName and
-appBusSchemaName.
Examples
$AdminConfig save
configSCAForServer command
You can use this command instead of the administrative console to configure a
specific server to run Service Component Architecture (SCA) applications. You can
specify a number of commands in a file to batch a large number of configurations
without having to navigate the administrative console panels.
Syntax
Parameters
-serverName nameofserver
A required parameter that identifies the server you are configuring.
-nodeName nameofnode
A required parameter that identifies the node to which the server belongs.
-systemBusDataSource SCASystemBusSource
An existing data source you are using for the SCA system bus.
-appBusDataSource SCAApplicationBusSource
An existing data source you are using for the SCA application bus.
-remoteDestLocation remoteMELocation
The location of a remote messaging engine. Specify remoteMELocation in one of
the following ways:
v WebSphere:cluster=clustername
v WebSphere:node=nodeName,server=serverName
-meAuthAlias userid
An existing authentication alias used to access the messaging engine.
-systemBusSchemaName systemMEBusSchema
The schema name for the system bus messaging engine. The default for this
parameter is IBMWSSIB.
-appBusSchemaName appBusSchema
The schema name for the application bus messaging engine. The default for
this parameter is IBMWSSIB.
-createTables true | false
A parameter that specifies whether to create tables for the messaging engine
data store. The default value for this parameter is true.
Notes:
v When using a remote messaging engine, specify -remoteDestLocation; otherwise
specify -systemBusDataSource and -appBusDataSource.
v If you specify -systemBusDataSource and -appBusDataSource as the same data
store, then you must use different schemas for -systemBusSchemaName and
-appBusSchemaName.
$AdminConfig save
Details
The WBIPostUpgrade script for WebSphere ESB reads the configuration from the
backupDirectory to migrate to WebSphere ESB V7.5.1 and adds all migrated
applications into the profile_root/installedApps directory for the new installation
of WebSphere ESB.
Location
The command file is located in the install_root/bin directory and should be run
from that directory.
Syntax
WBIPostUpgrade.sh backupDirectory
[-username userID]
[-password password]
[-oldProfile profile_name]
[-profileName profile_name]
[-scriptCompatibility true | false]
[-portBlock port_starting_number]
[-backupConfig true | false]
[-replacePorts true | false]
[-keepAppDirectory true | false]
[-keepDmgrEnabled true | false]
[-appInstallDirectory user_specified_directory]
[-traceString trace_spec [-traceFile file_name]]
[-createTargetProfile]
WBIPostUpgrade.bat backupDirectory
[-username userID]
[-password password]
[-oldProfile profile_name]
[-profileName profile_name]
[-scriptCompatibility true | false]
[-portBlock port_starting_number]
[-backupConfig true | false]
[-replacePorts true | false]
[-keepAppDirectory true | false]
[-keepDmgrEnabled true | false]
[-appInstallDirectory user_specified_directory]
[-traceString trace_spec [-traceFile file_name]]
[-createTargetProfile]
Note: The -oldProfile parameter must precede the -profileName (new profile)
parameter.
Parameters
-scriptCompatibility
Note: This parameter is ignored for WebSphere ESB version 6.1.x to version
6.2.x migrations.
This is an optional parameter used to specify whether migration should create
the following version 6.0.2.x configuration definitions:
v Transport
v ProcessDef
v SSL for 6.0.2
instead of the following version 6.2 configuration definitions:
v Channels
v ProcessDefs
v SSL for V7.5.1
The default is true.
Specify true for this parameter in order to minimize impacts to existing
administration scripts. If you have existing wsadmin scripts or programs that
use third-party configuration APIs to create or modify the version 6.0.2.x
configuration definitions, for example, you might want to specify true for this
option during migration.
Note: This is temporary until all of the nodes in the environment are at the
version 6.2 level. When they are all at the new level, you should perform the
following actions:
1. Modify your administration scripts to use all of the version 6.2 settings.
2. Use the convertScriptCompatability command to convert your
configurations to match all of the version 6.2 settings.
For more information see the convertScriptCompatibility command .
-portBlock
This is an optional parameter. The port_starting_number value specifies the first
of a block of consecutive port numbers to assign when the command script
runs.
-replacePorts
This optional parameter is used to specify how to map port values for virtual
hosts and web-container transport ports.
v False
Do not replace the version 6.1.x or 6.0.2.x port definitions during migration.
– The previous version's configuration is left unchanged; no channels are
deleted.
– The following four named channels are set to values that are equivalent
to the values set for the previous release:
- WC_adminhost
- WC_defaulthost
- WC_adminhost_secure
- WC_defaulthost_secure
– The migration process creates transports or channels, based on the
-scriptCompatibility setting, for any ports in the previous release.
-keepAppDirectory
Note: This parameter is ignored for WebSphere ESB version 6.1 to version 6.2
migration.
This is an optional parameter used to specify whether to install all applications
to the same directories in which they are currently located. The default is false.
If this parameter is specified as true, each individual application retains its
location.
-keepDmgrEnabled
Note: This parameter is ignored for WebSphere ESB version 6.1 to version 6.2
migration.
This is an optional parameter used to specify whether to disable the existing
WebSphere ESB deployment manager. The default is false. If this parameter is
specified as true, you can use the existing deployment manager while the
migration is completed. It is only valid when you are migrating a deployment
manager; it is ignored in all other migrations.
-appInstallDirectory
Note: This parameter is ignored for WebSphere ESB version 6.1 to version 6.2
migration.
This is an optional parameter used to pass the directory name to use when
installing all applications during migration. The default of
profile_name\installedApps is used if this parameter is not specified.
Quotation marks must be used around the directory name if one or more
blanks are in the name.
-traceString
Note: This parameter is ignored for WebSphere ESB version 6.0.2.x to version
6.2 migration.
This is an optional parameter. The value trace_spec specifies the trace
information that you want to collect. To gather all trace information, specify
"*=all=enabled" (with quotation marks).
Important: If you specify this parameter, you must also specify the -traceFile
parameter. If you specify the -traceString parameter but do not specify the
-traceFile parameter, the command creates a trace file by default and places it
in the backupDirectory/logs directory.
-traceFile
Note: This parameter is ignored for WebSphere ESB version 6.0.2.x to version
6.2 migration.
This is an optional parameter. The value file_name specifies the name of the
output file for trace information.
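As a sketch (the backup directory and trace file name here are hypothetical), the two trace parameters are normally supplied together, as the Important notes above describe:

```shell
# Hypothetical values; "*=all=enabled" gathers all trace information.
BACKUP_DIR=/tmp/migrationBackup
CMD="./WBIPostUpgrade.sh $BACKUP_DIR -traceString \"*=all=enabled\" -traceFile $BACKUP_DIR/logs/postUpgrade.trace"
echo "$CMD"
```

Placing the trace file under the backup directory's logs subdirectory mirrors the default location that the command would otherwise choose.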
Logging
The WBIPostUpgrade command displays status to the screen while running. This
command also saves a more extensive set of logging information in the
WBIPostUpgrade.timestamp.log file located in the backupDirectory/logs directory.
You can view the WBIPostUpgrade.profileName.timestamp.log file with a text editor.
Details
The WBIPreUpgrade command saves selected files from the install_root and
profile_root directories to a backup directory that you specify. The default for
profile_root is profiles/profile_name. The saved files come from various
subdirectories, and WBIPreUpgrade copies them all into the specified backup
directory. In addition, a logs subdirectory is created, which contains new log
files for the current run of the WBIPreUpgrade command.
Depending on which version you are migrating from, the WBIPreUpgrade command
backs up existing profiles in WebSphere Enterprise Service Bus either all at once or
one at a time.
v If you are migrating from version 6.0.2.x, the WBIPreUpgrade command backs up
the existing profiles in WebSphere Enterprise Service Bus all at once.
v If you are migrating from version 6.1, the WBIPreUpgrade command backs up
existing profiles one at a time, and only those profiles identified using the
-profileName parameter.
Restrictions
v If you are migrating from version 6.1.x, the WBIPreUpgrade command inherits the
following limitations from the WebSphere Application Server Network
Deployment, version 6.1 backupConfig utilities:
– By default, all servers on the node stop before the backup is made so that
partially synchronized information is not saved.
– You must have root authority to perform migration.
– In a UNIX or Linux environment, the backupConfig command does not save
file permission or ownership information.
– When restoring a file, the restoreConfig command uses the current umask
and effective user ID (EUID) to set the permission and ownership.
Location
The command file is located in, and should be run from, the install_dir/bin
directory.
Authority
Syntax
Note: If you are migrating from version 6.0.2.x to version 6.2, the -profileName
parameter is not supported.
Linux UNIX
WBIPreUpgrade.sh backupDirectory
currentWebSphereDirectory
[-profileName profile_name]
[-username userID]
[-password password]
[-traceString trace_spec [-traceFile file_name ]]
Windows
WBIPreUpgrade.bat backupDirectory
currentWebSphereDirectory
[-profileName profile_name]
[-username userID]
[-password password]
[-traceString trace_spec [-traceFile file_name ]]
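As an illustration (the backup directory, current installation directory, and profile name below are hypothetical), a Linux invocation of the syntax above might be assembled like this:

```shell
# Hypothetical values for a WBIPreUpgrade invocation; adjust to your environment.
BACKUP_DIR=/tmp/migrationBackup
CURRENT_WAS_DIR=/opt/IBM/WebSphere/ESB     # existing (pre-migration) installation root
CMD="./WBIPreUpgrade.sh $BACKUP_DIR $CURRENT_WAS_DIR -profileName ESBProf01"
echo "$CMD"
```

The sketch only assembles and prints the command line; run it from the install_dir/bin directory as described under Location.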
Parameters
-profileName
Note: To ensure that the correct profile is migrated, specify the profile name
using this parameter and do not rely on the default.
-traceFile
Note: This parameter is ignored for WebSphere ESB version 6.1.x to version
6.2 migration.
This is an optional parameter. The value file_name specifies the name of the
output file for trace information.
Important: If you specify this parameter, you must also specify the -traceString
parameter. If you specify the -traceFile parameter but do not specify the
-traceString parameter, the command uses the default trace depth, and stores
the trace file in the location you specify.
-traceString
Note: This parameter is ignored for WebSphere ESB version 6.1.x to version
6.2 migration.
This is an optional parameter. The value trace_spec specifies the trace
information that you want to collect. To gather all trace information, specify
"*=all=enabled" (with quotation marks).
Important: If you specify this parameter, you must also specify the -traceFile
parameter. If you specify the -traceString parameter but do not specify the
-traceFile parameter, the command creates a trace file by default and places it
in the backupDirectory/logs directory.
-username
This is an optional parameter that is required if administrative security is
configured in the previous version of WebSphere ESB. The value userID
specifies the administrative user name of the current WebSphere ESB (before
migration) installation.
Logging
The WBIPreUpgrade command displays status to the screen while it runs. It also
saves a more extensive set of logging information in the
WBIPreUpgrade.timestamp.log file written to the backupDirectory/logs directory,
where backupDirectory is the value specified for the backupDirectory parameter.
You can view the WBIPreUpgrade.profileName.timestamp.log file with a text editor.
Purpose
Syntax
Linux UNIX
profile_root/bin/ws_ant.sh -f install_root/util/WBIProfileUpgrade.ant
-DmigrationDir=backupDirectory [-Dcluster=clusterName]
Windows
profile_root\bin\ws_ant.bat -f install_root\util\WBIProfileUpgrade.ant
-DmigrationDir=backupDirectory [-Dcluster=clusterName]
Parameters
profile_root
The directory in which the profile being migrated resides. The ws_ant
command resides in the bin subdirectory of this profile directory.
install_root
The directory in which WebSphere ESB is installed.
backupDirectory
This is a required parameter. The backupDirectory specifies the name of the
directory in which the migration-specific backup directory created by the
WBIPreUpgrade command resides.
clusterName
This optional parameter specifies the name of the cluster to be migrated.
When it runs, the WBIProfileUpgrade script writes a log to the
backupDirectory/logs/WBIProfileUpgrade.ant-profilename.time_stamp.log file,
where backupDirectory is the value specified for the backupDirectory parameter.
You can view the WBIProfileUpgrade.ant-profilename.time_stamp.log file with a
text editor.
Examples
Note: The following examples are each single command lines but are shown on
multiple lines for readability purposes only.
Linux UNIX
/opt/IBM/Server1/profiles/DMgr/bin>ws_ant.sh -f
../../../util/WBIProfileUpgrade.ant -DmigrationDir=/tmp/migrationBackup
-Dcluster=clusterA
Windows
C:\IBM\Server1\profiles\DMgr\bin>ws_ant.bat -f
..\..\..\util\WBIProfileUpgrade.ant -DmigrationDir=c:\temp\migrationBackup
-Dcluster=clusterA
Introduction
WebSphere ESB supports the use of the WSRR product, which allows the storage
and retrieval of service endpoints and mediation policies.
The WSRR administration commands are run using the AdminTask object of the
wsadmin scripting client.
v Locate the command that starts the wsadmin scripting client: this is found in the
install_root\bin directory.
v Run the wsadmin command.
– If the server is not running, use the -conntype none option.
– If you are not connecting to the default profile, use the -profileName
profile_name option.
Use the following command to list all the WSRR administrative commands.
$AdminTask help SIBXWSRRAdminCommands
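Assuming a hypothetical install_root of /opt/IBM/WESB and profile name ESBProf01 (both placeholders), the steps above might combine into a single command line; the sketch below only assembles and prints it:

```shell
# Hypothetical install_root and profile name; -conntype none is for a stopped
# server, and -c passes a single command to the wsadmin scripting client.
INSTALL_ROOT=/opt/IBM/WESB
CMD="$INSTALL_ROOT/bin/wsadmin.sh -conntype none -profileName ESBProf01 -c '\$AdminTask help SIBXWSRRAdminCommands'"
echo "$CMD"
```

Omit -profileName if you are connecting to the default profile, and omit -conntype none if the server is running.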
If these commands throw exceptions, the exceptions are of type
WSRRDefinitionException.
For information about messages you can encounter while using WebSphere ESB,
see Reference > Messages in the information center on the Web at
http://www14.software.ibm.com/webapp/wsbroker/redirect?version=wbpm700
&product=wesb-dist&topic=welc_ref_msg_wbpm.
If you have installed a stand-alone profile, you have a single node in its own
administrative domain, known as a cell. Use the administrative console to manage
applications, buses, servers, and resources within that administrative domain.
Similarly, if you have installed and configured a network deployment cell, you
have a deployment manager node and one or more managed nodes in the same
cell. Use the administrative console to manage applications, set up managed nodes
in the cell, and monitor and control those nodes and their resources.
In the administrative console, task filters provide a simplified user experience and,
through the progressive disclosure of functions, access to the full underlying
WebSphere Application Server administrative capabilities.
This topic describes the main areas that are displayed on the administrative
console.
To view the administrative console, ensure that the server for the administrative
console is running. If you have configured a stand-alone server, the console runs
on that server. If you have configured a network deployment cell, the console runs
on the deployment manager server.
You can resize the width of the navigation tree and workspace simultaneously by
dragging the border between them to the left or the right. The change in width
does not persist between administrative console user sessions.
Taskbar
The taskbar offers options for logging out of the console, accessing product
information, and accessing support.
Navigation tree
The navigation tree on the left side of the console offers links to console pages that
you use to create and manage components in a cell.
Click a plus sign (+) next to a tree folder or item to expand the tree for the folder
or item.
Click a minus sign (-) to collapse the tree for the folder or item.
Click an item in the tree view to display its console page. This also toggles the
item between an expanded and collapsed tree.
Workspace
The workspace on the right side of the console contains pages that you use to
create and manage configuration objects such as servers and resources.
Click links in the navigation tree to view the different types of configured objects.
Click Welcome in the navigation tree to display the workspace home page, which
contains links to product-specific welcome pages for each product that you have
installed. Use these pages to see detailed information about each product.
The welcome page also provides a task filtering selector to help refine the
administrative console pages. Each filter provides a subset of administrative
console functionality pertinent to a particular set of tasks (for example, process
server administration or enterprise service bus administration).
Service applications provide services, and have an associated service module, also
called a Service Component Architecture (SCA) module.
The only type of SCA module supported by WebSphere ESB is the mediation
module.
After you have deployed an enterprise archive (EAR) file containing an SCA
module, you can view SCA module details. You can list all your SCA modules, and
their associated applications, and you can view details about a particular SCA
module.
After you have deployed an EAR file containing an SCA module you can use the
administrative console to change the following SCA module details. You do not
need to redeploy the EAR file.
v Import bindings of type SCA:
– Import bindings define service interactions. You can change the bindings if
you want to change the service interactions.
– SCA bindings connect SCA modules to other SCA modules. One SCA module
can interact with a second SCA module, and can be changed to interact with
another SCA module.
– Web service bindings connect SCA modules to external services using SOAP.
v Import bindings of type Web service (WS):
– Import bindings define service interactions. You can change the bindings if
you want to change the service interactions.
– SCA modules use WS import bindings to access web services. A WS import
binding calls a service located at a specified endpoint. You can change the
endpoint so that the binding calls the service at an alternative endpoint, or
calls an entirely different service with compatible interfaces.
v Export and import bindings of types JMS, WebSphere MQ JMS, generic JMS,
WebSphere MQ, and HTTP have attributes that you can modify.
v Mediation module properties:
– Mediation module properties belong to the mediation primitives with which
they are associated. However, the administrative console displays some of
them as Additional Properties of an SCA module. The integration developer
must flag a mediation primitive property as Promoted in order for it to be
visible.
– Mediation module properties affect the behavior of your mediations. The
mediation changes that you can make depend upon the properties that have
been promoted.
[Figure 50 shows two mediation modules. In MediationModule1, messages enter
through Export1 (web service binding), pass through a mediation primitive, and
leave through Import1 (SCA binding). Import1 connects to Export2 of
MediationModule2 (no binding type specified); messages then pass through a
second mediation primitive to Import2 (web service binding). Web services A
and B sit at the two ends of the chain.]
Figure 50. Example showing one mediation module interacting with another
mediation module. MediationModule1 connects to MediationModule2.
Guided activities display each administrative console page that you need for a
specific task, surrounded by the following information to help you:
v An introduction to the task and its essential concepts
v A description of when and why to do this task
v A list of other tasks to do before and after the current task
v The main steps to complete during the task
v Hints and tips to help you avoid or recover from problems
v Links to field descriptions and extended task information in the online
documentation
Figure 51 shows an example of the administrative console displaying a guided
activity.
[Figure 51 shows the navigation section with the guided activity list, and the
result pane containing the wizard panel.]
Collection pages
Detail pages
A detail page is used to view details about an object and to configure specific
objects (such as an application server or a listener port extension). It typically
contains one or more of the following elements:
Configuration tabbed page
This tabbed page is used to modify the configuration of an administrative
object. Each configuration page has a set of general properties specific to
the object. Additional properties can be displayed on the page, depending
on the type of administrative object you are configuring.
Changes to this tabbed page can require a server restart before they take
effect.
Runtime tabbed page
This tabbed page displays the configuration that is currently in use for the
administrative object. It can be read-only. Note that some detail pages do
not have runtime tabs.
Changes to this tabbed page take effect immediately.
Local topology tabbed page
This tabbed page displays the topology that is currently in use for the
administrative object. View the topology by expanding and collapsing the
different levels of the topology. Note that some detail pages do not have
local topology tabs.
Buttons for performing actions
Buttons to perform specific actions display only on configuration tabbed
pages and runtime tabbed pages. The typical buttons are described in
“Administrative console buttons” on page 480.
Wizard pages
The graphical buttons in Table 36 are located at the top of a table that displays
server-related resources:
Table 36. Graphical buttons at the top of a console collection page
Check all
Selects each resource (for example, a failed event or a relationship
instance) that is listed in the table, in preparation for performing an
action against those resources.
Uncheck all
Clears all selected resources so that no action is performed against them.
Show the filter view
Opens a dialog box to set a filter. Filters are used to specify a subset of
resources to view in the table.
Hide the filter view
Hides the dialog box used to set a filter.
Clear filter value
Clears all changes made to the filter and restores the most recently saved
values.
For a complete list of buttons used in the administrative console to administer all
products and resources, refer to Administrative console buttons in the WebSphere
Application Server Information Center.
The following table lists, by component, the administrative console actions that
have command assistance.
Table 38. Available command assistance
Component Action
Business Rules v Install the business rules manager
– configBusinessRulesManager
Authentication aliases are used to authenticate runtime code for access to system
resources such as messaging engines and connection factories.
To view this administrative console page, click Security > Business Integration
Security. You must be a member of the administrator or configurator role in order
to make changes to the authentication alias configurations.
Select
A check box that you can select so that subsequent actions pertain to the
selected authentication alias.
Component
The name of the component for which system resources need user
authentication.
Alias
The alias that is used to authenticate runtime code to the secured
component.
Referring Resources
A list of all system resource objects that use the given authentication alias.
Click on a resource to display the resource configuration panel.
User name
The user that is allowed to access the system resources for the given
component.
Password
The password with which the user is authenticated. The password is not
displayed on the administrative console.
Confirm Password
The same password must be supplied in the Confirm Password cell, or an
error will be reported.
Description
An optional text description of the nature of this authentication alias.
Edit:
Click the Edit button to open the authentication alias configuration panel, in which
you can edit various properties of the selected authentication aliases. Some of the
properties (the user name and password) can be edited directly on the referring
page and you do not need to access the alias configuration panel to change these
values.
Note: The Edit button is not shown when the entry in the Alias column has a link
that you can use to reach the configuration panel. In some cases, the Alias column
Reset:
The Reset button returns all fields to their values at the beginning of the current
session.
Authentication aliases are used to authenticate runtime code for access to system
resources such as messaging engines and connection factories.
To view this administrative console page, click Security > Business Integration
Security and then either click the authentication alias that you want to configure,
or select the check box associated with the authentication alias and click the Edit
button. You must be a member of the administrator or configurator role in order to
make changes to the authentication alias configurations.
Authentication alias:
Displays the name of the authentication alias that you are administering.
User name:
Password:
Enter the password with which the user is authenticated. The current user account
repository is used to authenticate the user with the given password.
Confirm Password:
Description:
An optional field in which you can enter a description of the authentication alias.
Component:
Displays the name of the component for which system resources need user
authentication.
You use the Business Space Configuration page to install the service, designate
the database schema name, and select a database for Business Space.
Business Space is a common interface for application users to create, manage, and
integrate Web interfaces across the IBM Business Process Management portfolio.
Business users can use widgets to work with artifacts from IBM Business Monitor,
WebSphere Process Server, WebSphere Enterprise Service Bus, WebSphere Business
Compass, and WebSphere Business Services Fabric.
Use this page to configure Business Space on a server or cluster.
For servers: Click Servers > Server Types > WebSphere application servers >
name_of_server > Business Integration > Business Space Configuration.
For clusters: Click Servers > Clusters > WebSphere application server clusters >
name_of_cluster > Business Integration > Business Space Configuration.
Note: If Business Space has already been configured on the server or cluster, you
can view this page, but you cannot edit the fields.
Before you begin: You must have completed the following steps before using the
Business Space Configuration page.
v Configure Representational State Transfer (REST) service endpoints. If you have
a stand-alone server environment or you are using the Deployment Environment
wizard to configure your runtime environment, the REST service endpoints are
configured and enabled automatically. For other environments, use the REST
Services administrative console page to configure the REST service endpoints. If
you want widgets to be available in Business Space, you must configure the
REST service endpoints for those widgets. For the widgets to be available in
Business Space and appear in the palette for use, you must register the REST
service endpoints needed by the widgets using the REST service endpoint
registration administrative console page.
v If you want to configure Business Space on a server or cluster using a different
data source than the product data source: Create the data source in the server or
cluster scope with the correct JNDI name of jdbc/mashupDS first.
v For Oracle, to use a different schema for the Business Space tables than the one
used by the product database, complete the following steps to create a data
source manually before you open the Business Space Configuration page:
– Create the schema using the database product software.
– Use the administrative console to configure the JDBC provider.
– Use the administrative console to create a data source with the JNDI name of
jdbc/mashupDS at the server or cluster scope, depending on your
environment.
Before you begin using Business Space, you must complete additional steps:
v Run a script to create tables in the database. The scripts were generated when
you completed the configuration. By default, the scripts are located in the
Select this check box to configure Business Space for your server or cluster.
Type the name of the database schema you want to use for Business Space. In
Oracle, the schema is the same as the user name set on the authentication alias on
the data source.
This field lists the data source that is designated for use with Business Space.
If no data source is designated in the Existing Business Space data source field,
select the data source that you want to copy and use with Business Space.
Click this link to manage the registration of REST service endpoints to the proper
cluster or server for widgets you are using in Business Space.
Use this page to register Representational State Transfer (REST) service endpoints
with Business Space for the REST services that are configured in your cell. For the
REST services types that have a deployment target (server or cluster) scope, use
this page to register the REST Service instance on the deployment target that
provides the correct data set that you want your widgets to present. For REST
service types that have a cell scope, all of the REST service instances have the same
data scope, and you can use this page to register the REST service instance on the
deployment target that gives the best performance and availability.
To view this administrative console page, use one of the following paths:
For servers: Click Servers > Server Types > WebSphere application servers >
servername > Business Integration > Business Space Configuration > System
REST service endpoint registration.
For clusters: Click Servers > Clusters > WebSphere application server clusters >
clustername > Business Integration > Business Space Configuration > System
REST service endpoint registration.
The REST service endpoint registration table lists all REST service endpoints that
are configured and enabled with your product. The Type column is read-only.
In this column, select the server or a cluster where the REST service with the
correct data scope needed by your widgets is configured. The endpoint of this
REST service is registered with Business Space. If you are using Human Task
Management widgets, in the row for the Process Services and Task Services types,
more than one REST service provider is available to select for a server or a cluster.
Select the provider with Name=Federated REST Services, the provider with
Name=Business Process Choreographer REST services, or the provider with
Name=BPD engine REST services. If you have tasks and processes running in both
Business Process Choreographer and the business process definition (BPD) engine,
select the federated REST services. If you are using only processes and tasks that
are running in the Business Process Choreographer (modeled in Integration
Designer), select the Business Process Choreographer REST services. If you are
using only processes and tasks that are running in the BPD engine (modeled in
Process Designer), select the BPD engine. If you do not specify the target, the REST
endpoint of this type is not registered with Business Space, and any widgets that
need the REST service endpoint of this type will not be visible in Business Space.
For REST service types that have a cell scope, all the REST service instances
have the same data scope, and you can select a cluster or server for better
performance or availability.
Enabled:
In this column, the check box is selected if the REST service is enabled. This
column is read-only. You enable REST services on the REST Services administrative
console panel.
URL:
This column lists the full URL path for the REST endpoints. This field is read-only.
It is based on what you selected in Service Endpoint Target.
Use the Common Base Event Browser to retrieve and view events in the event
database.
Note: These two components are always visible in the Common Base Event
Browser panel.
3. The other frame will initially contain the “Get Events” subpanel. It is here that
you will specify the events to be retrieved from the Common Event
Infrastructure (CEI) event database. After you have submitted your query and
the events have been retrieved, you can click one of the Event Views to replace
the Get Events subpanel in this frame.
The “Event Data Store Properties” are required for gathering events. The “Event
Filter Properties” are optional, but let you refine your search.
To view this page in the administrative console, click Integration Applications >
Common Base Event Browser. Because this panel contains a large amount of
information, you may want to open it in a separate web browser window. To do
this, right-click the Common Base Event Browser menu item, and click Open in
New Window.
Get Events:
Event Data Store Properties
Event Data Store
The JNDI name of the database used by the CEI to store the emitted events.
You can select the name from the list, or select “Input your JNDI name here”
from the list and type the name of another data source in the field below.
Event Group
The name of the event group that defines a list of events to be filtered through
event selector expressions. JMS queues and a JMS topic can be associated with
each event group. If event distribution is enabled and an event matches an
event group, the event is distributed to any topic or queues configured for that
particular event group. The default is All events. You can, however, define
your own event groups or use the system defined event groups to filter events
to be retrieved by the Common Base Event Browser.
Maximum Number of Events to Retrieve
The maximum number of events to retrieve from the CEI (default = 500).
Get Events button: When all the required fields, and any optional fields, have been
entered, click Get Events to retrieve the specified events. The Get Events panel will
reappear, and the number of events returned by the query will be shown beneath
the Events View list.
Make a selection from the Events View list, to retrieve “All Events,” “BPEL Process
Events” on page 491, “User Data Events” on page 492, or “Server Events” on page
493 from among the events returned.
All Events:
The All Events view displays the events retrieved from the event database, based
on the search criteria that were set in the Get Events subpanel.
The All Events view is divided into a list and a details frame. The list frame
contains the list of events retrieved, with several columns containing corresponding
attributes for a particular event. To view a particular event, click the link with the
time and date the event occurred, or select one of the events listed (click on the
radio button next to an event in the Select column). The property Names and
Values for the chosen event will be separately displayed in the details frame at the
bottom of the view.
You can sort the events in ascending or descending order for a single column by
pressing the arrow next to the column title. There are also several buttons and
a pull-down menu that you can use to perform more advanced functions on some or
all of the events:
Show/Hide Filter Row
To show the filter row, you press the Show Filter Row button or select the
same action from the pull-down menu. You will now see that a blue box with
a link to the filter function appears beneath each column name. For every
column type (except “Select”), you can click the filter link, and you will see the
following items:
v A pull-down menu labeled Condition.
v A text-entry box.
Note: The blue boxes with the filter links will continue to appear beneath the
column names. Click Hide Filter Row to remove these.
Edit Sort
You perform this action to open the sort criteria panel. From here, you can sort
up to three columns in order of preference (First, Second, and Third). You can
then specify whether each column is sorted in ascending or descending order.
Press OK to perform the sort, or Cancel to close the sort criteria panel
without sorting the events.
Clear All Sorts
Select this to remove all sort criteria and return the list to its original state.
Collapse/Expand Table
You perform this action to show or hide the list of events in the table.
Enable/Disable Inline Action Bar
This action will cause an Inline Action Bar (similar to the column names and
pull-down menu at the top of the table) to appear above a selected event.
Note: You can only select and view the contents of a single event in WebSphere
ESB. Consequently, the only action you can perform with this function is to
view the event data or disable the Inline Action Bar.
Configure Columns
Use this function to select which columns (other than “Select”), and the order
of those columns, to show in the table. The panel for this function shows a
list of available columns (these vary between different Event Views). To
configure which columns appear in the table, and their order, perform these
actions:
v Select the check box next to each column that you want to see in the list.
v Highlight a column, and use the up/down buttons next to the list to move
the column position in the list (do this for each column you want to move).
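The up/down reordering behaves like a simple adjacent swap in a list, as this sketch shows (the column names are illustrative only):

```python
def move_up(columns, name):
    """Move the named column one position earlier in the list,
    as the panel's up button does; the first column stays put."""
    i = columns.index(name)
    if i > 0:
        columns[i - 1], columns[i] = columns[i], columns[i - 1]
    return columns

order = ["Creation Time", "Severity", "Message"]
move_up(order, "Message")  # Message moves ahead of Severity
```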
You can view the payload (in XML format) of any given event by clicking the link
associated with the payload element name in the details panel. The link itself will
show the first 100 characters of the payload content. For example, if the payload
element name is wbi:event then the link in the next column can be clicked to open
a window showing all of the XML elements in the payload.
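The truncation behavior of the payload link can be sketched as follows; the payload string here is invented for illustration:

```python
def link_label(payload_xml, limit=100):
    """Return the text shown for a payload link: at most `limit` characters
    of the payload content."""
    return payload_xml[:limit]

payload = "<wbi:event>" + "x" * 200 + "</wbi:event>"
label = link_label(payload)
```

Clicking the link itself, rather than reading the label, opens the full XML payload in a separate window.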
Related information:
“Common Base Event Browser” on page 487
Use the Common Base Event Browser to retrieve and view events in the event
database.
“BPEL Process Events”
The BPEL Process Events panel displays all events for a specific BPEL process
instance.
“User Data Events” on page 492
The User Data Events panel displays the requested number of events classified by
the UserDataEvent property of the ECS Emitter class.
“Server Events” on page 493
The Server Events panel displays all events for the server named in the Get Events
panel.
The BPEL Process Events panel displays all events for a specific BPEL process
instance.
Click BPEL Process Events from the Event Views list to open this view. The list
panel for this view is split into three subpanels:
1. Process
2. Instance
3. Event table
You must first select a process to populate the instance list. You then click an
instance to populate the events list. Only the filter and sort operations from the
“Advanced table functions” on page 489 are available on the process and instance
lists; all of those functions are available for the event table. Valid columns for the
event list are:
v Creation Time
v Extension Name
v Event Code
v Process Name
v Process Execution State
v Process Username
v Activity Name
v Activity Kind
v Activity State
v Link Name
v Variable Name
After you have populated the event table, you can select an event (click the
radio button next to an event in the Select column) to display the event data. The
property Names and Values for the chosen event are displayed separately in
the details frame of the view.
You can view the payload (in XML format) of any given event by clicking the link
associated with the payload element name in the details panel. The link itself will
show the first 100 characters of the payload content. For example, if the payload
element name is wbi:event then the link in the next column can be clicked to open
a window showing all of the XML elements in the payload.
Related information:
“Common Base Event Browser” on page 487
Use the Common Base Event Browser to retrieve and view events in the event
database.
“All Events” on page 489
The All Events view displays the events retrieved from the event database, based
on the search criteria that were set in the Get Events subpanel.
“User Data Events”
The User Data Events panel displays the requested number of events classified by
the UserDataEvent property of the ECS Emitter class.
“Server Events” on page 493
The Server Events panel displays all events for the server named in the Get Events
panel.
The User Data Events panel displays the requested number of events classified by
the UserDataEvent property of the ECS Emitter class.
Select User Data Events from the Event Views list to open this view. As in the All
Events view, all User Data Events are listed in the list frame of the panel, and all of
the “Advanced table functions” on page 489 are available for the event list. Valid
columns for the list are as follows:
v Creation Time
v Server
v Name
v Message
v Priority
v Severity
v Sub-component
v Situation
v Application
You can view the payload (in XML format) of any given event by clicking the link
associated with the payload element name in the details panel. The link itself will
show the first 100 characters of the payload content. For example, if the payload
element name is wbi:event then the link in the next column can be clicked to open
a window showing all of the XML elements in the payload.
Related information:
“Common Base Event Browser” on page 487
Use the Common Base Event Browser to retrieve and view events in the event
database.
“All Events” on page 489
The All Events view displays the events retrieved from the event database, based
on the search criteria that were set in the Get Events subpanel.
“BPEL Process Events” on page 491
The BPEL Process Events panel displays all events for a specific BPEL process
instance.
“Server Events”
The Server Events panel displays all events for the server named in the Get Events
panel.
Server Events:
The Server Events panel displays all events for the server named in the Get Events
panel.
Select Server Events from the Event Views list to open this view. The list frame is
divided into two subpanels:
1. Server (which lists all of the servers for which events can be retrieved)
2. Events list
Click a server name to display all associated events. Only the filter and sort
operations from the “Advanced table functions” on page 489 are available on the
server list; all of those functions are available for the event table. Valid columns
for the events list are:
v Creation Time
v Name
v Message
v Priority
v Severity
v Server
v Sub-component
v Situation
v Application
Select one of the events listed (click the radio button next to an event in the
Select column). The property Names and Values for the chosen event are
displayed separately in the details panel of the view.
Use the console pages described in the following topics to configure the
deployment of a CEI application, the database used by CEI, and the database used
for the CEI messaging infrastructure.
Use this panel to enable the Common Event Infrastructure server and to configure
the Common Event Infrastructure Bus Member.
Important: Although you can modify the data sources used for the CEI
configuration in deployment manager or custom profiles, you cannot modify (or
unconfigure) existing CEI support. In stand-alone server profiles, you cannot
modify the configured data sources, either.
Specifies the location of the Common Event Infrastructure bus destination. The
destination can be on the local deployment target or on a remote target. Use the
Local and Remote radio buttons in this panel to indicate the appropriate location.
Bus members are always hosted locally for stand-alone server profiles.
Local:
Select this radio button if the Bus Member will be configured on this server or
cluster.
Remote:
Select this radio button if the Bus Member will be configured on another server or
cluster (not the server or cluster you are currently configuring).
If you select Remote, use the associated drop-down list to specify the remote
location you want to use. The drop-down list shows all deployment targets.
Specifies the database used for the messaging infrastructure by the CEI
service on your server. You will initially see the default properties for the CEI
messaging infrastructure database fields, but you can make changes to the
properties by editing the corresponding fields in the table. You can also make
changes to these and other database properties by clicking the Edit... button, which
will open a separate database configuration panel. When you are finished making
changes on that panel, click OK and you will see the updated properties in the CEI
messaging infrastructure database fields.
Note: If the CEI was configured as part of a stand-alone server profile, then bus
members are always hosted locally and you cannot edit these fields.
This service enables applications and clients to create and manage events. The
Common Event Infrastructure serves as an integration point for consolidation and
persistence of business events from multiple, heterogeneous sources and for
distribution of those events to event consumers.
Use this page to configure the runtime properties of the Common Event
Infrastructure service.
To view this administrative console page, perform one of the following steps:
Important: When security and role-based authorization are enabled, you must be
logged in as an administrator or a configurator to perform this task.
Specifies whether the application server starts the Common Event Infrastructure
service and server automatically.
Table 41. Enable service at server startup
Interface descriptor Interface value
Data type Check box
Default Selected
Range Selected or Unselected
Use the Browse Deployment Target page to select a new server or cluster as a
remote deployment target.
To view this page in the administrative console, click New on the Service
Component Architecture page, the Common Event Infrastructure Server page, or
the Business Process Choreographer Configuration page.
The table on the Browse Deployment Target page contains a list of all servers and
clusters that are available as deployment targets for the component you are
configuring. Eligible servers and clusters are those that do not already have remote
bus members configured and that do not host local bus members.
To select one of these deployment targets for your configuration, click the radio
button associated with the target name, and then click Select.
Use the Business Integration Data Sources page to review and edit the configured
business integration data sources within your system.
To view this administrative console page, click Resources > JDBC > Business
Integration Data Sources.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as an administrator or a configurator to
perform this task.
The Business Integration Data Sources page lists the configured data sources
needed by business integration applications in your system. For example, the page
lists data sources for the following components:
v (Business Process Manager only) Business Process Choreographer
v Service Component Architecture
v Common Event Infrastructure
If the edits conflict with another data source, a warning message is displayed. You
can still save the information, but the warning message persists until the conflict is
resolved.
The table lists the key parameters for the data source configuration:
Select
Select the check box in this column to select the row that contains a data
source. Table actions such as Reset or Test Connection occur only on selected
rows unless otherwise stated. The Select column is not shown if there is only
one row in the table.
Component
Shows the component name that needs the configured data source.
Data Source
Click the data source link to open the data source detail page.
JNDI Name
Identifies the data source's JNDI name.
Scope
Identifies if the scope for a data source is at cell, cluster, or server level. The
server scope is fully qualified with its node name. For example,
NodeA/TestServer is a fully qualified server scope.
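As an illustration of the scope format, a fully qualified scope string could be taken apart like this. This is a sketch only; the console does not expose such an API, and the field names are invented.

```python
def parse_scope(scope):
    """Classify a configuration scope string. A server scope is fully
    qualified with its node name, e.g. 'NodeA/TestServer'; cell and
    cluster scopes have no '/' separator. (Illustrative sketch only.)"""
    if "/" in scope:
        node, server = scope.split("/", 1)
        return {"level": "server", "node": node, "server": server}
    return {"level": "cell-or-cluster", "name": scope}

parsed = parse_scope("NodeA/TestServer")
```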
Database name:
Identifies the schema name of a database. The schema name is dependent on the
database provider and on the component that uses the data source. For
components and database providers that have dynamic schema support, a unique
schema will be generated during configuration.
Dynamic schema support varies by component and database provider:
v Messaging engines for Service Integration Buses supports dynamic schemas. The
schema name is saved in the messaging engine's message store configuration.
v (Business Process Manager only) Business Process Choreographer container
supports dynamic schemas. The schema name is stored in the Business Process
Choreographer configuration.
v (Business Process Manager only) Business Process Choreographer event collector
supports dynamic schemas. The schema name is stored in the BPEL process
event collector's application deployment descriptors.
v Common Event Infrastructure event database does not support dynamic
schemas. The field is unavailable.
Table 45. Schema
Property Value
Data type String
Default Component and database dependent
Create Tables
Select this check box to allow the component to create the tables the first time
it accesses the data source. If your site policy restricts table creation to a
database administrator, deselect the check box, and click on the data source link
to locate the database script locations.
The Common Event Infrastructure (CEI) component creates the tables when
you configure the CEI server. Once you configure the CEI component, the
create table flag is unavailable and you cannot configure the table again.
Important: All tables must be created before any component attempts to access
the database.
User name:
Identifies a user name that can authenticate with the database. Ensure that the user
has been granted all access rights for the intended operations. If you need the user
to create schemas and tables, ensure that the user has the appropriate
authorizations needed to create these schemas and tables.
Table 46. User name
Property Value
Data type String
Default None
Password:
Server:
Provider
Identifies the database vendor or file store. If a component has not been
configured, a drop-down list allows the user to choose a database vendor. The
file store option is only available when installing a stand-alone server. Once
you configure the component, you cannot edit the provider field.
Description
Provides a description of the business integration data source.
Data Source:
Use the Data Source configuration page to edit the business integration data source
properties.
To view this administrative console page, click Resources > JDBC > Business
Integration Data Sources > Data Source.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as an administrator or a configurator to
perform this task.
Use the Data Source configuration page to review and edit existing business
integration data sources. This page has the following configuration sections:
Data Source Provider
Shows the selected provider name and the implementation type. The
implementation type can be XA or Connection Pool.
Scope
Shows the scope at which the data source is configured. The scope can be cell,
cluster, node or server.
Data Source Properties
The data source properties are specific to each selected database provider.
Component Specific Properties
These fields are determined by the component being configured. Typically, the
component needs to set the JNDI name, and usually the data source name, in
order to find the data source. This section also holds a component-specific
description of the data source and the name of the component that uses it.
New data sources cannot be created on this page. Data sources can only be created
on a component specific configuration page. The component specific configuration
page sets all the required fields.
The fields shown in the Data Source configuration page depend on the type of
database provider used. This page shows what a typical DB2 configuration page
would look like. If you choose a different database provider, some of the fields
may be different or missing, particularly in the Component Specific Properties
section.
Provider:
Identifies the database provider or file store. The provider or file store cannot be
changed after it has been configured.
Table 49. Provider
Property Value
Data type String
Default None
Implementation Type:
Scope:
Shows the scope at which the data source is configured. The scope can be cell,
cluster, node or server. For example, NodeA/TestServer is a fully qualified server
scope. You cannot edit the scope once it is configured.
Table 51. Scope
Property Value
Data type String
User Name:
Password:
Database Name:
Schema Name:
Identifies the schema name used to access this configured data source.
Table 55. Schema name
Property Value
Data type String
Server Name:
Service port number:
Specifies the database listener port used for all communications involving this data
source.
Table 57. Service port number
Property Value
Data type Integer
Driver type:
Specifies the driver type for the database provider. See “JDBC specifications” for
more information on driver types.
Table 58. Driver type
Property Value
Data type Integer
Component:
Displays the name of the component that requires the data source.
Table 59. Component
Property Value
Data type String
Data Source Name:
Specifies the data source name used to uniquely identify a data source
configuration. This name corresponds to the WebSphere Application Server JDBC
data source name.
Table 60. Data source name
Property Value
Data type String
Default None
JNDI Name:
Specifies the JNDI name used by the component to locate the data source. This
field is configured by the component to ensure that the data source can be found.
It is not editable.
Table 61. JNDI name
Property Value
Data type String
Default None
Description:
Create Tables:
The Common Event Infrastructure (CEI) component creates the tables when you
configure the CEI server. Once you configure the CEI component, the create table
flag is unavailable and you cannot configure the table again.
Important: All tables must be created before any component attempts to access the
database.
Table 63. Create tables
Property Value
Data type Boolean
Default See “Table and schema creation matrices”.
Use the Database Provider Configuration page to review and edit the properties of
the JDBC database provider.
To view this administrative console page, click Resources > JDBC > Business
Integration Data Sources > Select > Edit Provider
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as an administrator or a configurator to
perform this task.
The Database Provider Configuration page shows the properties on the JDBC
database provider level. After you configure the database provider, there are some
properties you cannot edit such as Provider and JDBC Provider Name.
Provider:
Identifies the JDBC provider name, which is the WebSphere Application Server
JDBC provider template name for the corresponding provider. This name cannot
be edited.
Table 65. JDBC Provider Name
Property Value
Data type String
Default None
Implementation Type:
Scope:
Identifies if the scope for a data source is at cell, cluster, or server level. The server
scope is fully qualified with its node name. For example, NodeA/TestServer is a fully
qualified server scope. You cannot edit the scope once it is configured.
Table 67. Scope
Property Value
Data type String
Default Cell
Select Node:
Specifies the node to which you are configuring the provider specific driver paths.
Table 68. Select Node
Property Value
Data type String
Driver Paths:
Specifies the database provider's driver paths. The list of driver paths that need to
be configured depends on the database provider.
Table 69. Driver Paths
Property Value
Data type String
Default Dependent on the database provider.
Use the console pages described in the following topics to configure the
deployment of a CEI application, the database used by CEI, and the database used
for the CEI messaging infrastructure.
Use this page to configure the Common Event Infrastructure (CEI) components,
including the database and messaging engines, and to deploy the server
application.
To view this administrative console page, click Servers > Server Types >
WebSphere application servers > server_name > Common Event Infrastructure
Server or Servers > Clusters > WebSphere application server clusters >
cluster_name > Common Event Infrastructure Server.
Use this panel to configure the three main CEI components:
v The database used by the CEI.
v The database used for the CEI messaging engine.
v The deployment of the CEI application.
Specifies the database you will use to store events emitted by the CEI service on
your server. You can expand or collapse this section using the arrow icon.
Table 70. CEI database fields
Property Value
Database Provider pull-down menu Database provider.
Database name Name of the database you are using for storing
CEI-emitted events.
User name User ID required by the database.
Password Password for the user ID required by the database.
Confirm Password Re-enter the password for the user ID required by the
database.
Override data source v Checked – overrides the default CEI database.
v Not checked – will not override the CEI database; you
must manually specify the database if you are using a
database other than Derby.
Create Database v Checked – automatically creates tables in the CEI
database.
v Not checked – will not create the database tables; you
must manually create the tables if they have not
already been created on the database.
Specifies the destination and data store used by the CEI messaging engine on
your server. You can expand or collapse this section using the arrow icon.
Table 72. CEI messaging destination fields
Property Value
Use a remote destination location Radio button you would select if you are using a remote
messaging database for CEI. Use the pull-down menu to
select the remote destination.
Configure a destination location locally Radio button you would select if you are using a local
messaging database for CEI.
Use Default Data Store v Checked – automatically configures the CEI messaging
database using Derby.
v Not checked – create the CEI messaging database on a
database system of your choice.
This service enables applications and clients to create and manage events. The
Common Event Infrastructure serves as an integration point for consolidation and
persistence of business events from multiple, heterogeneous sources and for
distribution of those events to event consumers.
Use this page to configure the runtime properties of the Common Event
Infrastructure service.
To view this administrative console page, perform one of the following steps:
v Click Servers > Server Types > WebSphere application servers > server_name >
Common Event Infrastructure Server > Common Event Infrastructure
Destination.
v Click Servers > Clusters > WebSphere application server clusters >
cluster_name > Common Event Infrastructure Server > Common Event
Infrastructure Destination.
Important: When security and role-based authorization are enabled, you must be
logged in as an administrator or a configurator to perform this task.
Specifies whether the application server starts the Common Event Infrastructure
service and server automatically.
Table 73. Enable service at server startup
Interface descriptor Interface value
Data type Check box
Default Selected
Range Selected or Unselected
Search Results:
Use the Search Results page to view a list of failed events on a standalone server
or on all the servers in a cell. The list can include either all failed events or a
subset of failed events that have been retrieved during a criteria-based search.
To access this page in the console, click Integration Applications > Failed Event
Manager > Get all failed events or perform a criteria-based search.
Failed events are displayed in a table, along with the following information. Note
that not all information is relevant for all event types; when an event is not
associated with a particular type of information, the table cell is blank.
v Event ID: The unique ID for the event.
v Event type: The type of failed event. Event types include SCA, JMS, and
WebSphere MQ. If you are using Business Process Manager, failed event types
also include Business Flow Manager hold queue and Business Process
Choreographer.
v Module: The module designated to receive the event.
v Component: The component designated to receive the event.
v Operation: The method designated to process the event.
v Failure time: The time the event failed. Note that the time is local to the machine
on which the process server is running.
v Event status: The status of the event. SCA, JMS, and WebSphere MQ events
always have a status of failed. If you are using Business Process Manager, hold
queue events always have a status of failed, and Business Process
Choreographer events can be in the Failed, Stopped, or Terminated state.
v Event qualifier: The type of qualifier associated with the failed event. Events can
have one or more of the following qualifiers:
– Sequenced: The event is part of an event sequence. This qualifier requires that
the event order be kept when processing events. If the ContinueOnError
attribute of the event sequence qualifier is set to false, no dependent events
are processed until the failure is resolved.
– Store initiator: The event initiated event storing. Follow-up events for the
same event destination will be stored based on the destination's availability.
Events can be forwarded when the destination becomes available.
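The effect of the Sequenced qualifier and its ContinueOnError attribute can be sketched as follows. The event shapes and function are invented for illustration; this is not the Recovery subsystem's actual logic.

```python
def process_sequence(events, continue_on_error=False):
    """Sketch of the Sequenced qualifier: after a failure, dependent events
    are held until the failure is resolved unless ContinueOnError is true.
    Event shapes here are invented for the example."""
    processed, held = [], []
    failed = False
    for ev in events:
        if failed and not continue_on_error:
            held.append(ev["id"])      # dependent events wait for resolution
        elif ev.get("fails"):
            failed = True
            held.append(ev["id"])      # goes to the failed event manager
        else:
            processed.append(ev["id"])
    return processed, held

seq = [{"id": 1}, {"id": 2, "fails": True}, {"id": 3}]
strict = process_sequence(seq)                           # ([1], [2, 3])
lenient = process_sequence(seq, continue_on_error=True)  # ([1, 3], [2])
```

With ContinueOnError set to false, event 3 is held even though it did not fail itself, preserving the required event order.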
Click the up or down arrows in the title of any column to sort the contents of that
column in either ascending or descending order.
Button Function
Refresh Refreshes the current display.
Get all Retrieves and displays all failed events in the cell.
New search Opens the Search page so that you can perform a
criteria-based search for a subset of failed events.
Resubmit Resubmits one or more failed events.
For each failed event that you want to resubmit, click the
check box in the Select column. The Resubmit button
works only on selected failed events.
Resubmit with trace Resubmits one or more failed SCA events with trace
enabled.
Delete Deletes one or more failed events.
For each failed event you want to delete, click the check
box in the Select column. The Delete button works only
on selected failed events.
Delete expired events Deletes all expired failed events.
View data for one failed SCA event and perform other tasks such as deleting,
resubmitting, modifying, and setting the trace for the event.
To view this page in the administrative console, click Integration Applications >
Failed Event Manager, then perform a search for failed SCA events. After failed
events are returned, click the name of an event that is listed in the Search Results
page.
The Failed Event details page provides the event ID, session ID and event
qualifiers associated with the failed event, as well as information about the event's
source, destination, time of failure, and cause of failure.
You can also perform the following tasks from this page:
v Set or modify an expiration time for the failed event. Failed events that are
resubmitted after this time will not be processed.
v Set tracing for the failed event.
v Resubmit the failed event.
Event type:
This field displays the type of event that failed. The value is SCA for all failed SCA
events.
The event type is assigned automatically by the Recovery subsystem; you cannot
edit it.
Event status:
This field displays the status of the failed event. For SCA events, the only available
status type is Failed.
The event status is assigned automatically by the Recovery subsystem; you cannot
edit it.
Event ID:
This field displays the unique ID for the failed event. This ID persists even after
the event is resubmitted; if the resubmission fails, the event is returned to the
failed event manager with the same event ID.
The event ID is assigned automatically by the Recovery subsystem; you cannot edit
it.
Session ID:
This field displays the unique session ID for the failed event.
Every event executes in a session; the session includes all of the information that is
needed to process an event. If an event fails, the failed event manager encapsulates
specific session information for the failed execution branch in the Session ID
parameter.
Note that any Common Base Events and BPEL process instances that are related to
a particular failed event have the same session ID, which makes them easy to
identify and examine for more information about the failure.
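Because related Common Base Events and BPEL process instances share a session ID, grouping events by that ID is a natural way to collect everything about one failure. A hypothetical sketch (the field names are invented):

```python
from collections import defaultdict

def group_by_session(events):
    """Group event IDs by session ID so that everything related to one
    failure can be examined together (field names are illustrative)."""
    groups = defaultdict(list)
    for ev in events:
        groups[ev["session_id"]].append(ev["event_id"])
    return dict(groups)

events = [
    {"event_id": "e1", "session_id": "s-42"},
    {"event_id": "e2", "session_id": "s-42"},
    {"event_id": "e3", "session_id": "s-7"},
]
by_session = group_by_session(events)
```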
Interaction type:
This field displays the type of service invocation between SCA components. The
three supported invocation models are asynchronous request-deferred response,
asynchronous request with callback, and asynchronous one-way.
Source module:
This field displays the name of the module from which the event originated.
Source component:
This field displays the name of the component from which the event originated.
Destination module:
This field displays the name of the destination module for the event (where the
event was going when it failed).
Destination component:
This field displays the name of the destination component for the event (the
component to which the event was going when it failed).
Destination method:
This field displays the name of the destination method for the event.
Failure time:
This field displays the date and time when the event failed. The displayed time is
local to the process server, and the value is formatted for the current locale.
Deployment target:
This field displays the deployment target for the event. Its value includes the name
of the target node, server, and cluster (if applicable).
Exception text:
This field displays the text of the exception that was generated when the event
failed.
Resubmit expiration time:
Specify the amount of time that can elapse before a failed event expires and can no
longer be resubmitted. The displayed time is local to the process server.
You can edit this field to specify a new expiration time for the failed event. The
value for this field must be a date and time formatted for your locale. A
locale-appropriate example is provided.
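The expiration rule above amounts to a simple timestamp comparison, sketched here with invented values:

```python
from datetime import datetime, timedelta

def can_resubmit(expiration, now):
    """A failed event resubmitted after its expiration time is not
    processed; this sketch only compares the two timestamps."""
    return now < expiration

exp = datetime(2011, 6, 1, 12, 0)
ok = can_resubmit(exp, exp - timedelta(hours=1))    # before expiry
late = can_resubmit(exp, exp + timedelta(minutes=5))  # after expiry
```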
Trace control:
If you set the tracing on the failed event, the Trace control field displays that
value. Otherwise, it displays the suggested default value of
SCA.LOG.INFO;COMP.LOG.INFO, which specifies that no trace occurs when the session
calls an Advanced Integration service or executes a component.
You can edit this field to assign a different trace level for the failed event. Tracing
can be set for a service or a component, and the results can be sent to a log or the
Common Event Infrastructure (CEI) server. For detailed information about setting
and viewing trace, see the Monitoring topics in the Business Process Management
Information Center.
This field displays the setting of the event sequencing qualifier for the failed event.
This field displays the store initiator setting, which indicates whether the failed
event caused event storing to be started.
Use the failed event detail page to view data for a single failed Java Message
Service (JMS) event and to delete or resubmit the event.
To access this page in the console, click Integration Applications > Failed Event
Manager, perform a search for failed JMS events, and click the name of a specific
event listed in the Search Results page.
In addition to viewing a failed event's data, you can perform the following tasks
from this page:
v Resubmit the failed event.
v Delete the failed event.
Event ID:
Specifies the unique ID for the failed event. This ID persists even after the event is
resubmitted; if the resubmission fails, the event is returned to the failed event
manager with the same event ID.
The event ID is assigned automatically by the Recovery subsystem; you cannot edit
it.
Event type:
Specifies the type of event that failed. The value is JMS for all failed JMS events.
The event type is assigned automatically by the Recovery subsystem; you cannot
edit it.
Event status:
Specifies the status of the failed event. For JMS events, the only available status
type is Failed.
The event status is assigned automatically by the Recovery subsystem; you cannot
edit it.
Interaction type:
Specifies the type of service invocation between components. The three supported
invocation models are asynchronous request/deferred response, asynchronous
request with callback, and asynchronous one-way.
Module:
Displays the name of the destination module for the event (where the event was
going when it failed).
Component:
Displays the name of the destination component for the event (where the event
was going when it failed).
Operation:
Failure time:
Displays the date and time the event failed. The time shown is local to the process
server, and the value is formatted for the current locale.
Deployment target:
Displays the deployment target for the event. Its value includes the name of the
target node, server, and cluster (if applicable).
Resubmit destination:
Specifies the Java Naming and Directory Interface (JNDI) name of the original
destination, for resubmission purposes.
Correlation ID:
Redelivered count:
Displays the number of times the message has been redelivered to the client.
Displays the delivery method used by JMS. Valid values are PERSISTENT (messages
persist on the destination) and NONPERSISTENT (messages are removed from the
destination).
Resubmit expiration time:
Specifies the amount of time that can elapse before a failed event expires and can
no longer be resubmitted. The time shown is local to the process server.
If a user specifies an expiration with the asynchronous call that sent the event, that
expiration data persists even if the event fails, and the expiration time appears in
the Resubmit expiration time field.
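The expiration rule described above can be sketched as a simple check; the function name and arguments here are illustrative, not part of the product's API:

```python
from datetime import datetime

def can_resubmit(expiration_time, now):
    """Return True if a failed event may still be resubmitted.

    expiration_time is the absolute expiration carried over from the
    original asynchronous call; None means no expiration was set.
    (Hypothetical helper for illustration only.)
    """
    return expiration_time is None or now < expiration_time

# An event whose caller set a 6 PM expiration is still resubmittable at 5 PM:
print(can_resubmit(datetime(2005, 12, 20, 18, 0), datetime(2005, 12, 20, 17, 0)))
```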
Displays the JMS message priority for the queue destination. The value is a
positive integer between zero and nine, with zero indicating the lowest priority.
JMS redelivered:
Specifies whether the message has been previously delivered to the client. Valid
values for this field are true and false.
Displays the destination to which replies are sent for request-response or two-way
operations.
JMS type:
Interaction type:
JMS destination:
Exception text:
Displays the text of the exception that was generated when the event failed.
Use the failed event detail page to view data for a single failed WebSphere MQ
event and to delete or resubmit the event.
To access this page in the console, click Integration Applications > Failed Event
Manager, perform a search for failed WebSphere MQ events, and click the name
of a specific event listed in the Search Results page.
The failed event details page provides the WebSphere MQ header property
information associated with the failed event, as well as details about the event's
destination, time of failure, and cause of failure.
In addition to viewing a failed event's data, you can perform the following tasks
from this page:
v Resubmit the failed event.
v Delete the failed event.
Event ID:
Specifies the unique ID for the failed event. This ID persists even after the event is
resubmitted; if the resubmission fails, the event is returned to the failed event
manager with the same event ID.
The event ID is assigned automatically by the Recovery subsystem; you cannot edit
it.
Event type:
Specifies the type of event that failed. The value is MQ for all failed WebSphere MQ
events.
The event type is assigned automatically by the Recovery subsystem; you cannot
edit it.
Event status:
Specifies the status of the failed event. For WebSphere MQ events, the only
available status type is Failed.
The event status is assigned automatically by the Recovery subsystem; you cannot
edit it.
Interaction type:
Specifies the type of service invocation between components. The three supported
invocation models are asynchronous request/deferred response, asynchronous
request with callback, and asynchronous one-way.
Module:
Displays the name of the destination module for the event (where the event was
going when it failed).
Component:
Displays the name of the destination component for the event (where the event
was going when it failed).
Operation:
Failure time:
Displays the date and time the event failed. The time shown is local to the process
server, and the value is formatted for the current locale.
Deployment target:
Displays the deployment target for the event. Its value includes the name of the
target node, server, and cluster (if applicable).
Resubmit destination:
Specifies the Java Naming and Directory Interface (JNDI) name of the original
destination, for resubmission purposes.
Correlation ID:
Redelivered count:
Displays the number of times the message has been redelivered to the client.
Displays the delivery method used by WebSphere MQ. Valid values are PERSISTENT
(messages persist on the destination) and NONPERSISTENT (messages are removed
from the destination).
Resubmit expiration time:
Specifies the amount of time that can elapse before a failed event expires and can
no longer be resubmitted. The time shown is local to the process server.
If a user specifies an expiration with the asynchronous call that sent the event, that
expiration data persists even if the event fails, and the expiration time appears in
the Resubmit expiration time field.
Message priority:
WebSphere MQ redelivered:
Specifies whether the message has been previously delivered to the client. Valid
values for this field are true and false.
Displays the queue to which replies are sent for request-response or two-way
operations.
Displays the queue manager to which replies are sent for request-response or
two-way operations.
Message type:
Interaction type:
WebSphere MQ destination:
Format:
Exception text:
Displays the text of the exception that was generated when the event failed.
To view this page in the administrative console, click Integration Applications >
Failed Event Manager > Search failed events.
The page contains fields that are common to all event types, as well as fields that
are specific to each type of event that is handled by the Recovery subsystem. These
type-specific fields display only when you search for the related event type. The
following sections describe all the fields available on the page.
Event type:
Specify the type or types of failed events that you want to find.
Event status:
Specify the event status that you want to include in your search. This field is
available only when you are searching for Business Process Choreographer events.
If you are searching for SCA, JMS, WebSphere MQ, or Business Flow Manager
hold queue events only, the value defaults to failed and the field is unavailable.
Module:
Specify the failed event destination module (the module to which the event is
sent). This field is only available when you search for SCA, JMS, WebSphere MQ,
Business Flow Manager hold queue, and Business Process Choreographer events.
The destination is determined by the failed event manager from the perspective of
the failure point. To clarify how the destination is determined, consider the
following example, where Component A asynchronously invokes Component B: the
request message is sent from A to B, and the response message is sent from B to
A. If the exception occurs during the initial request, Component B is the
destination; if the exception occurs during the response, Component A is the
destination.
The Module field accepts the asterisk (*) wildcard character. Values are
case-sensitive. If you leave this field blank, it is treated as a wildcard and all
destination modules are returned.
Component:
Specify the failed event destination component (the component to which the event
is sent). This field is available when you search for all event types.
The destination is determined from the perspective of the failure point. (See the
description for the Module field for more information on how the destination is
determined.)
The Component field accepts the asterisk (*) wildcard character. Values are
case-sensitive. If you leave this field blank, it is treated as a wildcard and all
destination components are returned.
Operation:
Specify the failed event's operation (the method designated to process the event).
This field is available when searching for all event types.
The field accepts the asterisk (*) wildcard character. Values are case-sensitive. If
you leave this field blank, it is treated as a wildcard and all destination methods
are returned.
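The wildcard behavior shared by the Module, Component, and Operation fields can be modeled as follows; `field_matches` is a hypothetical name introduced only for illustration:

```python
import re

def field_matches(pattern, value):
    """Match a search-field value the way the Module, Component, and
    Operation fields are described to behave: '*' is a wildcard,
    matching is case-sensitive, and a blank pattern matches everything.
    (Hypothetical helper for illustration.)
    """
    if not pattern:
        return True  # blank field is treated as a wildcard
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.fullmatch(regex, value) is not None

print(field_matches("Order*", "OrderModule"))  # True
print(field_matches("order*", "OrderModule"))  # False: values are case-sensitive
```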
From date:
To search for events that failed during a particular time period, specify the starting
date and time. This field is available when you search for all event types.
The value for this field must conform to the time and date format that is required
by your computer locale. For example, the required format for the en_US locale is
MM/DD/YY HH:MM Meridiem; a correctly formatted value for the en_US locale
is 12/20/2005 4:30 PM. The page contains an example of the appropriate format for
your locale.
Note that the time is always local to the process server, not to an individual
machine running the administrative console.
To date:
Specify the ending date and time when searching for events that failed during a
particular time period. This field is available when searching for all event types.
The value for this field must conform to the time and date format required by your
computer's locale. (For instance, the required format for the en_US locale is
MM/DD/YY HH:MM Meridiem; a correctly formatted value looks like 12/20/2005
8:30 PM.) The page contains an example of the appropriate format for your locale.
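Assuming Python's standard `strptime` directives, parsing the en_US search values shown above might look like this. The pattern follows the 4-digit-year example (12/20/2005), even though the shorthand reads MM/DD/YY:

```python
from datetime import datetime

# Pattern for the en_US example "12/20/2005 4:30 PM" shown on the page.
EN_US_FORMAT = "%m/%d/%Y %I:%M %p"

def parse_search_time(text):
    """Parse a From date / To date search value (en_US locale)."""
    return datetime.strptime(text, EN_US_FORMAT)

start = parse_search_time("12/20/2005 4:30 PM")
end = parse_search_time("12/20/2005 8:30 PM")
print(start < end)  # True: a valid From/To range
```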
Session ID:
Specify the ID for the session in which you want to search. This field is available
when you search for SCA events.
Every event executes in a session; the session includes all of the information that is
needed to process an event. If an event fails, the failed event manager encapsulates
specific session information for the failed execution branch in the Session ID
parameter.
Source module:
Specify the module from which an event originates. This field is available when
you search for SCA events.
The source is determined from the perspective of the failure point. To help clarify
how the source is determined, consider the following example, where Component
A is asynchronously invoking Component B. The request message is sent from A to
B, and the response message is sent from B to A.
v If the exception occurs during the initial request, Component A is the source and
Component B is the destination for the purposes of the failed event manager.
v If the exception occurs during the response, Component B is the source and
Component A is the destination for the purposes of the failed event manager.
The field accepts the asterisk (*) wildcard character. Values are case-sensitive. If
you leave this field blank, it is treated as a wildcard and all source modules are
returned.
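The request/response orientation described in the bullets above can be sketched as a small helper; the function and its arguments are hypothetical, for illustration only:

```python
def classify(caller, callee, failed_on_response):
    """Determine source and destination the way the failed event manager
    does: from the perspective of the failure point. If the failure
    happens on the response leg, the roles reverse.
    (Hypothetical helper for illustration.)
    """
    if failed_on_response:
        return {"source": callee, "destination": caller}
    return {"source": caller, "destination": callee}

# Component A asynchronously invokes Component B; the request fails:
print(classify("ComponentA", "ComponentB", failed_on_response=False))
```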
Source component:
Specify the component from which an event originated. This field is available
when searching for SCA events.
The source is determined from the perspective of the failure point. See the
description for the Source module field for more information on how the source is
determined.
The Source component field accepts the asterisk (*) wildcard character. Values are
case-sensitive. If you leave this field blank, it is treated as a wildcard and all source
components are returned.
Business object type:
To search for events that contain a specific business object type, specify the type.
This field is available when you search for SCA events.
The Business object type field accepts the asterisk (*) wildcard character. Values are
case-sensitive. If you leave this field blank, it is treated as a wildcard and all events
are returned.
You can search for events that caused the store to be started. This field is
available when you search for SCA events.
Exception text:
To search for exception text in failed events, specify the text. This field is available
when you search for SCA events.
You can specify all of the text that appears in the exception, or specify a fragment
of it by using the asterisk (*) wildcard character. Values are case-sensitive. If you
leave this field blank, it is treated as a wildcard and all events are returned.
Use the Business Data Editor collection page to view the business data parameters
associated with a failed SCA or Business Process Choreographer event. For SCA
events, you can also use this page to select a parameter to edit.
To access this page in the console, click Integration Applications > Failed Event
Manager > Get all failed events > failed_event > Edit business data.
Failed events typically include business data. This data can be encapsulated into a
business object, or it can be simple data that is not part of a business object. The
business data is organized into a hierarchy. At the top of the hierarchy are the
business objects and any simple data. When first opened, the Business Data Editor
collection page displays this top-level data.
Each parameter name in the hierarchy is a link. If the parameter is a simple data
type, clicking its name opens the Business Data Editor page so you can edit the
parameter value. If the parameter is a business object or a complex data type,
clicking its name expands the hierarchy further.
As you navigate through the levels of data, the business data hierarchy is updated
to reflect your current location within the hierarchy. Each level in the hierarchy is a
link, making it simple to move back and forth.
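The hierarchy navigation described above can be modeled as a walk over nested data. The dict model below is an illustration, not the product's internal representation:

```python
def walk(business_data, path):
    """Descend the business-data hierarchy one parameter link at a time.

    Complex parameters (modeled here as dicts) expand further; anything
    else is a simple value that the editor lets you change.
    """
    node = business_data
    for name in path:
        node = node[name]
    return node

# Top-level data: one business object ("customer") and one simple value.
order = {"customer": {"name": "A. Smith", "id": 42}, "total": 99.5}
print(walk(order, ["customer", "name"]))  # a simple, editable parameter
```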
Use the Business Data Editor page to edit a simple parameter in a failed event’s
business data. This page displays the name and type of the parameter, as well as
any current value it has.
To access this page in the console, click Integration Applications > Failed Event
Manager > Get all failed events > failed_event > Edit business data >
business_data_parameter.
Note that any changes you make to the parameter’s value are only saved locally.
You must resubmit the event from the Business Data Editor collection page to
make the changes at a server level.
Parameter name:
Specifies the name of the parameter. If the parameter does not have a name, the
field lists the index value instead.
Parameter type:
Parameter value:
Specify the value for the business data parameter in this field.
Use this field to edit the existing value for the parameter. When editing a
parameter value, ensure that the new value is valid for the parameter data type.
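The requirement that a new value be valid for the parameter's data type could be checked as sketched below; the type names and converters are assumptions for illustration, not the product's type system:

```python
# Hypothetical converters for a few simple parameter types; a failed
# conversion means the new value is invalid for the declared type.
CONVERTERS = {
    "string": str,
    "int": int,
    "float": float,
    "boolean": lambda s: {"true": True, "false": False}[s],
}

def validate(param_type, text):
    """Return (ok, converted_value) for a proposed parameter value."""
    try:
        return True, CONVERTERS[param_type](text)
    except (KeyError, ValueError):
        return False, None

print(validate("int", "42"))         # accepted
print(validate("int", "forty-two"))  # rejected: not valid for the type
```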
Use the Resubmit with Trace page to resubmit one or more failed events with
tracing enabled. By using trace for a specific session, you can monitor the
resubmission of the failed event to determine whether it has successfully
completed.
To access this page in the console, click Integration Applications > Failed Event
Manager > Get all failed events, select a failed event by clicking its check box,
and then click Resubmit with trace.
Trace control:
Specify the level of trace to set for resubmitted failed events in this field.
You can modify the trace setting values, enable tracing for either an Advanced
Integration service or a component, and have the trace output sent to a log or the
Common Event Infrastructure (CEI) server. See the monitoring topics in the
Business Process Management Information Center for detailed trace information.
Use the Delete Failed Events page to see all of the failed events marked for
deletion and to complete the deletion process. If you are deleting a Business
Process Choreographer failed event that has an associated BPEL process instance,
the Delete Failed Events page indicates the name of that instance and deletes it
along with the event.
If the number of events to delete is smaller than the maximum number of rows
you set in the console preferences, the Delete Failed Events page lists each event in
a table, along with the following information.
v Event ID: The unique ID for the event.
v Event type: The type of failed event (SCA, JMS, Business Flow Manager hold
queue, or Business Process Choreographer).
If the number of events is greater than the maximum number of rows, the Delete
Failed Events page simply lists the number of failed events marked for deletion
instead of listing each event and its details.
In addition, the Delete Failed Events page provides a summary of the number of
events you are deleting. If you are using Business Process Manager, the page lists
any Business Process Choreographer process instances that will be deleted with the
failed events.
Button Function
Delete Deletes the failed events and any associated BPEL
process instances.
Cancel Cancels the deletion and returns you to the Search
Results page.
Use this page to select the server on which to configure Cross-Component Trace.
When you select a server, you go to a configuration detail page where you can set
the Cross-Component Trace configuration parameters.
The table on the Cross-Component Trace page lists each server on which to enable
or disable tracing. It has the following columns:
Server Specifies the name of the server.
To configure Cross-Component Trace for a particular server, click the server
name. This will bring you to the Cross-Component Trace configuration
page for the server that you selected.
Node Specifies the node on which the server is running.
Host Name
Specifies the host name of the server.
Version
Specifies the version of the server. The Cross-Component Trace
configuration page that you go to by selecting the server name varies
depending on the version of the server.
Type Specifies the server type.
Use this page to manage and configure Cross-Component Trace disk space use.
From this page you can set a disk use threshold that the system uses to delete
Cross-Component Trace data snapshot files automatically. You can also select
Delete data snapshot files to remove data snapshot files immediately from the disk.
In use:
Specify in MB the allowed disk space use for data snapshot files on the selected
server.
The value you enter represents a threshold. When the threshold is exceeded, the
system automatically deletes data snapshot files from the disk, oldest first, until
enough space is available to write the next data snapshot while remaining below
the disk use threshold.
The minimum threshold is 50 MB; settings below this minimum are converted to
the minimum. A setting of 0 or less (for example, -1 or -2) is treated as 0 and
turns off the automatic delete feature.
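The threshold policy above (oldest-first deletion, 50 MB minimum, 0 disables) can be sketched as follows; the function is a hypothetical model of the documented behavior, not product code:

```python
def files_to_delete(snapshots, threshold_mb, next_snapshot_mb):
    """Choose data snapshot files to delete, oldest first.

    snapshots is a list of (timestamp, size_mb) pairs in any order.
    Deletion stops once the remaining files plus the next snapshot fit
    under the threshold. (Hypothetical sketch of the documented policy.)
    """
    if threshold_mb <= 0:
        return []                         # 0 or negative disables auto-delete
    threshold_mb = max(threshold_mb, 50)  # 50 MB minimum is enforced
    remaining = sorted(snapshots)         # oldest timestamps first
    used = sum(size for _, size in remaining)
    doomed = []
    while remaining and used + next_snapshot_mb > threshold_mb:
        timestamp, size = remaining.pop(0)
        doomed.append(timestamp)
        used -= size
    return doomed

# Three 30 MB snapshots, 100 MB threshold, 30 MB snapshot pending:
print(files_to_delete([(1, 30), (2, 30), (3, 30)], 100, 30))  # [1]
```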
Use this page to view and set the Cross-Component Trace parameters for V7.5.1
servers. You can set the parameters for a server or for a specific module. If you
want to perform Cross-Component Trace on specific modules on the server, you
can access a list of SCA modules from this page. The parameters that you set on
the Configuration tab apply when the server is restarted. The parameters that you
set on the Runtime tab apply immediately.
Use this page to enable or disable a Cross-Component Trace (for the server or
specific modules), specify where trace data is stored, and choose how to configure
Cross-Component Trace logging.
Runtime
Runtime parameters for Cross-Component Trace display on the Runtime tab. The
parameters you set on the Runtime tab are applied to the server immediately.
Server settings
Trace all
Select this option to turn on Cross-Component Trace for the creation of call
chain information for all SCA modules in the server. Even with Trace All
selected, you can add additional SCA modules to the table of modules under
Enable tracing for the selected Service Component Architecture (SCA)
modules.
When you select Trace all, the server honors Cross-Component Trace call
chains for modules coming from (inbound) other servers. When Trace all is
selected, the server also checks if Cross-Component Trace is on for any
modules in the server and honors those settings so that calls to those modules
result in application-specific Cross-Component call chains.
Enable data snapshot on this server
Select this option to enable the data snapshot feature of Cross-Component
Trace.
When data snapshot is enabled, the system captures data sent in and passed
between SCA components. This extra data (about what was passed between
SCA components) can be large and is kept in separate files and not in the
trace.log or systemout.log.
You can delete data snapshot files from the administrative console. For
information on managing Cross-Component Trace disk use, select
Cross-Component Trace disk use from the Additional properties section of this
console page.
Module Settings
Enable tracing for the selected Service Component Architecture (SCA)
modules
This table provides a list of modules for which Cross-Component Trace has
been enabled.
Try to keep the list of SCA modules in Enable tracing for the selected Service
Component Architecture (SCA) modules small. If the number of modules listed in
the table begins to grow, consider selecting Trace all instead. Each SCA module
added to the list has a small effect on performance.
This table provides the following functions:
v Add SCA modules
Choosing Add brings you to a page listing all of the SCA modules running
on the server. From the list of SCA modules, you can choose those on which
you want to enable Cross-Component Trace.
v Remove
Choosing Remove removes the SCA module you have selected from the list.
By selecting a module and clicking Remove, you disable Cross-Component
Trace functionality on that module.
Select
Selects the Service Component Architecture (SCA) modules to remove
from the list of modules on which Cross-Component Trace is enabled.
Module Name
The names of the Service Component Architecture (SCA) modules on
which Cross-Component Trace is enabled.
Version
The version of the Service Component Architecture (SCA) modules
running on the selected server.
Cell Identifier
The identifier of the cell on which the SCA module is deployed.
Enable Data Snapshot
Indicates whether the data snapshot feature is enabled for
Cross-Component Trace on that module.
When data snapshot is enabled, the system captures data sent in and
passed between SCA components. This extra data (about what was passed
between SCA components) can be large and is kept in separate files and
not in the trace.log or systemout.log.
Trace output
Enable Cross-Component Trace
Selecting Enable Cross-Component Trace prepares the server for the following:
v Cross-Component Trace for inbound application-specific call chains
v Enabling cross-component trace on any module selected under Enable
tracing for the selected Service Component Architecture (SCA) modules.
Selecting Enable Cross-Component Trace on the Configuration tab causes data
to be collected when the server starts or restarts.
Clearing or not selecting Enable Cross-Component Trace disables
Cross-Component Trace functionality for the server.
Save Cross-Component Trace output to
Choose which file will hold the data gathered by the Cross-Component Trace
operations performed on the server. Options presented are as follows:
v Trace
Trace is the default and recommended option. Selecting Trace maps to the
WebSphere Application Server logging level of Fine and provides the best
performance for collecting Cross-Component Trace data. If you select this
option, it results in trace data being written to the trace.log file.
v System.Out
Selecting System.Out maps to the WebSphere Application Server logging
level of Info. Selecting System.Out results in Cross-Component Trace data
being written to the systemout.log file.
Choosing System.Out takes the system more time to write out the log.
However, SystemOut.log contains less data, making its contents less time
consuming to review.
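The two output choices and their documented effects can be summarized in a small lookup table (illustrative only; the file-name casing follows the text above):

```python
# Documented effects of the "Save Cross-Component Trace output to" choice.
OUTPUT_OPTIONS = {
    "Trace": {                      # default and recommended
        "level": "Fine",            # WebSphere Application Server logging level
        "log": "trace.log",         # where the trace data is written; fastest
    },
    "System.Out": {
        "level": "Info",
        "log": "SystemOut.log",     # smaller log, but slower to write
    },
}

print(OUTPUT_OPTIONS["Trace"]["log"])
```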
Server settings
Trace all
Select this option to turn on Cross-Component Trace for the creation of call
chain information for all SCA modules in the server. Even with Trace All
selected, you can add additional SCA modules to the table of modules under
Enable tracing for the selected Service Component Architecture (SCA)
modules.
When you select Trace all, the server honors Cross-Component Trace call
chains for modules coming from (inbound) other servers. When Trace all is
selected, the server also checks if Cross-Component Trace is on for any
modules in the server and honors those settings so that calls to those modules
result in application-specific Cross-Component call chains.
Module Settings
Enable tracing for the selected Service Component Architecture (SCA)
modules
This table provides a list of modules for which Cross-Component Trace has
been enabled.
Try to keep the list of SCA modules in Enable tracing for the selected Service
Component Architecture (SCA) modules small. If the number of modules listed in
the table begins to grow, consider selecting Trace all instead. Each SCA module
added to the list has a small effect on performance.
This table provides the following functions:
v Add SCA modules
Choosing Add brings you to a page listing all of the SCA modules running
on the server. From the list of SCA modules, you can choose those on which
you want to enable Cross-Component Trace.
v Remove
Choosing Remove removes the SCA module you have selected from the list.
By selecting a module and clicking Remove, you disable Cross-Component
Trace functionality on that module.
Select
Selects the Service Component Architecture (SCA) modules to remove
from the list of modules on which Cross-Component Trace is enabled.
Module Name
The names of the Service Component Architecture (SCA) modules on
which Cross-Component Trace is enabled.
Version
The version of the Service Component Architecture (SCA) modules
running on the selected server.
Cell Identifier
The identifier of the cell on which the SCA module is deployed.
Enable Data Snapshot
Indicates whether the data snapshot feature is enabled for
Cross-Component Trace on that module.
Use this page to view and set the Cross-Component Trace parameters for V7.0.0.x
servers. The parameters that you set on the Configuration tab apply when the
server is restarted. The parameters that you set on the Runtime tab apply
immediately.
Runtime
Runtime parameters for Cross-Component Trace display on the Runtime tab. The
parameters you set on the Runtime tab are applied to the server immediately.
Save all runtime changes to the server configuration file
Select this option if you want to apply the changes made for runtime (which
are applied by the system immediately) to the configuration.
If you select Save all runtime changes to the server configuration file, the
changes you make are also applied by the system when the server starts or
restarts.
Enable Cross-Component Trace
Selecting Enable Cross-Component Trace turns Cross-Component Trace on for
the server.
Clearing or not selecting Enable Cross-Component Trace disables
Cross-Component Trace functionality for the server.
Enable data snapshot on this server
Select this option to enable the data snapshot feature of Cross-Component
Trace. Selecting Enable data snapshot on this server results in data being
collected when the server starts or restarts.
Configuration
Use this page to add modules to the list of modules on which to enable
Cross-Component Trace.
The display lists the SCA modules on the server and contains the following
columns:
Select
Selects Service Component Architecture (SCA) modules on which you want to
run Cross-Component Trace.
The modules that you select will be added to the list of modules under Enable
tracing for the selected Service Component Architecture (SCA) modules on
the Cross-Component Trace page for the server.
Module Name
The names of the Service Component Architecture (SCA) modules running on
the selected server.
Version
The version of the Service Component Architecture (SCA) modules running on
the selected server.
Cell Identifier
The identifier of the cell on which the SCA module is deployed.
Deployment environments
A deployment environment is a collection of configured clusters, servers, and
middleware that collaborate to provide an environment to host software modules.
For example, a deployment environment might include a host for message
destinations, a processor or sorter of business events, and administrative programs.
Deployment Environments:
Use this page to display, manage, change, import, or export the defined
deployment environments.
Security role required: Your user ID must be associated with either the
Administrator or the Configurator role to use this page.
This page displays the list of deployment environments defined in the cell. From
this page you can:
v Start or stop configured deployment environments
v Edit deployment environment configurations
v Create a new deployment environment
v Remove a deployment environment
v Import or export a deployment environment design
A deployment environment design is an external document that describes and
defines the specific component, cluster/node/server configuration, resources and
related configuration parameters that make up a deployment environment. A
deployment environment design is an instance of a deployment environment
configuration. Deployment environment designs are sometimes referred to as
deployment environment definitions.
Start:
Stop:
New:
Remove:
Import:
Export:
Exports the selected deployment environments to a set of files that you can import
later. You can import these files into this cell or another cell. When you select a
single deployment environment and click Export, the output is a single
deployment environment design file. When you select multiple deployment
environments and click Export, the output is a compressed file that contains the
exported design files.
Deferred Configuration:
Use this page to display the steps that cannot be completed using the
administrative console but are required to complete the configuration of the
deployment environment. The steps include how to run the necessary scripts and
the location of those scripts.
Security role required: Your user ID must be associated with either the
Administrator or the Configurator role to use this page.
For example: The configuration has been performed by admin_user on July 20,
2007 9:00:00 AM PDT.
Close closes the page and returns you to the previous console page.
Deployment Topology:
Use this page to add nodes to deployment environments and assign nodes to the
functional areas within a deployment environment based on an IBM-supplied
pattern. You can also remove nodes from the deployment environment definition.
The panel enforces any functional constraints you define to prevent configuration
errors caused by insufficient resources for a specific function.
Security role required: Your user ID must be associated with either the
Administrator or the Configurator role to use this page.
Use this page to view a deployment environment. Depending on the pattern, you
can also access Deployment Topology, Deferred Configuration, Data Sources, and
Authentication Aliases pages from this page.
Security role required: Your user ID must be associated with either the
Administrator or the Configurator role to use this page.
Important: After you have configured the deployment environment, this page
displays in read mode and you cannot change any of the field values.
Deployment Environment:
Specifies the name of the deployment environment. The system uses this name in a
pattern applied to cluster names generated from this deployment environment
definition so you can associate clusters with their associated deployment
environments.
Note: You will not be able to change the name after you have generated the
deployment environment.
Property Value
Data type String
Value The value you specified when you created
this deployment environment.
Specifies the deployment environment pattern used when creating the deployment
environment.
Property Value
Data type String
Value The value you specified when you created
this deployment environment.
Property Value
Data type String
Default None
Deployment Topology:
Lists the purpose of each cluster or server in the deployment environment. This
status is available for IBM-supplied deployment environment patterns and reflects
the status of the resources that make up that function.
Note: This section does not display for a custom topology instance.
Cluster
Specifies the function of the cluster in the deployment environment. The values
can be Application Deployment Target, Messaging Infrastructure, Supporting
Infrastructure, or Web Application Infrastructure.
Cluster Name
The name used to identify this cluster in the deployment environment
definition. The system derives the name from the name of the deployment
environment.
Status
The current status of each function. The icons are explained in “Deployment
environment functional status”.
Use this page as the first step to configure your deployment environment. The
deployment environment includes the application deployment target for your
applications and all other clusters needed to support the application deployment
target.
This is the first page of the wizard that guides you through the process of creating
your deployment environment. Use this page to create a deployment environment
based on one of the IBM-supplied patterns, to create a custom deployment
environment, or to create a deployment environment by importing a deployment
environment design document.
Specifies the name used to identify the deployment environment you are creating.
When you select Create a deployment environment based on an imported design,
the system makes this field unavailable and imports the deployment environment
name from the deployment environment design document. If a design with the
same name already exists, you receive a warning and are prompted to supply a
unique name before the import can complete.
Property Value
Data type String
Default None
Range Any string identifier for the deployment
environment. The system may apply the
name you enter to cluster names and cluster
member names in the deployment
environment. Use a short, concise name to
identify the created clusters and so that the
names of the created resources do not
become too long.
Specifies that you do not want the wizard to display configuration steps that have
default values. Selecting this option displays only the configuration steps that do
not have default values.
Property Value
Data type Binary
Default Checked
Range Checked or unchecked
Specifies that you want the wizard to display all of the configuration steps, even
those steps that have default values.
Property Value
Data type Binary
Default Unchecked
Use this page to select the feature for the deployment environment. Features
represent the runtime processing capabilities of your deployment environment. The
features that display on this page are based on the configuration of the deployment
manager profile.
If your deployment manager profile has been augmented to support other IBM
BPM products (features) besides WebSphere ESB, then the Deployment
Environment Features page will list these features as well.
Use this page to select available features that are compatible with the deployment
environment feature. The list of compatible features depends on the deployment
environment feature that you selected from the Deployment Environment Features
page.
Note: The Select compatible deployment environment features page displays only
if the deployment manager has been augmented with other business process
management (BPM) features.
The features listed on this page are based on rules of feature coexistence as
determined by the primary feature for which you are creating the deployment
environment.
The runtime processing capabilities of the feature that you select from this page
will be part of the deployment environment.
Use this page to select a pattern that provides the topological characteristics of the
deployment environment. This page contains descriptions of each of the supported
patterns.
The list of patterns that displays on the Deployment Environment Patterns page is
dynamic; it depends on the following environment conditions and configuration
decisions:
v The platform on which you have installed the software.
v The selections that you have made on the Select the deployment environment
feature and the Select compatible deployment environment features pages.
On this page you must select one of the deployment environment patterns. Follow
these tips regarding IBM-supplied topologies.
v For an IBM Process Server deployment environment, these topologies work best:
– Remote Messaging, Remote Support, and Web - Four-cluster topology pattern
– Remote Messaging and Remote Support - Three-cluster topology pattern
v For an IBM Process Center deployment environment, these topologies work best:
– Single Cluster topology pattern
– Remote Messaging - Two-cluster topology pattern
Single Cluster
This pattern combines the application deployment target, the messaging
support and additional support functions into a single cluster.
This is the default pattern for a z/OS installation.
Supported feature combinations for this pattern include the following:
v WebSphere ESB / IBM Business Process Manager
v IBM Business Monitor
v IBM Business Monitor + WebSphere ESB / IBM Business Process Manager
Remote Messaging
This pattern, which consists of two clusters, separates the cluster providing the
messaging support from the cluster providing the application deployment
target and additional support functions.
This is not a default pattern.
Supported feature combinations for this pattern include the following:
v WebSphere ESB / IBM Business Process Manager
Chapter 8. User interfaces 543
Note: IBM Business Monitor does not support this pattern.
Remote Messaging and Remote Support
This pattern, which consists of three clusters, separates the application deployment
target cluster from the cluster providing the messaging infrastructure and
support and the cluster providing additional support functions.
This is the default pattern for IBM Business Process Manager Advanced
(including WebSphere ESB) and IBM Business Process Manager Standard.
Supported feature combinations for this pattern include the following:
v WebSphere ESB / IBM Business Process Manager
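The cluster counts of the patterns described above can be summarized in a small sketch; the cluster groupings are paraphrased from the pattern descriptions:

```python
# Sketch: the number of clusters each IBM-supplied pattern creates, with
# the functions each cluster combines (paraphrased from the pattern text).
PATTERN_CLUSTERS = {
    "Single Cluster": [
        "application deployment target + messaging + support"],
    "Remote Messaging": [
        "application deployment target + support", "messaging"],
    "Remote Messaging and Remote Support": [
        "application deployment target", "messaging", "support"],
    "Remote Messaging, Remote Support, and Web": [
        "application deployment target", "messaging", "support",
        "web applications"],
}

for pattern, clusters in PATTERN_CLUSTERS.items():
    print("%s: %d cluster(s)" % (pattern, len(clusters)))
```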
Cluster Naming:
Use the Cluster Naming page to customize the names of clusters or cluster
members.
The Cluster Naming page applies to the cluster type (for example, Application
Deployment Target, Messaging Infrastructure, Supporting Infrastructure) that you
are configuring for the deployment environment.
Note: This cluster type name applies to BPM configurations in which IBM
Business Monitor is the primary feature or product.
The table lists the cluster members that are part of the cluster you are configuring.
The number of cluster member names that display in the table matches the number
of cluster members that you entered for the cluster type column and node row on
the Clusters page.
Select Nodes:
Use this page to select the nodes for the deployment environment. When needed,
you can add more nodes to the deployment environment through the
administrative console or by using the addNodetoDeploymentEnvDef command. The
page displays the nodes currently federated to the deployment manager.
To add nodes to the deployment environment, click Select on the row that contains
the node to include. The minimum number of nodes required for the topology is
indicated in Number of nodes needed.
Cluster Members:
Use this page to specify the number of cluster members from each node to assign
to specific deployment environment functions.
The table displays the nodes you included in the deployment environment and
allows you to specify the layout of cluster members within the deployment
environment.
If you do not want a node to include a specific cluster function, you can set the
value for that function to zero (0). However, the default value is 1.
Each deployment environment pattern defines cluster types that provide specific
functions in the environment. Based on the pattern, you define the following
functions:
Application Deployment Target
Consists of a cluster to which user applications need to be deployed.
Depending on the chosen deployment environment pattern, the application
deployment target cluster may also assume the functionality of the messaging
and the supporting infrastructure clusters.
Messaging Infrastructure
Consists of a cluster where the bus members are located.
Supporting Infrastructure
Consists of a cluster that hosts the Common Event Infrastructure server and
other infrastructure services that are used to manage your system.
Web Applications
Consists of a cluster that hosts the web-based components of the deployment
environment.
Sub-steps for customizing the cluster name and cluster member name of each
cluster type display on separate pages.
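The node-by-function layout described above can be sketched as a small table of counts; the node and function names here are illustrative, not the product's schema:

```python
# Sketch: a cluster-member layout as a table of nodes x cluster functions.
# The default is one member per node per function; setting a count to 0
# excludes that node from the function. Names are illustrative only.
layout = {
    "node1": {"AppTarget": 1, "Messaging": 1, "Support": 1},
    "node2": {"AppTarget": 1, "Messaging": 0, "Support": 1},
}

def members_per_function(layout):
    """Total cluster members assigned to each function across all nodes."""
    totals = {}
    for counts in layout.values():
        for function, n in counts.items():
            totals[function] = totals.get(function, 0) + n
    return totals

print(members_per_function(layout))
```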
REST Services:
Use this page to configure services for Representational State Transfer (REST)
application programming interfaces (APIs). If you want REST APIs available
during run time, you must configure the REST services. If you want widgets to be
available in Business Space, you must configure the REST services for those
widgets.
For servers, select: Servers > Server Types > WebSphere application servers >
name_of_server > Business Integration > REST Services.
For clusters, select: Servers > Clusters > WebSphere application server clusters >
name_of_cluster > Business Integration > REST Services.
Protocol:
Select either http:// or https://, which is used to form the endpoint address for all
the REST services on this page. Configure a full URL path by selecting http:// or
https:// and then setting the host and port. In an environment with a load balancer
or proxy server between the browser and the Business Space and REST services,
make sure that what you designate here matches the browser URL for accessing
Business Space.
Host name:
Type the host or virtual host that a client needs to communicate with the cluster.
Use a fully qualified host name. In an environment with a load balancer or proxy
server between the browser and the Business Space and REST services, make sure
that what you designate here matches the browser URL for accessing Business
Space. If you are using the Deployment Environment Configuration wizard, and
you leave the host and port fields empty, the values default to values of an
individual cluster member host and its HTTP port. For a load-balanced
environment, you must later change the default values to the virtual host name
and port of your environment.
Port:
Type the port for all REST services that you want to configure. For a load-balanced
environment, designate the port number that the client needs to communicate with
the cluster. In an environment with a load balancer or proxy server between the
browser and the Business Space and REST services, make sure that what you
designate here matches the browser URL for accessing Business Space.
Context root:
This field lists the context root for the REST service provider. This field is
read-only.
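Taken together, the protocol, host, port, and read-only context root form the endpoint address of each REST service. A minimal sketch of that composition (the context root value below is illustrative only):

```python
# Sketch: how the protocol, host, port, and read-only context root
# combine into a REST service endpoint address. The context root shown
# is a hypothetical example, not a documented product value.
def rest_endpoint(protocol, host, port, context_root):
    return "%s://%s:%d%s" % (protocol, host, port, context_root)

print(rest_endpoint("https", "bpm.example.com", 9443, "/rest/example"))
```

In a load-balanced environment, the host and port passed here would be the virtual host name and port that the browser uses, not those of an individual cluster member.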
Use this page to import a database configuration document that defines the
database metadata definitions for the deployment environment. Importing a
database configuration is optional.
The design document can be based on a database design that you created using
the database design tool (DDT), or it can be the supplied design document based
on the pattern and feature that you have selected. If you do not have a database
design document, you can set the database parameters on the next step (Database)
of the Deployment Environment Configuration wizard. You may also correct any
database parameters of an imported database design document if needed.
The database design document defines the database metadata definitions for the
selected deployment environment features.
There can be database configuration documents for each feature of the deployment
environment. If there are multiple features supported by the deployment manager,
and if there are database design documents for each of the features, you can use
this page to import both database design documents.
Note: If your deployment manager supports multiple features, and if there are
tables in the imported database configuration documents that have common
definitions (for example, IBM Business Process Manager and IBM Business Monitor
can both have a table definition for Business Space), the system uses the Business
Space database definition from the database configuration document for the
primary feature, not the one for the compatible feature. So, for database design
documents that include common tables, the table associated with the primary
feature takes precedence over the table associated with the compatible feature.
Note: The database design document that you import for the deployment
environment does not change the common database (commonDB) that was created
at profile creation time.
Database:
Use this page to configure the data sources supporting the deployment
environment and to optionally create the tables in the database that are a part of
the deployment environment. The wizard defines the names of the data sources
and you cannot change the names.
Security:
Use this page to configure the authentication aliases that WebSphere uses when
accessing secure components. The authentication alias user name and password
can be changed on this page. These aliases are used to access secure components
but do not provide access to data sources.
Use this page to verify or change the context roots for the various components.
This page displays the current values of the context roots for the web applications
that are part of the deployment environment you are creating.
Context root:
The context root is used to access the instances of the web applications that
manage the BPEL processes.
For Business Rules Manager, this specifies the context root for the instance of the
business rules manager that runs on this cluster or server.
For Business Space, this specifies the context root for the instance of the business
space manager that runs on this cluster or server. The context root for Business
Space is read-only.
Property Value
Data type String
Description:
The free form textual description of the web application context root for the
component.
Property Value
Data type String
Default None
Summary:
Use this page to verify the values you are defining for this deployment
environment.
Finish
Saves the deployment environment definition.
Finish and Generate Environment
Saves the deployment environment definition and configures the resources that
make up the deployment environment.
Use this page to verify that you are deleting the correct deployment environment.
The page shows a table for each deployment environment that you selected for
deletion. Each table shows the deployment environment name and the resources
associated with that deployment environment.
Deployment environment:
Property Value
Data type String
Default None
Resource:
Property Value
Data type String
Default None
Type:
Property Value
Data type String
Range Cluster | Server
Default None
Use this page to verify or change the context roots for the various components.
This page displays the current values of the context roots for:
v Business rules manager
Context Root
The current value of the context root for the component.
Change the context roots by typing over the value in the Context Root field.
Context root:
Property Value
Data type String
Default Dependent on the component.
v /br for business rules manager
The remote artifact loader server hosts artifacts installed on the server, making
them available to remote artifact loader clients in the same or other cells. The
remote artifact loader client can query or load artifacts from remote artifact loader
servers.
When you configure the client to work with a server in a different cell, a host
name and port number must be provided. This host name and port number
information can correspond to either the deployment manager of the remote cell,
or any server in the remote cell.
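The cross-cell requirement described above can be sketched as a small validation; the field names are illustrative, not the product's configuration schema:

```python
# Sketch: a host name and port are required only when the remote artifact
# loader server lives in a different cell from the client. Field names
# are hypothetical, not the product's configuration schema.
def validate_proxy(config, local_cell):
    cross_cell = config["server_cell"] != local_cell
    if cross_cell and not (config.get("host") and config.get("port")):
        raise ValueError("cross-cell access requires the remote deployment "
                         "manager host name and port")
    return cross_cell

print(validate_proxy({"server_cell": "cellB", "host": "dmgr.example.com",
                      "port": 8879}, local_cell="cellA"))
```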
The remote artifact loader server configuration holds settings that apply to all
remote artifact loader servers managed in this cell. You can secure all your
remote artifact loader servers with a user name and password. When the remote
artifact loader server security is enabled, any remote artifact client that wants to
query or load artifacts from any server in this cell must authenticate with the given
credentials. The https protocol is used if security is enabled.
Use this page to specify settings for all remote artifact loaders managed in this cell.
User name:
Specify the identity that each remote artifact client must provide in order to query
or load artifacts from other servers in the cell.
Table 77. User name
Property Value
Data type String
Password:
Specify the password associated with the identity that each remote artifact client
must provide in order to query or load artifacts from other servers in the cell.
Table 78. Password
Property Value
Data type String
The remote artifact loader client collection page lists the client proxies that are
configured to load artifacts. Use this page to add, remove or edit client proxy
configurations.
To view this administrative console page, click Resources > Remote Artifacts >
Client Configuration.
Client artifact loader proxy configurations are displayed in a table, along with the
following information for each configuration:
v Name - The name assigned to the configuration.
v Remote deployment manager - The remote deployment manager associated with
the client proxy configuration. The remote deployment manager is shown in the
format host_name:host_port. The remote deployment manager information is
applicable only if the remote artifact loader server and client reside in different
cells. When server and client exist in the same cell, communication does not
need to go through any deployment manager.
Button Function
Add Opens the Client Configuration settings page, allowing
you to enter detailed information about a new client
artifact loader proxy configuration.
Remove Removes one or more client artifact loader
configurations.
The remote artifact loader client settings page provides detailed information about
the new or existing client artifact loader configurations. Use this page to create
new client proxy configurations or edit existing client proxies.
Use the administrative commands on this page to manage the remote artifact
loader client configuration.
To view this administrative console page, click Resources > Remote Artifacts >
Client Configuration > client_configuration (or Resources > Remote Artifacts >
Client Configuration > Add).
Configuration name:
Identifies the particular client artifact loader configuration that is being edited.
Indicates whether the artifact loader security was enabled for the relevant artifact
loader server.
Property Value
Data type Check box
User name:
Specify the user name that corresponds with the identity provided when remote
artifact loader security was enabled on the artifact loader server.
Password:
Specify the password associated with the user name, provided when the remote
artifact loader security was enabled on the artifact loader server.
Host Name:
This field is used only for cross-cell remote artifact loading. If the remote artifact
loader server that you intend to use is not part of the client's cell, you must
provide the host name of the deployment manager of the remote cell.
Host Port:
Specify the port number of the deployment manager of the remote cell that
contains the remote artifact loader server that you want to access.
Type:
Property Value
Data type String
Default soap
Range soap, rmi
Specifies whether user credentials are supplied to the remote artifact loader server.
When using remote artifact loader servers on other cells, you must provide the
artifact loader security credentials that were supplied to the remote server.
Select this check box if the remote artifact loader on the other cell is using security.
Property Value
Data type Check box
User name:
Specify the user name provided to the remote artifact loader server on the remote
cell when artifact loader security was enabled.
Property Value
Data type String
Password:
Specify the password provided to the remote artifact loader server on the remote
cell when artifact loader security was enabled.
Property Value
Data type String
Relationship Service:
Use this page to view and set the configuration properties for the relationship
service.
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationship Services configuration.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a configurator or an administrator to modify
the configuration properties. Any WebSphere security role can view this
configuration.
Specify the data source and page size properties for the relationship service. You
then have the following options:
v Click OK to save your changes and return to the previous page.
v Click Reset to clear your changes and restore the currently configured values or
most recently saved values.
v Click Cancel to discard any unsaved changes on the page and return to the
previous page.
Name: Specifies the name of the current relationship service. This field is
automatically set.
Query block size (relationship instance count): Specify the maximum cache that the
relationship service should set aside for relationship queries. This setting
determines the size of the query results set. By default, 5000 relationship instances
are read at once.
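The effect of the query block size setting can be sketched as block-wise reading of a large result set (default block size 5000):

```python
# Sketch: reading a large relationship query result set in fixed-size
# blocks, in the spirit of the query block size setting (default 5000).
def read_in_blocks(instances, block_size=5000):
    for start in range(0, len(instances), block_size):
        yield instances[start:start + block_size]

blocks = list(read_in_blocks(list(range(12000))))
print([len(b) for b in blocks])  # prints [5000, 5000, 2000]
```

A larger block size reads more instances per query at the cost of a larger cache; a smaller one trades memory for more round trips.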
Data source: Specify the default data source for the relationship service by entering
the Java Naming and Directory Interface (JNDI) name of a data source defined at
the cell level. This is where the tables for relationship service are stored. Each
relationship-related schema is created in this data source by default.
Data source is on z/OS: Select the check box in this field if your data source is on a
machine running a z/OS operating system. After selecting the check box, the
following additional fields display on the page.
Database for table creation: Specify the name of the desired z/OS database.
Storage group: Specify the name of the storage group for the relationship services
database. A storage group is a list of Direct Access Storage Device (DASD) volumes
on which DB2 can allocate data sets for associated storage structures. DB2 storage
group names are unqualified identifiers of up to 128 characters (DB2 version 7 has
a limit of eight characters).
Tablespace for table creation: Specify the name for the table space. A table space is a
storage structure that can hold one or more base tables. The name, qualified with
the database-name implicitly or explicitly specified by the in clause, must not
identify a table space, index space, or large object (LOB) table space that exists at
the current server.
Binding options: Specify the options you want to use to bind your SQL stored
procedure package. The maximum length is 1024 characters.
Package collection ID: Identify the package collection to use when executing the
stored procedure. This is the package collection for binding the database request
module collection (DBRM) that is associated with the stored procedure.
Compiler options: Specify the options for compiling the C language program that
DB2 generates for the SQL procedure. The maximum length is 255 characters.
Precompiler options: Specify the options for precompiling the C language program
that DB2 generates for the SQL procedure. The maximum length is 255 characters.
Prelink options: Specify the options for prelink-editing of the C language program
that DB2 generates for the SQL procedure. The maximum length is 255 characters.
Link options: Specify the options for link-editing the C language program that DB2
generates for the SQL procedure. The maximum length is 254 characters.
Builder schema: Specify the schema name for the procedure processor entered in
the Builder parameter. This is usually SYSPROC for version 5 and can be any
name in version 6.
Builder: Specify the procedure name attribute when calling the z/OS procedures
processor (usually DSNTPSMP). You can create several stored procedure
definitions for DSNTPSMP, each of which specifies a different WLM environment.
When you call DSNTPSMP using the name in this parameter, DB2 runs
DSNTPSMP in the WLM environment that is associated with the procedure name.
The maximum length is 18 characters.
Relationship collection:
Use this page to view a list of the existing relationships that this relationship
service manages.
Required security role for this task: When security and role-based authorization
are enabled, any WebSphere security role can view this configuration.
Each row shows the complete name of the relationship, the version, and the data
source for the associated relationship type. Select a relationship to see details about
it.
To customize the number of rows that display at one time, click Preferences.
Modify the Maximum rows field value and click Apply. The default is 25. The
total relationship count managed by this relationship service is displayed in the
Total field.
Relationship settings:
Use this page to view the configuration properties that the relationship service
manages, both at the relationship service level (as they apply to the relationship
service) and at the individual relationship level (as they apply to individual
relationships).
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationship Services configuration > Relationships >
relationshipname.
Required security role for this task: When security and role-based authorization
are enabled, any WebSphere security role can view this configuration.
Name: Specifies the name of the relationship. The field is automatically set.
Version: Specifies the version of the relationship. The relationship service sets this
attribute internally; it is used for migration purposes. If the old relationship data
needs to coexist in the new system, the infrastructure version is set to the old
version. Otherwise, it is set to the current version.
Data source: Specifies the data source where the relationship instances are stored.
This field is automatically set.
Relationship manager
The relationship manager is a tool for manually controlling and manipulating
relationship data, either to correct errors found in automated relationship
management or to provide more complete relationship information. In particular, it
provides a facility for retrieving and modifying relationship instance data.
You can use the relationship manager to manage entities at all levels: the
relationship instance, role instance, and attribute data and property data levels.
Relationships:
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationships next to the relationship services MBean for
the server you want to manage.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to view and query relationships. However, only operators and
administrators are allowed to create and roll back data.
From this page you can perform the following actions on a selected relationship:
v To set search options on a selected relationship, select the radio button next to
the relationship name and click Query.
v To create a new relationship instance, select the radio button next to the
relationship name and click Create. Add the appropriate property values on the
New Relationship Instance page and click OK.
v To view more specific information for a selected relationship, click the
relationship name or select the radio button next to the relationship name and
click Details.
v To set date options for rolling back all instance data for a selected relationship,
select the radio button next to the relationship name and click Rollback.
Working with relationship instances: Use these pages to view relationship details,
make relationship queries, or perform relationship rollback:
v Relationship Detail
v Query Relationship
v Rollback Relationship
Relationship Detail:
Use this page to view detailed information for the relationship, including the
relationship name, display name, associated roles with their attributes, property
values, and static and identity attributes.
To view this page in the administrative console, click Integration Applications >
Relationship Manager and click the Relationships link associated with
relationship services MBean for the server you want to manage > relationshipname.
To return to the Relationships page, click Relationships from the path or click
Back.
Role schema information: Displays a tabular list of the roles in this relationship and
a brief summary of their attribute data, including the display name, object name,
and managed attribute setting. To view more information about a role, click the
role name.
Property values: Displays a shorthand tabular list of the user-defined properties for
the relationship. The columns show the name, type, and default values for each
property in the relationship.
Static: Indicates that the relationship is a static relationship if this check box is
selected.
Identity: Indicates that the relationship is an identity relationship if this check box
is selected.
Rollback Relationship:
Use this page to roll back relationship instance data from a specific time period.
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationships next to the relationship services MBean for
the server you want to manage > select the radio button next to the relationship
name > Rollback.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as an operator or an administrator to roll back
data.
Specify the start and end dates and times for rolling back the instance data for a
selected relationship. All instance data in the relationship later than the date and
time will be marked as deactivated. You have the following options:
v Click OK to make the rollback effective immediately and return to the
Relationships page.
v Click Cancel to clear your selections and return to the Relationships page.
Relationship name: Specifies the URL of this relationship. This field is read-only.
From date: Type the starting date and time for rolling back the instance data for a
selected relationship.
To date: Type the ending date and time for rolling back the instance data for a
selected relationship.
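The rollback behavior described above can be sketched as marking instances whose timestamps fall within the From/To window as deactivated; the field names in this sketch are illustrative, not the product's schema:

```python
from datetime import datetime

# Sketch: rolling back relationship instance data by marking instances
# whose timestamps fall in the given window as deactivated. Field names
# ("modified", "active") are hypothetical.
def rollback(instances, from_date, to_date):
    for inst in instances:
        if from_date <= inst["modified"] <= to_date:
            inst["active"] = False
    return instances

data = [
    {"id": 1, "modified": datetime(2011, 1, 10), "active": True},
    {"id": 2, "modified": datetime(2011, 3, 5), "active": True},
]
rollback(data, datetime(2011, 2, 1), datetime(2011, 4, 1))
print([d["active"] for d in data])  # prints [True, False]
```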
Query Relationship:
Use this page to perform relationship-based instance queries. Each tab represents a
different query option.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to view this page and perform queries.
Select a query option (All, By ID, By property, or By role) and specify the search
criteria to get all or a subset of the instance data for a relationship. The result set
of the query is displayed in table format on the Relationship Instances page. After
you have specified the query parameters, you have the
following options:
v Click OK to display the result data from the query.
v Click Cancel to discard any changes made and return to the list of relationships.
All tab: Select the All tab to retrieve a list of all instances in the relationship. You
can choose to display all activated, all inactivated, or both activated and
inactivated relationship instance data.
Relationship name: Specifies the URL of the relationship. This field is read-only.
Logical state: Select the radio button next to all activated, all inactivated, or all
activated and inactivated relationship instance data.
By ID tab: Select the By ID tab to retrieve relationship instances within a range of
instance IDs.
Relationship name: Specifies the URL of the relationship. The field is read-only.
Starting ID: Enter the first instance ID in the range of relationships you want to
retrieve.
Ending ID: Enter the last instance ID in the range of relationships you want to
retrieve.
By property tab: Select the By property tab to retrieve relationship instances based
on a property name and type.
Relationship name: Specifies the URL of the relationship. This field is read-only.
Property name and type: Enter the property name and type parameters for
retrieving relationship instances. Select an option from the drop-down list.
By role tab: Select the By role tab to retrieve relationship instances based on a role
name, key attribute value, date range during which the role was created or
modified, or specific property value.
Relationship name: Specifies the URL of the relationship. The field is read-only.
Role object type: Specifies the URL of the participating business object. This field is
read-only.
Key attributes: Specifies the primary key attribute of the business object
participating in the role. Enter any single character or string of characters
(case-sensitive, numbers included) in the Value field.
From date: Enter the starting date and time for the range of roles to search on.
To date: Enter the ending date and time for the range of roles to search on.
Property name and type: Enter the role property name and type parameters for
retrieving relationship instances. Select an option from the drop-down list.
Property value: Enter the value parameter for the specified role property.
Role Detail:
Use this page to view detailed information for the role, including the relationship
name, role name, display name, property values, keys, role object type, and
managed attribute setting.
To view this page in the administrative console, click Integration Applications >
Relationship Manager and click the Relationships link associated with the
relationship services MBean for the server you want to manage > relationshipname >
rolename.
To return to the Relationship Detail page, click Relationship Detail from the path
or click Back.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to view this page.
Relationship name: Specifies the URL of the relationship associated with the role.
Property values: Displays a shorthand tabular list of the user-defined properties for
the role. The columns show the name, type, and default values for each property.
Keys: Specifies the primary key attribute of the business object participating in the
role. A key can be either a unique key or a composite key, which consists of a
unique key from a parent business object and a non-unique key from a child
business object. The columns show the path information to the key attribute and
the display name ID.
Role object type: Specifies the URL of the participating business object.
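The key structure described above (a unique key alone, or a composite key that combines the parent's unique key with a non-unique key from a child business object) might be modeled as follows. This is an illustrative sketch; the class and attribute names are not part of the product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RoleKey:
    """A role key: either a unique key alone, or a composite key that adds a
    non-unique key from a child business object to the parent's unique key."""
    parent_unique_key: str
    child_key: Optional[str] = None  # present only for composite keys

    def is_composite(self):
        return self.child_key is not None
```

For example, `RoleKey("CUST-1")` models a simple unique key, while `RoleKey("CUST-1", "ADDR-2")` models a composite key.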
Relationship Instances:
Use this page to view a list of the relationship instances that match the relationship
query and to edit or perform an action on a selected relationship instance.
To view this page in the administrative console, click Integration Applications >
Relationship Manager and click the Relationships link associated with the
relationship services MBean for the server you want to manage > select the radio
button next to the relationship name > Query. Enter the appropriate query
information and click OK.
To return to the Query Relationship page, click Query Relationship from the path.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to view this page. However, only operators and administrators can
create and delete relationship instances.
The instances are displayed in table view, with each row representing one
relationship instance. Each relationship instance includes the relationship instance
ID and the property values associated with the instance.
To customize the number of rows to display at one time, click Preferences, modify
the row field value, and click Apply. The default is 25, with 1 being the minimum
number of rows to display and all records being the maximum. The total page and
returned instance counts are displayed in the Total field. You can navigate through
the records as follows:
v To view the next set of instances, click the forward arrow.
v To view the previous page of instances, click the back arrow.
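The paging behavior described above can be sketched as follows. The default page size of 25 and the minimum of 1 mirror the description; the function names are illustrative, not the console's implementation.

```python
import math

def page_count(total_records, rows_per_page=25):
    """Number of pages needed to show all records (default 25 rows per page)."""
    rows_per_page = max(1, rows_per_page)  # 1 is the minimum number of rows
    return max(1, math.ceil(total_records / rows_per_page))

def page_slice(records, page, rows_per_page=25):
    """Return the records shown on a given 1-based page, as the
    forward and back arrows would step through them."""
    start = (page - 1) * rows_per_page
    return records[start:start + rows_per_page]
```

With 60 returned instances and the default of 25 rows, the instances span 3 pages, and the third page holds the last 10 records.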
From this page you can edit or perform the following actions on a selected
relationship instance in the table:
v To view more specific information for a selected relationship instance, click the
relationship instance ID; or select the radio button next to the relationship
instance ID and click Details.
v To create a new relationship instance, click Create, and enter the property value
information on the New Relationship Instance page. Click OK to save the new
relationship instance locally.
v To delete a relationship instance locally, select the radio button next to the
relationship instance ID and click Delete.
Relationship Instance Detail:
Use this page to view detailed information for the selected relationship instance
and to edit or perform an action on the relationship instance.
To return to the Relationship Instances page, click Relationship Instances from the
path.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to access this page. Monitors and configurators can only view the
information.
Each relationship instance includes the relationship name, relationship instance ID,
property names and values, participating roles, and role instance values.
From this page, you can edit the relationship instance property values, delete the
relationship instance, create new role instances, or delete existing role instances:
v To edit the relationship instance property values, modify the value in the
Property values field and click OK to save the changes locally. You can edit the
property values only if they have been previously defined for the relationship
instance.
v To delete the relationship instance, click Delete.
v To create a new role instance, locate the role for which you want to create a new
instance and click Create. The New Role Instance page is displayed for entering
information about the new role instance.
v To edit the role instance property value, click the selected role instance ID.
v To delete a role instance locally, select the role instance ID and click Delete.
v To discard any changes made and return to the Relationship Instances page,
click Cancel.
Relationship name: Specifies the URL of this relationship. This field is read-only.
Relationship instance ID: Specifies the instance ID for the relationship. This field is
read-only.
Property values: Displays the property values for the relationship instance,
including the property name, property object type, property default value, and
property value. You can edit the value for the property in the Value field.
Relationship Export:
Use this page to export data from an existing static relationship to a relationship
instance data (.ri) or comma-separated values (.csv) file. Exporting an existing
relationship is useful when you want to incorporate a relationship from another
platform into your solution, but do not want to write code or use relationship
manager to add instance details one by one.
To view this page in the administrative console, click Integration Applications >
Relationship Manager. For the server you want to manage, click Relationships
next to the relationship services MBean. In the Select column, click the button next
to the relationship name and click Export.
File format:
Specifies whether the file exports to the relationship instance data (.ri) or
comma-separated values (.csv) format.
Relationship Import:
Use this page to import data for an existing static relationship, which is
represented in a relationship instance data (.ri) or comma-separated values (.csv)
file. Importing a relationship is useful when you want to incorporate an existing
relationship from one platform into a system that is running on another platform,
but do not want to write code or use relationship manager to add instance details
one by one.
To view this page in the administrative console, click Integration Applications >
Relationship Manager. For the server you want to manage, click Relationships
next to the relationship services MBean. In the Select column, click the button next
to the relationship name and click Import.
File name: Specifies the file name of the relationship instance data (.ri) or
comma-separated values (.csv) file being imported.
Working with role instances: Use these pages to view and edit role instance details
and to create new role instances:
v Role Instance Detail
v New Role Instance
Role Instance Detail:
Use this page to view detailed information for the selected role instance and to edit
the role instance property values.
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationships > select the radio button next to the
relationship name > Query > relationshipinstanceid > Details > roleinstanceid.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as a monitor, an operator, a configurator, or an
administrator to access this page. Monitors and configurators can only view the
information.
Each role instance includes the role name, role element, key attributes, property
values, status, and logical state. You have the following options:
v Click OK to save the changes locally and return to the Relationship Instance
Detail page.
Role name: Specifies the URL of this role. This field is read-only.
Role element: Specifies the instance ID for the relationship. This field is read-only.
Key attributes: Displays the key attribute name and value of the business object
participating in the role instance. This field is read-only.
Property values: Displays a shorthand tabular list of properties for the role
instance, including the property name, property object type, property default value
and property value. You can edit the value for the property in the Value field if
properties are defined for the role instance. If no property values are defined, you
cannot set any values here.
Logical state: Specifies whether the selected role instance is active or inactive. If the
role instance is inactive, it will not be visible to the relationship service.
New Role Instance:
Use this page to create a new role instance for a relationship and to enter
information about the new role instance.
To view this page in the administrative console, click Integration Applications >
Relationship Manager > Relationships > select the radio button next to the
relationship name > Query > relationshipinstanceid > Details > roleinstanceid >
Create.
Required security role for this task: When security and role-based authorization
are enabled, you must be logged in as an operator or an administrator to create
new role instances.
Enter the key attribute role property values in their respective Value fields. You
can only set the key attribute value when creating the role instance. However, you
can edit the property values later. You have the following options:
v Click OK to save the new role instance locally and return to the Relationship
Instance Detail page.
v Click Cancel to clear your information and return to the Relationship Instance
Detail page.
Role name: Specifies the URL of this role. This field is read-only.
Role element: Specifies the instance ID for the relationship. This field is read-only.
Key attributes: Enter the key attribute value of the business object participating in
the new role instance.
Property values: Enter the property information for the new role instance.
REST Services
Use the administrative console to enable the Representational State Transfer (REST)
services that you want to use at run time.
Use this page to select a service provider that you want to configure. A service
provider owns a group of Representational State Transfer (REST) services.
To view this administrative console page, click Services > REST services > REST
service providers.
Scope selection:
Designate a server or cluster where you have REST services enabled. You can select
a specific deployment target or all that have been configured in your environment.
Provider Application:
In the table, click a provider link to configure the endpoints for the REST services
managed by that provider.
Scope:
This field in the table displays the scope for the group of REST services managed
by the provider. This field is read-only.
Use this page to configure all Representational State Transfer (REST) services in the
selected service provider. Enable or disable each REST service, configure a
protocol, host and port, and modify the description, which describes the purpose
of each REST service.
To view this administrative console page, click Services > REST services > REST
service providers > name_of_provider.
Scope:
This field displays the scope for the selected service provider. This field is
read-only.
Provider application:
This field displays the name of the selected service provider. This field is read-only.
Protocol:
Select either http:// or https://, which will be used to form the endpoint address for
all the REST services that you enable. Configure a full URL path by selecting
http:// or https:// and then setting the host and port. In an environment with a load
balancer or proxy server between the browser and the Business Space and REST
services, make sure that what you designate here matches the browser URL for
accessing Business Space.
Host:
Type the host or virtual host that a client needs to communicate with the server or
cluster. Use a fully qualified host name. In an environment with a load balancer or
proxy server between the browser and the Business Space and REST services, make
sure that what you designate here matches the browser URL for accessing Business
Space.
Port:
Type the port for all REST services that you want to configure. For a load-balanced
environment, designate the port number that the client needs to communicate with
the cluster. In an environment with a load balancer or proxy server between the
browser and the Business Space and REST services, make sure that what you
designate here matches the browser URL for accessing Business Space.
Context root:
This field lists the context root for the REST service provider. This field is
read-only.
Enabled:
In the table, select the check box to enable the REST service endpoint. Clear the
check box to disable the REST service endpoint.
Type:
This column in the table lists a short description of the type of REST service that
you are configuring. This field is read-only.
Description:
For each endpoint in the table, modify the endpoint description, which describes
the purpose of the REST service.
URL:
This column in the table lists the full URL path for the REST endpoints. This field
is read-only.
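The read-only URL shown for each endpoint is assembled from the Protocol, Host, Port, and Context root fields described above. The sketch below shows that composition; the host name, port, and context root values in the example are illustrative, not defaults.

```python
def endpoint_url(protocol, host, port, context_root):
    """Combine the Protocol, Host, Port, and Context root fields
    into the full endpoint address for a REST service."""
    if not protocol.endswith("://"):
        protocol += "://"
    return "{}{}:{}{}".format(protocol, host, port, context_root)
```

For example, choosing https:// with a fully qualified host and a secure port yields `endpoint_url("https://", "bpm.example.com", 9443, "/rest")`, which produces `https://bpm.example.com:9443/rest`. In a load-balanced environment, the host and port passed here would be those of the load balancer or proxy, so that the result matches the browser URL for accessing Business Space.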
REST services:
Use this page to configure all Representational State Transfer (REST) services in
your environment. Enable or disable each REST service and modify the
description, which describes the purpose of each REST service.
To view this administrative console page, click Services > REST services > REST
services.
Scope selection:
Designate a server or cluster where you have REST services enabled. You can select
a specific deployment target or all that have been configured in your environment.
Enabled:
In the table, select the check box to enable the REST service. Clear the check box
to disable the REST service.
Type:
This column in the table lists a description of each type of REST service that you
are configuring. This field is read-only.
Scope:
This column in the table lists the server or cluster for the REST services that you
are configuring. This field is read-only.
Provider Application:
This column in the table displays the service provider for the REST services that
you are configuring. A service provider owns a group of REST services. This field
is read-only.
Description:
For each service in the table, modify the description, which describes the purpose
of the REST service.
URL:
This column in the table lists the full URL path for the REST services. This field is
read-only.
REST Services:
Use this page to configure Representational State Transfer (REST) services for a
particular server or cluster or for BPEL processes or human tasks on a server or
cluster.
For REST services other than BPEL processes and human tasks:
v For servers, click: Servers > Server Types > WebSphere application servers >
name_of_server > Business Integration > REST Services.
v For clusters, click: Servers > Clusters > WebSphere application server clusters
> name_of_cluster > Business Integration > REST Services.
Note: If you want to configure all REST services in your environment, use the
REST services page available by clicking Services > REST services > REST
services.
Protocol:
Select either http:// or https://, which will be used to form the address for all the
REST services on this page. Configure a full URL path by selecting http:// or
https:// and then setting the host and port. In an environment with a load balancer
or proxy server between the browser and the Business Space and REST services,
make sure that what you designate here matches the browser URL for accessing
Business Space.
Host:
Type the host or virtual host that a client needs to communicate with the server or
cluster. Use a fully qualified host name. In an environment with a load balancer or
proxy server between the browser and the Business Space and REST services, make
sure that what you designate here matches the browser URL for accessing Business
Space. If you are using the Deployment Environment Configuration wizard, and
you leave the host and port fields empty, the values default to values of an
individual cluster member host and its HTTP port. For a load-balanced
environment, you must later change the default values to the virtual host name
and port of your environment.
Port:
Type the port for all REST services that you want to configure. For a load-balanced
environment, designate the port number that the client needs to communicate with
the cluster. In an environment with a load balancer or proxy server between the
browser and the Business Space and REST services, make sure that what you
designate here matches the browser URL for accessing Business Space. If you are
using the Deployment Environment Configuration wizard, and you leave the host
and port fields empty, the values default to values of an individual cluster member
host and its HTTP port. For a load-balanced environment, you must later change
the default values to the virtual host name and port of your environment.
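The defaulting rule described above (empty host and port fields fall back to an individual cluster member's host and HTTP port, and must later be replaced with the virtual host and port for a load-balanced environment) can be sketched as follows. The parameter names are illustrative.

```python
def resolve_endpoint(configured_host, configured_port, member_host, member_port):
    """Empty host/port fields default to an individual cluster member's
    host and HTTP port, as described for the Deployment Environment
    Configuration wizard; explicitly configured values take precedence."""
    host = configured_host or member_host
    port = configured_port or member_port
    return host, port
```

So leaving both fields empty resolves to the member's values, while filling in a virtual host and port (as required for load balancing) overrides them.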
Context root:
This field lists the context root for the REST service provider. This field is
read-only.
Enabled:
In the table, select the check box to enable the REST service. Clear the check box to
disable the REST service.
Type:
This field lists a short description of the type of REST service that you are
configuring. This field is read-only.
Description:
Modify the description, which describes the purpose of the REST service.
URL:
This field lists the full URL path for available REST services. This field is read-only.
SCA resources
View and modify the configuration of deployed SCA modules, including the
interfaces and bindings of imports and exports.
Use this page to view installed Service Component Architecture (SCA) modules,
their associated applications, and any process application context. SCA modules
encapsulate Advanced Integration services, so you can make changes to services
without affecting users of the service. To use the SCA module services, start the
associated application.
To view this administrative console page, click Applications > SCA Modules.
Important: Some SCA modules are associated with a process application; they
provide the Advanced Integration service functionality for that process application.
If an SCA module is associated with a process application, do not use this page in the
administrative console to manage its state. Instead, use the Process Admin Console.
The state of any SCA module in a process application is managed as part of the
overall process application state within the Process Admin Console.
To act on one or more of the listed items, select the check boxes next to the names
of the items that you want to act on, then use the buttons provided. The check
boxes are for the application associated with each SCA module. To start an SCA
module, you start the associated application.
To browse or change the properties of a listed item, select its name in the list.
To change what entries are listed, or to change what information is shown for
entries in the list, use the Filter settings.
General properties
The SCA Modules collection page displays the following general properties for
SCA modules:
Module
Specifies the short name of the module followed by a cell identifier. The
optional cell identifier uniquely identifies the SCA module in the cell. If the
SCA module is versioned, the version field shows the corresponding
version information.
Snapshot or version
If the SCA module is part of a process application, this column specifies
the snapshot that contains the module (for example, 6.0-BillingDispute
Snapshot (6.0BDS)).
State Description
Started: Application is running.
Partial Start: Application is in the process of changing from a Stopped state to a
Started state. The application is starting to run but is not yet fully running.
Stopped: Application is not running.
Partial Stop: Application is in the process of changing from a Started state to a
Stopped state. The application has not yet stopped running.
Unavailable: Status cannot be determined. An application with an unavailable
status might be running but show this status because the server running the
administrative console cannot communicate with the server running the
application.
Not applicable: Application does not provide information about whether it is
running.
Buttons
Use this page to view the Service Component Architecture (SCA) module details,
including its exports and imports, and to view and edit its export and import
bindings.
To view this administrative console page, click Applications > SCA Modules >
module_Name.
Important: Some SCA modules are associated with a process application; they
provide the Advanced Integration service functionality for that process application.
If an SCA module is associated with a process application, do not use this page in the
administrative console to manage its state. Instead, use the Process Admin Console.
The state of any SCA module in a process application is managed as part of the
overall process application state within the Process Admin Console.
Configuration tab: Specifies configuration properties for this object. These property
values are preserved even if the runtime environment is stopped and restarted.
This tab also provides a list of all imports and exports configured for the module,
as well as links for configuring business process properties, module properties, and
human tasks.
General properties:
Module:
Specifies the name of the SCA module.
Property Value
Data type Text
Required? No
Application name:
Specifies the name of the enterprise application associated with this module.
Property Value
Data type Text
Required? No
Snapshot or version:
Specifies the SCA module version if the module is versioned, or the snapshot name
and acronym if the module is part of a process application.
Cell ID:
Property Value
Data type Text
Required? No
Property Value
Data type Text
Required? Required if the module is part of a process
application
Valid values Process application or Toolkit
Process application:
Specifies the name and acronym of the process application that contains the
module. If this module is associated with a process application, use the Process
Admin Console to manage its state.
Property Value
Data type Text
Required? Required if the module is part of a process
application
Track:
Specifies the full name and acronym of the track associated with the process
application snapshot. Snapshots can have a track if track development is enabled
in the Process Center and can be applied for playback on the Process Center
Server. Snapshots deployed on a process server do not have tracks.
Property Value
Data type Text
Required? No
Description:
Property Value
Data type Text
Required? No
Indicates the module's runtime support. The module can be configured to run in
version 6 or version 7 mode.
Property Value
Data type Text
Required? No
Indicates the runtime business object framework support for this module. The
module can be configured to run in version 6 or version 7 mode. In version 7
mode, the business object framework can lazily or eagerly load business objects.
For details about business object framework versions, see the topic "Business object
framework in version 7.0" in the IBM Business Process Manager information center.
Property Value
Data type Text
Required? No
Additional Properties:
Property Description
Module properties The properties that are set for the module
Business processes The business processes that are contained in
the module
Human tasks The human tasks contained in the module
Related Items:
Displays links to pages where you can perform tasks related to the SCA module.
To view this administrative console page, click Applications > SCA modules >
Install.
Path to the new SCA application: Specifies the path to the new SCA application on
the local file system.
Map the application to server: Specifies that the current stand-alone server is to host
the SCA application.
Use this page to select a JNDI name for a property of a JMS binding from the list
of matching existing JNDI names. If the JNDI name that you need is not listed,
return to the previous page and enter the JNDI name manually. Ensure that the
resource referred to by the JNDI name has been configured.
To open the JNDI name browser, click Applications > SCA Modules > [Content
pane] module_name > [Module components] Imports/Exports >
import_name/export_name > Binding > binding_name [JMS] > Browse.
When you click the Browse button on the JMS bindings configuration page, the
Browse JNDI names page is displayed. The page shows a list of artifacts of the
correct type that you can choose from. Choose one by selecting the corresponding
radio button. When you have selected an artifact, click Select to confirm this
choice. The page will close and return you to the binding detail page.
EJB bindings:
Use this page to display the attributes of the selected Enterprise JavaBeans (EJB)
export binding.
To view this administrative console page, click Applications > SCA modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [EJB].
Data handler:
Specifies the type of data handler associated with the selected Enterprise JavaBeans
(EJB) export binding. Data handlers are reusable transformation logic that can be
invoked from data bindings and Java components.
Function selector:
Specifies the name of the function selector associated with the selected Enterprise
JavaBeans (EJB) export binding. A function selector selects an operation to invoke
on a component.
JNDI name:
Specifies the JNDI name of this Enterprise JavaBeans (EJB) export binding.
Use this page to display the attributes of the selected Enterprise JavaBeans (EJB)
import binding and to update the JNDI name.
To view this administrative console page, click Applications > SCA modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [EJB].
Data handler:
Specifies the type of data handler associated with the selected EJB import binding.
Data handlers are reusable transformation logic that can be invoked from data
bindings and Java components.
Function selector:
Specifies the name of the function selector associated with the selected EJB import
binding. A function selector selects an operation to invoke on a component.
JNDI name:
Specifies the JNDI name of this EJB import binding. You can edit this name.
Use this page to view or modify the attributes of the selected Generic (non-JCA)
JMS export binding. The artifacts that your binding requires can be configured to
be created on the server at deployment time, or you can administer the Generic
JMS export binding to use artifacts that you created on the server.
Send resources:
Choose the response connection factory that you want your Generic JMS export
binding to use. Either type the JNDI name of the connection factory, or use the
Browse button to choose from a list of available connection factories.
Property Value
Data type Text
Choose the send destination for the Generic JMS export binding. Either type the
JNDI name of the send JMS destination, or use the Browse button to choose from
a list of available destinations.
The send JMS destination is where the response message will be sent, if not
superseded by the JMSReplyTo header field in the incoming message.
Property Value
Data type Text
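The precedence stated above, where the JMSReplyTo header field in the incoming message supersedes the binding's configured send destination, can be sketched as follows. The parameter names are illustrative.

```python
def reply_destination(incoming_jms_reply_to, binding_send_destination):
    """Responses go to the destination named in the incoming message's
    JMSReplyTo header when it is present; otherwise they fall back to
    the binding's configured send destination."""
    return incoming_jms_reply_to or binding_send_destination
```

For example, a message carrying `JMSReplyTo = jms/ReplyQ` directs the response there, while a message without the header directs the response to the configured send destination.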
Receive resources:
Listener Port:
Choose the listener port for your Generic JMS export binding. You can type in the
name of the port, or you can use the Browse button to see a list of available ports.
Property Value
Data type Text
Identifies the connection factory for the Generic JMS export binding.
The connection factory is used by the SCA JMS runtime when the send destination
is on a different Queue Manager than the receive destination.
Property Value
Data type Text
Identifies the receive destination for the Generic JMS export binding.
The destination shown here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
a listener port when you deploy the application. The destination on which inbound
requests will be received is the one referenced in the listener port.
Note that if you create your own resources or modify the generated listener port to
use a different destination, this field still reports the original value that was
defined in IBM Integration Designer.
In all instances, the destination where inbound requests are placed is the one
found in the listener port, not necessarily the one reported in this field.
Property Value
Data type Text
Advanced resources:
Identifies the callback destination for the Generic JMS export binding. The callback
destination is determined by the choice of response connection factory.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Property Value
Data type Text
Use this page to view or modify the attributes of the selected Generic (non-JCA)
JMS import binding. The artifacts that your binding requires can be configured to
be created on the server at deployment time, or you can administer the Generic
JMS import binding to use artifacts that you created on the server.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [generic JMS].
Send resources:
Choose the connection factory that you want your Generic JMS import binding to
use. Either type the JNDI name of the connection factory, or use the Browse
button to choose from a list of available connection factories.
The connection factory is used by the system to obtain connections to the JMS
provider in order to send a request.
Choose the send destination for the Generic JMS import binding. Either type the
JNDI name of the send JMS destination, or use the Browse button to choose from
a list of available destinations.
The send JMS destination is where the request, or outgoing message, will be sent.
Property Value
Data type Text
Receive resources:
Listener Port:
Choose the listener port for your Generic JMS import binding. You can type in the
name of the port, or you can use the Browse button to see a list of available ports.
The listener port is used to connect the import to the Generic JMS provider and the
destination where incoming or response messages are received.
Property Value
Data type Text
Identifies the response connection factory for the Generic JMS import binding.
The response connection factory is used by the SCA JMS runtime when the send
destination is on a different Queue Manager than the receive destination.
Property Value
Data type Text
Identifies the receive destination for the Generic JMS import binding. The receive
destination is determined by the choice of response connection factory.
The receive JMS destination is where the response or incoming message should be
placed.
The destination reported here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
a listener port when you deploy the application. The destination from which
messages will be received is the one referenced from inside the listener port.
Note that if you create your own resources or modify the generated listener port to
point to a different destination, this field still reports the original value that was
defined in IBM Integration Designer.
Property Value
Data type Text
Advanced resources:
Identifies the callback destination for the Generic JMS import binding. The callback
destination is determined by the choice of response connection factory.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Property Value
Data type Text
HTTP bindings:
Use this page to change the configuration of an HTTP export binding used by a
Service Component Architecture (SCA) module on either a binding or method
level. The method-level configuration takes precedence over the binding-level
configuration.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [HTTP]
Important: If you redeploy the module that contains this export, you will lose any
configuration done through this page unless you have already updated the original
module using IBM Integration Designer.
Binding Scope:
Specifies the values of this HTTP export used by methods that do not have specific
configurations.
Tip: If some of the methods should retain the current settings, use the Method
Scope tab to configure those parameters before changing the parameters at this
scope.
Context Path:
The context path of the exposed Service Component Architecture (SCA) export.
This path, combined with the virtual host and context root, forms the URL that is
called by an HTTP client. The value cannot be changed.
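As a rough illustration of how those parts combine, the URL an HTTP client calls is the virtual host joined with the context root and this context path. The helper and sample values below are hypothetical, not part of the product:

```python
# Hypothetical sketch: composing the client-facing URL from virtual host,
# context root, and context path. Illustrative only.
def export_url(virtual_host: str, context_root: str, context_path: str) -> str:
    parts = [virtual_host.rstrip("/"), context_root.strip("/"), context_path.lstrip("/")]
    return "/".join(p for p in parts if p)

# Sample values are invented for illustration.
assert export_url("http://myhost:9080", "/myModuleWeb", "/MyExport") == \
    "http://myhost:9080/myModuleWeb/MyExport"
```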
Lists the methods and the current configuration for the methods. You can set
whether the method is pingable and the return code for the method.
Method
The name of the method. The methods are GET, POST, PUT, DELETE, TRACE,
OPTIONS, and HEAD.
Pingable
Whether an HTTP client can ping the method. When this option is selected, you
must specify the Return code that the binding returns to the client. By default,
the option is cleared.
Return code
An integer returned when an HTTP client pings the method.
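Conceptually, the binding answers a ping only when the method is marked pingable, and then replies with the configured code. A minimal sketch of that decision follows; the `MethodConfig` structure and function names are illustrative, not the product API:

```python
# Illustrative sketch of pingable/return-code semantics; not the product API.
from dataclasses import dataclass

@dataclass
class MethodConfig:
    pingable: bool = False   # default: the Pingable check box is cleared
    return_code: int = 200   # only meaningful when pingable is True

def answer_ping(config: MethodConfig):
    """Return the configured code for a ping, or None if the method is not pingable."""
    return config.return_code if config.pingable else None

# A pingable method returns its configured code; a non-pingable one does not.
assert answer_ping(MethodConfig(pingable=True, return_code=200)) == 200
assert answer_ping(MethodConfig()) is None
```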
Transfer Encoding:
Important: If you set this parameter to chunked, Content Encoding is set to identity
and you will be unable to change Content Encoding.
Property Value
Data type String
Default The value originally configured for this
binding
Range chunked or identity
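For background on the two choices: identity sends the body unchanged, while chunked frames it as hex-length-prefixed chunks terminated by a zero-length chunk. The following sketch of chunked framing is illustrative background, not the binding's implementation:

```python
# Illustrative sketch of HTTP chunked transfer encoding framing.
def chunk_body(body: bytes, chunk_size: int = 8) -> bytes:
    """Frame a body using chunked transfer encoding: hex length, CRLF, data, CRLF."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += format(len(chunk), "x").encode() + b"\r\n" + chunk + b"\r\n"
    return out + b"0\r\n\r\n"  # a zero-length chunk ends the stream

framed = chunk_body(b"hello world")
assert framed == b"8\r\nhello wo\r\n3\r\nrld\r\n0\r\n\r\n"
```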
Content Encoding:
Specifies how the content that traverses the binding is encoded. Choose either
gzip, x-gzip, deflate, or identity.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Range gzip, x-gzip, deflate, or identity
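The four choices correspond to standard compression schemes; identity means no transformation, and gzip and x-gzip share the same format. A quick illustration with the Python standard library (a conceptual sketch, not the binding's code):

```python
# Illustrative sketch of the four content-encoding choices.
import gzip
import zlib

def encode(body: bytes, encoding: str) -> bytes:
    if encoding in ("gzip", "x-gzip"):   # x-gzip is an alias for gzip
        return gzip.compress(body)
    if encoding == "deflate":            # zlib-wrapped deflate stream
        return zlib.compress(body)
    if encoding == "identity":
        return body                      # no transformation
    raise ValueError(f"unsupported encoding: {encoding}")

payload = b"sample payload " * 10
assert gzip.decompress(encode(payload, "gzip")) == payload
assert zlib.decompress(encode(payload, "deflate")) == payload
assert encode(payload, "identity") == payload
```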
Method Scope:
Note: Method scope settings take precedence over binding scope settings.
Select method:
Context Path:
The context path of the exposed Service Component Architecture (SCA) export.
This path, combined with the virtual host and context root, forms the URL that is
called by an HTTP client. The value cannot be changed.
HTTP Methods:
Lists the methods and the current configuration for the methods. You can set
whether the method is pingable and the return code for the method.
Method
The name of the method. The methods are GET, POST, PUT, DELETE, TRACE,
OPTIONS, and HEAD.
Pingable
Whether an HTTP client can ping the method. When this option is selected, you
must specify the Return code that the binding returns to the client. By default,
the option is cleared.
Return code
An integer returned when an HTTP client pings the method.
Transfer Encoding:
Important: If you set this parameter to chunked, Content Encoding is set to identity
and you will be unable to change Content Encoding.
Property Value
Data type String
Default The value originally configured for this
binding
Range chunked or identity
Content Encoding:
Specifies how the content that traverses the binding is encoded. Choose either
gzip, x-gzip, deflate, or identity.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Range gzip, x-gzip, deflate, or identity
Displays links to pages where you can perform tasks related to the HTTP export
binding.
Link Task
Manage Export Binding Web Module: Configure deployment-specific information for the Web module
Context Root: Configure the context root for the Web module
Virtual Hosts: Specify the virtual host where you want to install the Web modules that are contained in your application
JSP reload options for Web modules: Specify JSP reload options for Web modules
Session management: Configure session manager properties to control the behavior of HTTP session support
Use this page to change the configuration of an HTTP import used by a Service
Component Architecture (SCA) module on either the binding level or the method
level. The method-level configuration takes precedence over the binding-level
configuration.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [HTTP]
Important: If you redeploy the module that contains this import, you will lose any
configuration done through this panel unless you have already updated the
original module using IBM Integration Designer.
Binding Scope:
Specifies the values of this HTTP import used by methods that do not have
specific configurations.
Tip: If some of the methods should retain the current settings, use the Method
Scope tab to configure those parameters before changing the parameters at this
scope.
Endpoint URL:
Property Value
Data type String
Default The URI originally specified when the
module was created in IBM Integration
Designer.
Range Any valid URI
Property Value
Data type String
Default The method originally specified when the
module was created in IBM Integration
Designer.
Range Any valid method in the module
HTTP version:
Property Value
Data type String
Default 1.1
Range 1.0 or 1.1
Specifies the number of times the request is retried when the system receives an
error response.
Property Value
Data type Integer
Units Retries
Default 0 (after a failure, no further attempts are made)
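In other words, a retry count of N yields at most 1 + N attempts, and with the default of 0 the first failure is final. A sketch of that semantics (the helper and names are illustrative, not the binding's code):

```python
# Illustrative sketch of retry-count semantics: 1 initial attempt + N retries.
def call_with_retries(request, retries: int = 0):
    """Attempt a request once, then retry up to `retries` more times on error.

    Returns a (result, attempts) pair so the attempt count is visible.
    """
    attempts = 0
    while True:
        attempts += 1
        try:
            return request(), attempts
        except Exception:
            if attempts > retries:   # no retries left: propagate the failure
                raise

calls = []
def always_fails():
    calls.append(1)
    raise RuntimeError("error response")

try:
    call_with_retries(always_fails, retries=2)
except RuntimeError:
    pass
assert len(calls) == 3  # 1 initial attempt + 2 retries
```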
Specifies the authentication alias to use with the HTTP server on this binding. To
choose the authentication alias, select the alias name from the list. To change the
attributes of a selected authentication alias, click Edit. To create a new
authentication alias, click New.
Property Value
Data type Array
Units Strings
Default The alias originally configured for this
binding
SSL Authentication:
Specifies the Secure Sockets Layer (SSL) configuration to use for this binding. To
edit an existing configuration, select the name from the list and click Edit. To
create a new configuration, click New.
Property Value
Data type Array
Units String
Default The alias originally configured for this
binding
Important: If you set this parameter to chunked, Content Encoding is set to identity
and you will be unable to change Content Encoding.
Property Value
Data type String
Default The value originally configured for this
binding
Range chunked or identity
Content Encoding:
Specifies how the content that traverses the binding is encoded. Choose either
gzip, x-gzip, deflate, or identity.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Range gzip, x-gzip, deflate, or identity
Specifies the settings for bindings that do not require security authorization for
access.
Proxy Host:
Specifies the host name or IP address of an HTTP proxy server through which to
connect to the endpoint URL.
Property Value
Data type String
Default None
Proxy Port:
Specifies the port used to connect to an HTTP proxy server for this binding.
Property Value
Data type Integer
Default 80
Proxy Credentials:
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Specifies a list of hosts on this binding that do not use proxies. Enter each
host on a separate line. To add a host, type it at the end of the list and press
Enter to separate it from the previous entry. To remove a host, delete it from
the list.
Property Value
Data type Array
Units String
Default The values already configured for this
binding
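The effect of the list is that requests to a listed host bypass the proxy, while all other requests go through it. A conceptual sketch of that decision (a hypothetical helper, not the binding's code):

```python
# Hypothetical sketch: decide whether a request should use the proxy.
def use_proxy(target_host: str, non_proxy_hosts: list) -> bool:
    """Return True if the request to target_host should go through the proxy."""
    return target_host not in non_proxy_hosts

# Hosts on the list bypass the proxy; all others use it. Names are invented.
skip = ["localhost", "intranet.example.com"]
assert use_proxy("www.example.com", skip) is True
assert use_proxy("localhost", skip) is False
```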
Specifies the settings for bindings that require authorization for access.
Proxy Host:
Specifies the host name or IP address of an HTTP proxy server through which to
connect to the endpoint URL.
Property Value
Data type String
Default None
Proxy Port:
Specifies the port used to connect to an HTTP proxy server for this binding.
Property Value
Data type Integer
Default 443
Specifies a list of hosts on this binding that do not use proxies. Enter each
host on a separate line. To add a host, type it at the end of the list and press
Enter to separate it from the previous entry. To remove a host, delete it from
the list.
Property Value
Data type Array
Units String
Proxy Credentials:
Specifies the Java 2 Connector (J2C) authentication alias to use for the proxy
settings. To change an existing alias, select the alias from the list and click Edit. To
add a new alias, click New.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Specifies the time, in seconds, that the binding waits to read data while receiving a
response message. Setting this field to 0 causes the binding to wait indefinitely.
Property Value
Data type Integer
Units Seconds
Default 0
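A configured value of 0 therefore disables the timeout rather than making the read fail immediately. One way to picture the mapping onto a socket-level timeout (illustrative only; the helper name is invented):

```python
# Illustrative sketch: map the binding's read-timeout setting onto a
# socket-style timeout, where None means "wait indefinitely".
def to_socket_timeout(configured_seconds: int):
    if configured_seconds < 0:
        raise ValueError("timeout must be >= 0")
    return None if configured_seconds == 0 else configured_seconds

assert to_socket_timeout(0) is None   # 0 = wait indefinitely
assert to_socket_timeout(30) == 30    # time out after 30 seconds
```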
Method Scope:
Use this page to specify the configuration for specific methods on this binding.
Select method:
Property Value
Data type Array
Units String
Default None
Endpoint URL:
Property Value
Data type String
Default The URI originally specified when the
module was created in IBM Integration
Designer.
Range Any valid URI
HTTP method:
HTTP version:
Property Value
Data type String
Default 1.1
Range 1.0 or 1.1
Specifies the number of times the request is retried when the system receives an
error response.
Property Value
Data type Integer
Units Retries
Default 0 (after a failure, no further attempts are made)
Specifies the authentication alias to use with the HTTP server on this binding. To
choose the authentication alias, select the alias name from the list. To change the
attributes of a selected authentication alias, click Edit. To create a new
authentication alias, click New.
Property Value
Data type Array
Units Strings
Default The alias originally configured for this
binding
SSL Authentication:
Specifies the Secure Sockets Layer (SSL) configuration to use for this binding. To
edit an existing configuration, select the name from the list and click Edit. To
create a new configuration, click New.
Property Value
Data type Array
Units String
Default The alias originally configured for this
binding
Transfer Encoding:
Important: If you set this parameter to chunked, Content Encoding is set to identity
and you will be unable to change Content Encoding.
Property Value
Data type String
Default The value originally configured for this
binding
Range chunked or identity
Content Encoding:
Specifies how the content that traverses the binding is encoded. Choose either
gzip, x-gzip, deflate, or identity.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Range gzip, x-gzip, deflate, or identity
Specifies the settings for bindings that do not require security authorization for
access.
Proxy Host:
Specifies the host name or IP address of an HTTP proxy server through which to
connect to the endpoint URL.
Property Value
Data type String
Default None
Proxy Port:
Specifies the port used to connect to an HTTP proxy server for this binding.
Property Value
Data type Integer
Default 80
Proxy Credentials:
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Specifies a list of hosts on this binding that do not use proxies. Enter each
host on a separate line. To add a host, type it at the end of the list and press
Enter to separate it from the previous entry. To remove a host, delete it from
the list.
Property Value
Data type Array
Units String
Default The values already configured for this
binding
Specifies the settings for bindings that require authorization for access.
Proxy Host:
Specifies the host name or IP address of an HTTP proxy server through which to
connect to the endpoint URL.
Property Value
Data type String
Default None
Proxy Port:
Specifies the port used to connect to an HTTP proxy server for this binding.
Property Value
Data type Integer
Default 443
Specifies a list of hosts on this binding that do not use proxies. Enter each
host on a separate line. To add a host, type it at the end of the list and press
Enter to separate it from the previous entry. To remove a host, delete it from
the list.
Property Value
Data type Array
Units String
Proxy Credentials:
Specifies the Java 2 Connector (J2C) authentication alias to use for the proxy
settings. To change an existing alias, select the alias from the list and click Edit. To
add a new alias, click New.
Property Value
Data type Array
Units String
Default The value originally configured for this
binding
Specifies the time, in seconds, that the binding waits to read data while receiving a
response message. Setting this field to 0 causes the binding to wait indefinitely.
Property Value
Data type Integer
Units Seconds
Default 0
JMS bindings:
Use this page to view or modify the attributes of the selected JMS export binding
or to manage the state of endpoints.
v Use the Configuration tab to edit the JMS export binding settings.
v Use the Runtime tab to manage the state of all receiving endpoints defined for
the export. You can pause and then resume active endpoints.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [JMS].
Configuration tab:
Choose the connection factory that you want your JMS export binding to use. You
can either type the JNDI name of the connection factory, or you can use the
Browse button to choose from a list of available connection factories.
The connection factory is used by the system to connect to a bus to send the
response message.
Choose the send destination for the JMS export binding. You can either type the
JNDI name of the send JMS destination, or you can use the Browse button to
choose from a list of available destinations.
The send JMS destination is where the response message will be sent, if not
superseded by the JMSReplyTo header field in the incoming message.
Property Value
Data type Text
Default (if the send JMS destination is generated on the server at deployment
time): module_name/export_name_SEND_D
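The configured send destination therefore acts as a fallback: when the incoming request carries a JMSReplyTo header, the response goes there instead. Sketched below with a hypothetical message shape (this is not the SCA JMS runtime's API):

```python
# Hypothetical sketch: JMSReplyTo in the request, when present, supersedes
# the configured send destination for the response.
def response_destination(incoming_headers: dict, configured_send_dest: str) -> str:
    return incoming_headers.get("JMSReplyTo") or configured_send_dest

# JNDI names below are invented for illustration.
assert response_destination({"JMSReplyTo": "jms/replyQ"}, "jms/sendQ") == "jms/replyQ"
assert response_destination({}, "jms/sendQ") == "jms/sendQ"
```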
Choose the failed event replay connection factory for the JMS export binding. You
can either type the JNDI name of the failed event replay connection factory, or you
can use the Browse button to choose from a list of available connection factories.
The failed event replay connection factory is used by the system to create a
connection to the JMS provider in order to replay failed events.
Property Value
Data type Text
Choose the activation specification for the JMS export binding. You can either type
the JNDI name of the activation specification, or you can use the Browse button to
choose from a list of available activation specifications.
The activation specification is used to connect the JMS export to the bus and
destination on which request messages are received.
Property Value
Data type Text
The destination reported here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination from
which messages will be received is the one referenced from inside the activation
specification.
In all instances, the destination where incoming or request messages are placed is
the one found in the activation specification, not necessarily the one reported in
this field.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
export. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
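The pause and resume controls imply a simple state model: an Active endpoint can be paused, and a Paused endpoint resumed. A toy sketch of those transitions (conceptual only, not the runtime environment's implementation):

```python
# Conceptual sketch of receiving-endpoint states and the pause/resume actions.
TRANSITIONS = {
    ("Active", "pause"): "Paused",
    ("Paused", "resume"): "Active",
}

def apply_action(status: str, action: str) -> str:
    """Apply a pause/resume action; undefined combinations leave the state unchanged."""
    return TRANSITIONS.get((status, action), status)

assert apply_action("Active", "pause") == "Paused"
assert apply_action("Paused", "resume") == "Active"
assert apply_action("Stopped", "resume") == "Stopped"  # no transition defined
```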
Use this page to view or modify the attributes of the selected JMS import binding
or to manage the state of endpoints. The artifacts that your import binding requires
can be configured to be created on the server at deployment time, or you can
administer the JMS import binding to use artifacts that you created on the server.
v Use the Configuration tab to edit the JMS import binding settings.
v Use the Runtime tab to manage the state of all receiving endpoints defined for
the import. You can pause and then resume active endpoints.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [JMS].
Configuration tab:
Choose the connection factory that you want your JMS import binding to use. You
can either type the JNDI name of the connection factory, or you can use the
Browse button to choose from a list of available connection factories.
Property Value
Data type Text
Choose the send destination for the JMS import binding. You can either type the
JNDI name of the send JMS destination, or you can use the Browse button to
choose from a list of available destinations.
The send JMS destination is where the request, or outgoing message, will be sent.
Property Value
Data type Text
Choose the failed event replay connection factory for the JMS import binding. You
can either type the JNDI name of the failed event replay connection factory, or you
can use the Browse button to choose from a list of available connection factories.
The failed event replay connection factory is used to create a connection to the JMS
provider in order to replay failed events.
Property Value
Data type Text
Choose the activation specification for the JMS import binding. You can either type
the JNDI name of the activation specification, or you can use the Browse button to
choose from a list of available activation specifications.
Property Value
Data type Text
The destination reported here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination from
which messages will be received is the one referenced from inside the activation
specification.
Note that if you create your own resources or modify the generated activation
specification to point to a different destination, this field still reports the original
value that was defined in IBM Integration Designer.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
import. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
SCA bindings:
Use this page to display the attributes of the selected Service Component
Architecture (SCA) export binding.
To view this administrative console page, click Applications > SCA modules >
[Content pane] module_name > [Module components] Exports > export_name>
Binding > binding_name [SCA].
The module name, export name, and export interface name are displayed. The
page is read-only.
Module:
Specifies the module that contains the export with this export binding.
Snapshot or version:
Specifies the SCA module version if the module is versioned, or the snapshot name
and acronym if the module is part of a process application.
Property Value
Data type Integer
Required? Yes, if the module is versioned or is part of a
process application
Cell ID:
Property Value
Data type Text
Required? Required if the module is part of a process
application
Valid values Process application or Toolkit
Process application:
Specifies the full name and acronym for the process application that contains the
module. If this module is associated to a process application, use the Process
Admin Console to manage its state.
Property Value
Data type Text
Required? Required if the module is part of a process
application
Track:
Specifies the full name and acronym of the track associated with the process
application snapshot. Snapshots can have a track if track development is enabled
in the Process Center and can be applied for playback on the Process Center
Server. Snapshots deployed on a process server do not have tracks.
Property Value
Data type Text
Required? No
Export:
Export interfaces:
Contains the list of the export interfaces for the export of this module.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [SCA].
The details shown include information about the Advanced Integration service
provider that the module is using. The administrative console refers to the
Advanced Integration service provider as the target. The details displayed include
the target module of the selected export.
Configuration tab:
Specifies configuration properties for this object. These property values are
preserved even if the runtime environment is stopped and restarted.
General properties:
Module:
Specifies the module that contains the import with this import binding.
Property Value
Data type Text
Version:
Snapshot or version:
Specifies the SCA module version if the module is versioned, or the snapshot name
and acronym if the module is part of a process application.
Property Value
Data type Integer
Required? Yes, if the module is versioned or is part of a
process application
Cell ID:
Property Value
Data type Text
Required? Required if the module is part of a process
application
Valid values Process application or Toolkit
Process application:
Specifies the full name and acronym for the process application that contains the
module. If this module is associated to a process application, use the Process
Admin Console to manage its state.
Property Value
Data type Text
Required? Required if the module is part of a process
application
Track:
Specifies the full name and acronym of the track associated with the process
application snapshot. Snapshots can have a track if track development is enabled
in the Process Center and can be applied for playback on the Process Center
Server. Snapshots deployed on a process server do not have tracks.
Property Value
Data type Text
Required? No
Import:
Property Value
Data type Text
Import Interfaces:
Target:
Related Items:
Provides links to pages where you can perform tasks related to the SCA module.
Link Task
Owning module: Configure the details of the owning SCA module
Target module: Configure the details of the target SCA module
Target export binding: View the attributes of the selected SCA export binding
Target export interfaces: Provides links to the target export interfaces panel (WSDL interface definition panel)
Use this page to view or modify the attributes of the selected Web service export
binding.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [Web service].
Configuration tab:
Specifies the configuration properties for this object. These property values are
preserved even if the runtime environment is stopped and restarted.
General properties:
Service:
Port:
Property Value
Data type Text
Required? No
Endpoint address:
Property Value
Data type Text
Required? No
Displays links to pages where you can perform tasks related to the Web service
export binding.
Link Task
Manage Export Binding Web Module: Configure deployment-specific information for the Web module
Context Root: Configure the context root for the Web module
Virtual Hosts: Specify the virtual host where you want to install the Web modules that are contained in your application
JSP reload options for Web modules: Specify JSP reload options for Web modules
Session management: Configure session manager properties specific to this application
Related Properties:
Displays links to pages where you can view the WSDL definition for this interface
or view all interfaces.
Use this page to view or modify the attributes of the selected Web service import
binding.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [Web service].
Configuration tab:
General properties:
Service:
Property Value
Data type Text
Required? No
Port:
Property Value
Data type Text
Required? No
Endpoint:
If you change the endpoint, ensure that the value is a well-formed URL (for
example: http://localhost:9080/RealtimeService/services/
RealtimeServiceSOAP).
Property Value
Data type Text
Required? Yes
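A well-formed URL here means at least a scheme and a host, as in the example above. A quick way to check a candidate endpoint value with the Python standard library (an illustrative sketch, not part of the administrative console):

```python
# Illustrative sketch: check that an endpoint value is a well-formed URL,
# meaning it has both a scheme and a network location (host).
from urllib.parse import urlparse

def is_well_formed(url: str) -> bool:
    parsed = urlparse(url)
    return bool(parsed.scheme) and bool(parsed.netloc)

assert is_well_formed("http://localhost:9080/RealtimeService/services/RealtimeServiceSOAP")
assert not is_well_formed("RealtimeService/services")  # no scheme or host
```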
Use this page to view the attributes of the selected Web service (JAX-WS) export
binding.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [JAX-Web service].
Configuration tab:
Specifies the configuration properties for this object. These property values are
preserved even if the runtime environment is stopped and restarted.
General properties:
Service:
Port:
Property Value
Data type Text
Required? No default value
Endpoint address:
Property Value
Data type Text
Required? No default value
You can attach policy sets using the administrative console or IBM Integration
Designer, but you can assign policy set bindings only on the administrative
console.
Note: If there is more than one Web service (JAX-WS) export binding in a module
with an associated WS-ReliableMessaging policy, the policy details and policy
binding must match across the exports.
For more information on configuring policy sets for Web service applications, see
the “Web services policy sets” topic in the WebSphere Application Server
information center. Note that the steps for configuring Web service applications for
use with SCA exports are slightly different from the steps provided. Adjust your
configuration accordingly.
Use this page to view the attributes of the selected Web service (JAX-WS) import
binding.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [JAX-Web service].
Specifies the configuration properties for this object. These property values are
preserved even if the runtime environment is stopped and restarted.
General properties:
Service:
Property Value
Data type Text
Required? No default value
Port:
Property Value
Data type Text
Required? No
The address where the bound Web service is available. Click Edit to change the
target endpoint address.
Property Value
Data type Text
Required? No
You can attach policy sets using the administrative console or IBM Integration
Designer, but you can assign policy set bindings only on the administrative
console.
Note: If there is more than one Web service (JAX-WS) import binding in a module
with an associated WS-ReliableMessaging policy, the policy details and policy
binding may be different across the imports.
For more information on configuring policy sets for Web service applications, see
the “Web services policy sets” topic in the WebSphere Application Server
information center. Note that the steps for configuring Web service applications for
use with SCA imports are slightly different from the steps provided. Adjust your
configuration accordingly.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [JAX-Web service] > Edit endpoint.
Configuration tab:
Specifies the configuration properties for this object. These property values are
preserved even if the runtime environment is stopped and restarted.
General properties:
Note: You must restart the application to use the revised endpoint value.
Property Value
Data type Text
Required? No default value
Restrictions The typed value must be a well-formed URL
Example http://localhost:9080/RealtimeService/
services/RealtimeServiceSOAP
Use this page to expand the WSDL definition and view its content.
Buttons
You can use the buttons described in the following table to control how much of
the WSDL definition the page displays.
Button Action
Collapse All Collapse the WSDL definition to hide its content.
Expand All Expand the WSDL definition to view its content.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [MQ JMS].
Configuration tab:
Choose the response connection factory that you want your WebSphere MQ JMS
export binding to use. You can either type the JNDI name of the connection factory,
or you can use the Browse button to choose from a list of available connection
factories.
Property Value
Data type Text
Choose the send destination for the WebSphere MQ JMS export binding. You can
either type the JNDI name of the send MQ JMS destination, or you can use the
Browse button to choose from a list of available destinations.
The send MQ JMS destination is where the response message will be sent, if not
superseded by the JMSReplyTo header field in the incoming message.
Property Value
Data type Text
Choose the failed event replay connection factory for the WebSphere MQ JMS
export binding. You can either type the JNDI name of the failed event replay
connection factory, or you can use the Browse button to choose from a list of
available connection factories.
The failed event replay connection factory is used to create a connection to the
WebSphere MQ JMS provider in order to replay failed events.
Property Value
Data type Text
Choose the activation specification for your WebSphere MQ JMS export binding.
You can either type the JNDI name of the activation specification, or you can use
the Browse button to choose from a list of available activation specifications.
The activation specification is used to connect the JMS export to the bus and
destination on which request messages are received.
Property Value
Data type Text
Shows the connection factory for the WebSphere MQ JMS export binding. This
field cannot be edited.
The connection factory is used by the SCA JMS runtime environment when the
send destination is on a different Queue Manager than the receive destination. This
field is read-only.
Property Value
Data type Text
Identifies the receive destination for the WebSphere MQ JMS export binding.
The destination shown here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination on
which inbound requests will be received is the one referenced from inside the
activation specification.
Note that if you create your own resources or modify the generated activation
specification to use a different destination, this field still reports the original value
that was defined in IBM Integration Designer.
In all instances, the destination where inbound requests are placed is the one
found in the activation specification, not necessarily the one reported in this field.
Data type: Text
Identifies the callback destination for the WebSphere MQ JMS export binding. The
callback destination is determined by the choice of response connection factory.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
export. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
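The pause and resume controls on the Runtime tab amount to a small state machine over the three status values listed above. The sketch below models that behavior only; the class and attribute names are invented for illustration and do not correspond to any product API.

```python
class ReceivingEndpoint:
    """Minimal model of the endpoint states listed above (illustrative)."""

    def __init__(self, node, server):
        self.node = node
        self.server = server
        self.status = "Active"   # endpoints start out active

    def pause(self):
        # Only an active endpoint can be paused.
        if self.status == "Active":
            self.status = "Paused"

    def resume(self):
        # Resuming returns a paused endpoint to the active state.
        if self.status == "Paused":
            self.status = "Active"

ep = ReceivingEndpoint("node01", "server1")
ep.pause()
print(ep.status)   # Paused
ep.resume()
print(ep.status)   # Active
```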
Use this page to view or modify the attributes of the selected WebSphere MQ JMS
import binding or to manage the state of endpoints. The artifacts that your binding
requires can be configured to be created on the server at deployment time, or you
can administer the WebSphere MQ JMS import binding to use artifacts that you
created on the server.
v Use the Configuration tab to edit the WebSphere MQ JMS import binding
settings.
v Use the Runtime tab to manage the state of all receiving endpoints defined for
the import. You can pause and then resume active endpoints.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [MQ JMS].
Configuration tab:
Choose the connection factory that you want your WebSphere MQ JMS import
binding to use. You can either type the JNDI name of the connection factory, or
you can use the Browse button to choose from a list of available connection
factories.
Data type: Text
The send MQ JMS destination is where the request, or outgoing message, will be
sent.
Data type: Text
Choose the failed event replay connection factory for the WebSphere MQ JMS
import binding. You can either type the JNDI name of the failed event replay
connection factory, or you can use the Browse button to choose from a list of
available connection factories.
The failed event replay connection factory is used by the system to create a
connection to the WebSphere MQ JMS provider in order to replay failed events.
Data type: Text
Choose the activation specification for your WebSphere MQ JMS import binding.
You can either type the JNDI name of the activation specification, or you can use
the Browse button to choose from a list of available activation specifications.
Data type: Text
Shows the response connection factory for the WebSphere MQ JMS import binding.
This field cannot be edited.
The response connection factory is used by the SCA JMS runtime environment
when the send destination is on a different Queue Manager than the receive
destination.
Data type: Text
Identifies the receive destination for the WebSphere MQ JMS import binding. The
receive destination is determined by the choice of response connection factory.
The destination reported here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination from
which messages will be received is the one referenced from inside the activation
specification.
Note that if you create your own resources or modify the generated activation
specification to point to a different destination, this field still reports the original
value that was defined in IBM Integration Designer.
In all instances, the destination where incoming or request messages are placed is
the one found in the activation specification, not necessarily the one reported in
this field.
Data type: Text
Identifies the callback destination for the WebSphere MQ JMS import binding. The
callback destination is determined by the choice of response connection factory.
The callback JMS destination is an SCA JMS System destination used to store
correlation information. Do not read from or write to this destination.
Data type: Text
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
import. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
WebSphere MQ bindings:
Use this page to view or modify the attributes of the selected native WebSphere
MQ export binding or to manage the state of endpoints. The artifacts that your
binding requires can be configured to be created on the server at deployment time,
or you can administer the native WebSphere MQ export binding to use artifacts
that you created on the server.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Exports > export_name >
Binding > binding_name [native MQ].
Configuration tab:
Choose the connection factory that you want your native WebSphere MQ export
binding to use. You can either type the JNDI name of the connection factory, or
you can use the Browse button to choose from a list of available connection
factories.
Data type: Text
Choose the send destination for the native WebSphere MQ export binding. You can
either type the JNDI name of the send MQ destination, or you can use the Browse
button to choose from a list of available destinations.
The send MQ destination is where the response message will be sent, if not
superseded by the ReplyToQ and ReplyToQMgr MQMD header fields in the
incoming message.
Data type: Text
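For the native binding, the same precedence applies to the pair of MQMD fields: ReplyToQ (and, optionally, ReplyToQMgr) in the incoming message supersedes the configured send MQ destination. A hedged sketch of that rule follows; all names are illustrative and this is not a product API.

```python
def resolve_mq_reply(reply_to_q, reply_to_qmgr, configured_send_destination):
    """Return (queue, queue_manager) for the response message.

    Mirrors the rule described above: ReplyToQ/ReplyToQMgr from the
    incoming MQMD header, when set, supersede the send MQ destination
    configured on the export binding. (Illustrative sketch only.)
    """
    if reply_to_q:
        # ReplyToQMgr is optional even when ReplyToQ is set.
        return (reply_to_q, reply_to_qmgr or None)
    return (configured_send_destination, None)

# Hypothetical queue names, for illustration only.
print(resolve_mq_reply("CALLER.REPLY.Q", "QM1", "SEND.Q"))
print(resolve_mq_reply(None, None, "SEND.Q"))
```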
Choose the activation specification for your native WebSphere MQ export binding.
You can either type the JNDI name of the activation specification, or you can use
the Browse button to choose from a list of available activation specifications.
Data type: Text
Identifies the receive destination for the native WebSphere MQ export binding.
The destination shown here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination on
which inbound requests will be received is the one referenced from inside the
activation specification.
Note that if you create your own resources or modify the generated activation
specification to use a different destination, this field still reports the original value
that was defined in IBM Integration Designer.
In all instances, the destination where inbound requests are placed is the one
found in the activation specification, not necessarily the one reported in this field.
Data type: Text
Identifies the callback destination for the native WebSphere MQ export binding.
The callback destination is determined by the choice of response connection
factory.
Data type: Text
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
export. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
Use this page to view or modify the attributes of the selected native WebSphere
MQ import binding or to manage the state of endpoints. The artifacts that your
binding requires can be configured to be automatically created at deployment time,
or you can administer the native WebSphere MQ import binding to use artifacts
that you created on the server.
v Use the Configuration tab to edit the WebSphere MQ import binding settings.
v Use the Runtime tab to manage the state of all receiving endpoints defined for
the import. You can pause and then resume active endpoints.
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > [Module components] Imports > import_name >
Binding > binding_name [native MQ].
Configuration tab:
Choose the connection factory that you want your native WebSphere MQ import
binding to use. You can either type the JNDI name of the connection factory, or
you can use the Browse button to choose from a list of available connection
factories.
Data type: Text
Choose the send destination for the native WebSphere MQ import binding. You
can either type the JNDI name of the send MQ destination, or you can use the
Browse button to choose from a list of available destinations.
The send MQ destination is where the request, or outgoing message, will be sent.
Data type: Text
Choose the activation specification for your native WebSphere MQ import binding.
You can either type the JNDI name of the activation specification, or you can
use the Browse button to choose from a list of available activation specifications.
Data type: Text
Identifies the receive destination for the native WebSphere MQ import binding.
The destination reported here is the destination that was defined when the
application was developed. The defined destination is used by the system to create
an activation specification when you deploy the application. The destination from
which messages will be received is the one referenced from inside the activation
specification.
Note that if you create your own resources or modify the generated activation
specification to point to a different destination, this field still reports the original
value that was defined in IBM Integration Designer.
In all instances, the destination where incoming or request messages are placed is
the one found in the activation specification, not necessarily the one reported in
this field.
Data type: Text
Identifies the callback destination for the native WebSphere MQ import binding.
Data type: Text
Runtime tab:
Use the Runtime tab to manage the state of all receiving endpoints defined for the
import. You can pause and then resume active endpoints.
Note: The Runtime tab applies only to applications on Version 7 of the runtime
environment.
The node, server, and status for each endpoint are listed in the Receiving
Endpoints table. The value for the Status column can be Active, Paused, or
Stopped.
Important: In deployment manager or custom profiles, you can modify the data
sources used for the SCA configuration, but you cannot modify (or unconfigure)
existing SCA support. In stand-alone server profiles, you cannot modify the
configured data sources either.
By default, this option is cleared if you are using a deployment manager or custom
(managed node) profile. If you are using a stand-alone server profile, SCA support
is already configured and this check box is selected.
When you select Support the Service Component Architecture components, the
Bus Member Location panel becomes active so you can specify where to host SCA
application destinations and messaging engines.
Specifies whether the SCA system bus (and, optionally, the SCA application bus)
destinations are hosted on the local deployment target or on a remote target. Use
the Local and Remote radio buttons in this panel to indicate the appropriate
location.
Bus members are always hosted locally for stand-alone server profiles.
Local:
Select this radio button if you want to create and host SCA applications and their
required messaging engines and Java Message Service (JMS) queue destinations on
the current cluster or server.
When you select Local, the System Bus Member panel and Application Bus
Member panel become active so you can create or modify the data source used for
each service integration bus.
Remote:
Select this radio button to host SCA applications on the local cluster or server
while using a remote cluster or server to host the JMS queue destinations and
messaging engines.
If you select Remote, use the associated drop-down list or New button to specify
the remote location you want to use. The drop-down list shows all deployment
targets that are configured as members of the SCA system bus, and the New
button opens the Browse Deployment Target page so you can add a single server
or cluster target to that list.
Note: If you use the Browse Deployment Target page to add a new target to the
list and navigate away from the Service Component Architecture page before
completing your SCA configuration, that target is removed from the list.
After you select a remote deployment target, the page displays the appropriate
information for that target in the System Bus Member panel. If the application bus
is enabled on the remote deployment target, the page also updates the table in the
Application Bus Member panel. If the application bus is not enabled on the
deployment target, you can enable it.
Many fields in this panel contain default values based on the IBM Business Process
Manager and WebSphere Enterprise Service Bus Common database (by default,
WPRCSDB) configured on the deployment target you have selected. You can accept
these default values or edit them.
In addition to the fields, the System Bus Member panel contains two buttons:
v Edit: Use the Edit button to modify the data source used for the system bus.
Clicking it opens the data source configuration page.
v Test Connection: Use the Test Connection button to verify that the data source
can contact and authenticate with the database. If a component manages the
data source, this test also verifies whether the database can be reached from the
configured scope and, if applicable, whether the schema is configured correctly.
Refer to the following sections for detailed descriptions of each field in this panel.
Note that you cannot edit these fields if you are using a stand-alone server profile.
Database name:
Specifies the database name used for this data source. The value must be the name
of an existing database.
If you have not yet configured a data source, Database name contains a default
value based on the detected database configuration (for example, WPRCSDB for
IBM Business Process Manager and WebSphere Enterprise Service Bus
configurations). You can change the default value by directly editing it within the
field or by clicking Edit and updating the data source properties.
Schema:
Enter the name of the database schema that contains the tables for the system bus
data source. This field is required if you are creating a new data source with a
database that supports schema names.
Each messaging engine stores its resources, such as tables, in a single schema. Each
database schema is used by one messaging engine only. Although every messaging
engine uses the same table names, its relationship with the schema gives each
messaging engine exclusive use of its own tables.
Note:
Databases that support schemas often have different requirements for specifying
the schema names. Refer to your database documentation and the database
configuration topics in the IBM Business Process Manager Information Center for
more information about creating and using schemas with messaging engines.
If you have not yet configured a data source, Schema contains a default value
based on the detected database configuration. This default value is unique among
all data sources that use the specified database instance in the same cell. You can
change the default value by directly editing it within the field or by clicking Edit
and updating the data source properties.
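Because every messaging engine uses the same table names, it is the schema qualifier alone that keeps each engine's tables separate. The sketch below models this with SQLite attached databases standing in for schemas; the schema and table names are invented for illustration and are not the product's actual table names.

```python
import sqlite3

# One connection, two attached in-memory databases acting as two schemas,
# one per messaging engine (illustrative model only).
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS me01")   # schema for engine 1
conn.execute("ATTACH DATABASE ':memory:' AS me02")   # schema for engine 2

# The same table name exists in each schema without clashing, so each
# engine has exclusive use of its own copy of the tables.
conn.execute("CREATE TABLE me01.SIBOWNER (me_uuid TEXT)")
conn.execute("CREATE TABLE me02.SIBOWNER (me_uuid TEXT)")
conn.execute("INSERT INTO me01.SIBOWNER VALUES ('engine-1')")
conn.execute("INSERT INTO me02.SIBOWNER VALUES ('engine-2')")

print(conn.execute("SELECT me_uuid FROM me01.SIBOWNER").fetchone()[0])
print(conn.execute("SELECT me_uuid FROM me02.SIBOWNER").fetchone()[0])
```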
Create tables:
Select this check box if you want the messaging engine to create the database
tables for the data sources.
This check box is optional. If you do not select it, the database administrator must
create the tables manually.
User name:
Enter the user ID used to connect to the system bus data source.
If you have not yet configured a data source, User name contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
Password:
Enter the password for the user specified in the User name field.
If you have not yet configured a data source, Password contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
Server:
Specify the name of the database server used by the system bus.
If you have not yet configured a data source, Server contains a default value based
on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
Provider:
Specify the database provider type used to create the messaging resources for the
system bus. The Java Database Connectivity (JDBC) provider you select determines
the type of database you can use for the data source.
If you have not yet configured a data source, Provider contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
If you are using a file store for your messaging engine, Provider is automatically
set to File Store. The file store option is available only during installation of a
stand-alone server profile.
Specifies the properties of the data source used by the SCA application bus. This
panel is active whenever you are creating a new SCA configuration for the cluster
or server and when you are editing the application bus data source properties for a
previous SCA configuration.
Many fields in this panel contain default values based on the IBM Business Process
Manager and WebSphere Enterprise Service Bus Common database (by default,
WPRCSDB) configured on the deployment target you have selected. You can accept
these default values or edit them.
In addition to the fields, the Application Bus Member panel contains two buttons:
v Edit: Use the Edit button to modify the data source used for the application bus.
Click this button to open the data source configuration page.
v Test Connection: Use the Test Connection button to verify that the data source
can contact and authenticate with the database. If a component manages the
data source, this test also verifies whether the database can be reached from the
configured scope and, if applicable, whether the schema is configured correctly.
Refer to the following sections for detailed descriptions of each field in this panel.
Note that you cannot edit these fields if you are using a stand-alone server profile.
Select this check box to configure SCA application bus support for WebSphere
Business Integration Adapters.
By default, this option is selected when you are creating a new SCA configuration.
It is required if you plan to deploy SCA applications that use WebSphere Business
Integration Adapters to your chosen deployment target.
Note: If you are not sure whether you plan to deploy these types of SCA
applications, consider selecting this option simply to ensure the support is
available if needed.
Database Instance:
Specifies the database instance used for this data source. The value must be the
name of an existing database instance (for example, WPRCSDB).
If you have not yet configured a data source, Database Instance contains a default
value based on the detected database configuration. You can change the default
value by directly editing it within the field or by clicking Edit and updating the
data source properties.
Schema:
Enter the name of the database schema that contains the tables for the application
bus data source. This field is required if you are creating a new data source with a
database that supports schema names.
Note: Databases that support schemas often have different requirements for
specifying the schema names. Refer to your database documentation and the
database configuration topics in the IBM Business Process Manager Information
Center for more information about creating and using schemas with messaging
engines.
If you have not yet configured a data source, Schema contains a default value
based on the detected database configuration. This default value is unique among
all data sources that use the specified database instance in the same cell. You can
change the default value by directly editing it within the field or by clicking Edit
and updating the data source properties.
Create tables:
Select this check box if you want the messaging engine to create the database
tables for the data sources.
This check box is optional. If you do not select it, the database administrator must
create the tables manually.
User name:
Enter the user ID used to connect to the application bus data source.
If you have not yet configured a data source, User name contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
Password:
Enter the password for the user specified in the User name field.
If you have not yet configured a data source, Password contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
Server:
Specify the name of the database server used by the application bus.
Provider:
Specify the database provider type used to create the messaging resources for the
application bus. The JDBC provider you select determines the type of database you
can use for the data source.
If you have not yet configured a data source, Provider contains a default value
based on the detected database configuration. You can change the default value by
directly editing it within the field or by clicking Edit and updating the data source
properties.
If you are using a file store for your messaging engine, Provider is automatically
set to File Store. The file store option is available only during installation of a
stand-alone server profile.
This field is required to create a new service integration bus data source for the
cluster or server. You cannot change this value once the application bus messaging
engine is configured.
Enable the service component architecture: Use the Enable the service component
architecture check box to specify that SCA applications can be deployed to the
current cluster or server.
Use a remote destination location: Select this radio button to specify that you want to
host SCA applications on the local cluster or server while hosting the required
messaging engines and destinations on a remote cluster or server.
If you select Use a remote destination location, use the associated drop-down list
or New button to specify the remote location you want to use. The drop-down list
shows all deployment targets that are configured as members of the SCA system
bus, and the New button opens the Browse Deployment Target page so you can
add a single server or cluster target to that list.
Note: If you use the Browse Deployment Target page to add a new target to the
list and navigate away from the Service Component Architecture page before
completing your SCA configuration, that target is removed from the list.
Configure a destination location locally: Select this radio button to specify that you
want to host SCA applications and their required messaging engines and
destinations on the current cluster or server. When you select the Configure a
destination location locally radio button, the System Bus Member and Application
Bus Member panels are available; they contain additional fields needed to
complete the configuration.
Specifies the properties of the data source used by the SCA system bus. This panel
is active whenever you are creating a new SCA configuration for the cluster or
server.
This field is optional and is cleared by default when you are creating a new SCA
configuration.
Data Source: Specify the name of the data source you want to use for the server or
cluster. You can use the drop-down menu to select an existing data source or click
New to define a new data source.
If you want to make changes to the selected data source before completing your
SCA configuration, click the Edit button next to the field to access the data source
configuration page.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store.
Schema: Use the Schema field to specify the name of the database schema that
contains the tables for the system bus data source.
Each messaging engine stores its resources, such as tables, in a single schema. Each
database schema is used by one messaging engine only. Although every messaging
engine uses the same table names, its relationship with the schema gives each
messaging engine exclusive use of its own tables.
Note: Databases that support schemas often have different requirements for
specifying the schema names. Refer to your database documentation and the
database configuration topics in the WebSphere Process Server Information Center
for more information about creating and using schemas with messaging engines.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store. It is required if you are creating or using a
data source with a database that supports schema names.
User name: Use the User name field to specify the ID used to connect to the
system bus data source.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store. It is required.
Password: Use the Password field to enter the password for the user specified in
the User name field above.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store. It is required.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store. It is required.
Create tables: Use the Create tables check box to specify that the messaging engine
must create the database tables for the data sources. If this option is not selected,
the administrator must create the database tables manually.
Specifies the properties of the data source used by the SCA application bus. This
panel is active whenever you are creating a new SCA configuration for the cluster
or server and when you are adding a new application bus configuration to an
existing SCA configuration.
Enable the WebSphere Business Integration Adapter components: Select this check box
to specify that you want to configure the SCA application bus with support for
WebSphere Business Integration Adapters.
By default, this option is selected when you are creating a new SCA configuration.
It is required if you plan to deploy SCA applications that use WebSphere Business
Integration Adapters to your chosen deployment target.
Note: If you are not sure whether you plan to deploy these types of SCA
applications, consider selecting this option simply to ensure the support is
available if needed.
Use Default Data Store: Select this check box if you want to use the default data
source for the server or cluster. The default data source uses the embedded
database on the local file system.
Data Source: Specify the name of the data source you want to use for the server or
cluster. You can use the drop-down menu to select an existing data source or click
New to define a new data source.
If you want to make changes to the selected data source before completing your
SCA configuration, click the Edit button next to the field to access the data source
configuration page.
This field is available only if you are creating a new configuration and have not
selected Use Default Data Store.
Schema: Use the Schema field to specify the name of the database schema that
contains the tables for the application bus data source.
Each messaging engine stores its resources, such as tables, in a single schema. Each
database schema is used by one messaging engine only. Although every messaging
engine uses the same table names, its relationship with the schema gives each
messaging engine exclusive use of its own tables.
Note: Databases that support schemas often have different requirements for
specifying the schema names. Refer to your database documentation and the
database configuration topics in the WebSphere Process Server Information Center
for more information about creating and using schemas with messaging engines.
This field is available only if you are creating a new configuration and have not
selected Use Default Data Store. It is required if you are creating or using a data
source with a database that supports schema names.
This field is available only if you are creating a new configuration and have not
selected Use Default Data Store. It is required.
Password: Use the Password field to enter the password for the user specified in
the User name field above.
This field is available only if you are creating a new configuration and have not
selected Use Default Data Store. It is required.
This field is available only if you are creating a new SCA configuration and have
not selected Use Default Data Store. It is required.
Create tables: Use the Create tables check box to specify that the messaging engine
must create the database tables for the data sources. If this option is not selected,
the administrator must create the database tables manually.
This field is available only if you are creating a new configuration and have not
selected Use Default Data Store. It is optional.
The service integration bus browser has two panes. The first pane (referred to in
the help as the tree pane) presents a navigation tree where you can browse the
service integration buses configured on the system. The second pane (referred to in
the help as the content pane) contains the collection and detail pages for the buses
and their individual components, such as messaging engines, queue points,
destinations, publication points, and mediation points. When you click an item in
the navigation tree pane, its corresponding collection or detail page opens in the
content pane.
Buses:
Use this page to view the configuration properties of a service integration bus.
Service integration buses support applications using message-based and
service-oriented architectures. A bus is a group of interconnected servers and
clusters, called bus members. Applications connect to a bus at one of the messaging
engines associated with its bus members.
To view this administrative console page, click Service integration > Service
Integration Bus Browser > [Tree Pane] bus_name.
All fields on this page are read-only. To edit the properties of a bus, use the
version of this page available by clicking Service Integration > Service Integration
Bus Browser > [Content Pane] bus_name.
Name: The name of the service integration bus. Each bus has a unique name.
Inter-engine transport chain: The transport chain used for communication between
the messaging engines in this bus. The default value is InboundBasicMessaging.
The transport chain corresponds to one of the transport chains defined in the
server's messaging engine inbound transports setting. All servers automatically
have a number of transport chains defined to them; it is also possible to create new
transport chains.
Discard messages: The option that controls what happens to messages left on a
deleted message point. If this option is selected, the messages are discarded;
otherwise, they are retained at a system exception destination.
Configuration reload enabled: The option to enable certain changes to the bus
configuration to be applied without having to restart the messaging engines.
High message threshold: The threshold above which the messaging system takes
action to limit the addition of more messages to a message point.
When a messaging engine is created on the bus, the value of this property is used
to set the default high message threshold for the messaging engine.
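The effect of a high message threshold can be sketched as a simple admission check on a message point. This is a behavioral model only; the class name and the hard rejection shown are illustrative (the messaging system may throttle producers rather than reject outright).

```python
class MessagePoint:
    """Minimal model of a message point with a high message threshold."""

    def __init__(self, high_message_threshold):
        self.high_message_threshold = high_message_threshold
        self.messages = []

    def put(self, message):
        # Above the threshold the messaging system takes action to limit
        # the addition of more messages (modeled here as a refusal).
        if len(self.messages) >= self.high_message_threshold:
            return False
        self.messages.append(message)
        return True

mp = MessagePoint(high_message_threshold=2)
print(mp.put("m1"), mp.put("m2"), mp.put("m3"))   # True True False
```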
Destinations:
Use this page to view and administer destinations. A bus destination is defined as
part of a service integration bus and is hosted by one or more locations within the
bus. Applications can attach to the destination as producers, consumers, or both to
exchange messages.
To view this administrative console page, click Service integration > Service
Integration Bus Browser > [Tree Pane] bus_name. Expand the tree view and then
click [Tree Pane] Destinations.
This page lists all destinations defined for a service integration bus, organized into
the following columns:
Identifier
The unique ID assigned to the destination.
Bus The name of the service integration bus that hosts the destination.
Type The destination's type (for example, queue or topic space).
Description
An optional description of the destination.
Mediation
The name of the mediation that mediates this destination. Note that
destinations do not have to be mediated.
You can browse or change the properties of a destination. Click its name from the
Identifier column and the Destination detail page opens, displaying information
about the destination. You can also create, delete, mediate, or unmediate a
destination by selecting the check box next to the destination and clicking the
appropriate button.
Messaging engine:
All fields on this page are read-only. To edit the properties of a messaging engine,
use the version of this page available by clicking Service Integration > Service
Integration Bus Browser > [content pane] bus_name > [content pane] Messaging
engines > [content pane] engine_name.
UUID: The universal unique identifier assigned by the system to this messaging
engine.
Message store type: The type of message store used: file store or data store. Once
the messaging engine has been created, the type cannot be changed.
High message threshold: The maximum number of messages the messaging engine
can place on its message points.
When the messaging engine is created, the high message threshold of the bus is
used to set the default value of this property for the messaging engine.
Target groups: A list of target groups with which the messaging engine can
register.
Custom target groups are a type of target group used by JMS connection factories.
When an application creates a connection to a service integration bus, it uses
connection factory properties to specify suitable messaging engines to connect to.
When a target type of Custom is specified in the connection factory targetType
property, the application connects to one of the messaging engines in the specified
target group. A particular messaging engine is selected from the group according
to the other connection factory properties that have been specified.
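As an illustration only, the selection can be sketched in Python. The data structures and the function are hypothetical, and the real selection happens inside the JMS connection factory, not in application code; the targetSignificance property named here is an assumption about how "Required" versus "Preferred" group membership is expressed.

```python
import random

def select_engine(engines, target_group, target_significance="Preferred"):
    """Sketch of picking a messaging engine for a connection request.

    engines: list of dicts like {"name": ..., "target_groups": [...]}
    target_group: the custom target group named by the connection factory
    target_significance: "Required" fails when no group member is available;
                         "Preferred" falls back to any engine on the bus.
    """
    members = [e for e in engines if target_group in e["target_groups"]]
    if members:
        return random.choice(members)["name"]
    if target_significance == "Required":
        raise RuntimeError("no messaging engine in target group %r" % target_group)
    return random.choice(engines)["name"]

engines = [
    {"name": "node1.server1-bus1", "target_groups": ["GOLD"]},
    {"name": "node2.server1-bus1", "target_groups": []},
]
print(select_engine(engines, "GOLD"))  # always the GOLD group member here
```

The sketch shows only the group-membership filter; the remaining connection factory properties (proximity, transport chain, and so on) further narrow the choice in the real product.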
Bus name: The name of the service integration bus on which this messaging
engine is configured.
Bus UUID: The universal unique identifier of the service integration bus on which
this messaging engine is configured.
Use this page to view the message points for queues, for point-to-point messaging.
This page lists all queue points for the selected engine, organized into the
following columns:
Queue depth
The number of messages on the message point. Click the value in this
column to navigate to the Messages page to view the messages. If the
messaging engine has stopped, queue depth information is unavailable and
the column displays a warning icon.
Identifier
The unique ID assigned to a queue point. Click the identifier for a queue
point to navigate to the Queue Point detail page to view its properties.
UUID: The universal unique identifier assigned by the system to this message
point.
Destination type: The type of destination for this message point. The valid values
are Queue and Topic space.
High message threshold: The threshold above which the messaging system prevents
new messages from being added to this messaging point.
Send allowed: When this option is selected, messages can be put onto this message point.
Target UUID: The universal unique identifier of the bus destination to which this
message point belongs.
This page is divided into two sections. The first section contains a collection table
that lists each message on the message point. The second section contains Runtime
and Message Body tabs that are used to display a selected message and its runtime
properties.
Note that this page provides a snapshot of the set of messages queued on the
message point. Because it is only a snapshot, a listed message may no longer exist
when you attempt to view it or modify its properties.
Runtime tab: The Runtime tab displays all the runtime properties associated with a
message. You cannot edit these properties.
Identifier: The unique identifier for the message, assigned by the system.
State: The current state of the message as it relates to the transaction the message
belongs to.
Transaction ID: The local transaction identifier of the transaction that this message
is currently part of. You cannot edit this field.
Time stamp: The date and time associated with the message.
Message wait time: The amount of time, in milliseconds, that the message has been
waiting to be consumed.
Current messaging engine arrival time: The time that the message arrived on the
current messaging engine.
Redelivered count: The number of times the message has been redelivered.
Exception destination timestamp: The date and time at which the message was put
to the exception destination.
Exception destination reason: The reason the message was put to the exception
destination.
If the message body type is XML, the tab defaults to the XML view; use the
Expand All and Collapse All buttons in this view to manage the amount of
content displayed on the page.
If the message body type is not XML or if the displayed message body size is less
than the actual message body size, the tab defaults to the hex view.
Approximate total message size: Specifies the approximate size, in bytes, of the
current message in the hex view. You cannot edit this field.
Displayed message body size: Specifies the size, in bytes, of the message body
displayed in the hex view of this tab. The default value is 65535.
You can change the displayed message body size by entering a new value in the
field and clicking Apply. The field accepts an integer between 1 and 2147483647.
The new value persists for the session or until you click Reset to return to the
default value.
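The truncation rule can be sketched as follows. The helper function is hypothetical, not console code; it simply renders at most the displayed-size bytes of the body, which is why a displayed size smaller than the actual body size means the hex view shows a truncated body.

```python
DEFAULT_BODY_SIZE = 65535  # default displayed message body size, in bytes

def hex_view(body, displayed_size=DEFAULT_BODY_SIZE):
    """Render at most displayed_size bytes of a message body as hex,
    mirroring the console's truncated hex view."""
    if not 1 <= displayed_size <= 2147483647:  # range the field accepts
        raise ValueError("displayed message body size out of range")
    return " ".join("%02X" % b for b in body[:displayed_size])

print(hex_view(b"ESB"))     # 45 53 42
print(hex_view(b"ESB", 2))  # 45 53 -- body truncated to 2 bytes
```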
To view this administrative console page, click Applications > SCA Modules >
[Content pane] module_name > SCA system bus destinations.
The panel displays a list of the system bus destinations that belong to the selected
SCA module, with the identifier, bus name, description, type, and mediation for
each destination.
In a deployment environment, the server runs on a support cluster, while the agent
runs in the application cluster on the server where you deployed your module. In
a stand-alone server environment, the server and agent both run on the
stand-alone server.
Use the Service Monitor Agent page to configure the service monitor agent so it
can monitor response time and throughput on selected operations and send the
data to its service monitor server.
To view this page in the administrative console, click Servers > Server Types >
WebSphere application servers > servername > Service Monitor > Service
Monitor Agent.
Enables or disables the service monitor agent. The agent is enabled by default
when you configure a stand-alone server or deployment environment profile. It is
disabled by default for any new servers you create in the console.
If you need to disable the agent for maintenance or other reasons, the service
monitor stops and the agent does not send data to the server.
Enables or disables the automatic switch off for a service monitor point. When this
option is enabled, the monitor point remains active for the specified number of
seconds after receiving the last request. If no more requests are made, the monitor
point switches off to preserve system performance.
By default, this option is enabled and set to 120 seconds. Clear the check box to
disable the automatic switch off.
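The switch-off rule can be sketched with a small class. The class and its methods are hypothetical; times are plain seconds rather than wall-clock timestamps.

```python
class MonitorPoint:
    """Models the automatic switch-off: the point stays active for
    `timeout` seconds after the last request, then switches off."""

    def __init__(self, timeout=120):  # 120 seconds is the documented default
        self.timeout = timeout
        self.last_request = None

    def on_request(self, now):
        self.last_request = now  # each request restarts the countdown

    def is_active(self, now):
        if self.last_request is None:
            return False
        return now - self.last_request <= self.timeout

point = MonitorPoint()
point.on_request(now=0)
print(point.is_active(now=100))  # True: within 120 s of the last request
print(point.is_active(now=200))  # False: switched off to preserve performance
```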
Specifies the amount of time, in seconds, that can elapse between monitor data
transmissions.
By default, the data transmission interval is five (5) seconds. Reduce the interval if
you have a high volume of monitor data and find you are missing measurements.
Note: Reducing the interval is not recommended if your system experiences heavy
load.
Specifies the monitor data buffer size, in kilobytes. The buffer stores the monitored
Service Component Architecture (SCA) events before they are analyzed and sent to
the service monitor server.
By default, the data buffer size is 512 KB. If your system experiences heavy load,
consider increasing the buffer size. Note that if there is no memory available, data
that exceeds the buffer is discarded.
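The overflow behavior can be sketched like this. The class is hypothetical, and it counts events rather than kilobytes; the point is that once the buffer is full, new data is dropped rather than old data evicted.

```python
class EventBuffer:
    """Bounded buffer for monitored SCA events: overflow is discarded,
    mirroring the agent's behavior when the data buffer fills up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.events = []
        self.discarded = 0

    def add(self, event):
        if len(self.events) < self.capacity:
            self.events.append(event)
        else:
            self.discarded += 1  # buffer full: new data is dropped

    def drain(self):
        """Called at each transmission interval: send and clear the buffer."""
        batch, self.events = self.events, []
        return batch

buf = EventBuffer(capacity=2)
for e in ["e1", "e2", "e3"]:
    buf.add(e)
print(buf.events, buf.discarded)  # ['e1', 'e2'] 1 -- e3 exceeded the buffer
```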
Service Monitor:
Use the Service Monitor page to configure the service monitor server so that it can
gather and aggregate response time and throughput measurements from all
running service monitor agents. The server then calculates and stores the statistics
so you can access them in the Service Monitor widget.
To view this page in the administrative console, click Servers > Server Types >
WebSphere application servers > servername > Service Monitor > Service
Monitor.
The service monitor server must be enabled before you can use the Service
Monitor widget. In stand-alone server environments, the service monitor server is
enabled by default during profile creation. In deployment environments and for
new servers created using the administrative console, you must enable the service
monitor server manually from the administrative console. (To enable or disable the
Service Monitor in the administrative console, click Servers > Server Types >
WebSphere application servers > servername; under Business Integration, expand
the Service Monitor list, and then click Service Monitor.)
When the service monitor server is disabled, data is not collected from agents and
provided to clients. To disable the service monitor server, clear the Enable service
monitor check box.
Specifies, in megabytes (MB), the in-memory data store size for service monitoring
measurements. When the buffer reaches the specified size, the oldest entries are
deleted and replaced with new data.
By default, the data store size is 5 MB. Set the value to 0 (zero) to prevent the
server from storing measurements.
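Unlike the agent buffer, the server-side store evicts oldest-first, which a deque with a maximum length models directly. This is a sketch only; the real store is sized in megabytes, not entries.

```python
from collections import deque

# A 3-entry stand-in for the server's in-memory measurement store.
store = deque(maxlen=3)
for measurement in ["m1", "m2", "m3", "m4"]:
    store.append(measurement)  # when full, the oldest entry is evicted

print(list(store))  # ['m2', 'm3', 'm4'] -- m1 was the oldest and is gone
```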
By default, the query limit is 50,000 measurements. If you want to return all
matched measurements, clear the check box.
Indicates that you want to collect response time and throughput data from all
running service monitor agents in the scope defined for the service monitor server
(by default, the cell). Clear this option if you want to monitor only a subset of
service monitor agents.
If you clear Enable all service monitor agents, a collection table appears below it.
For a new configuration, the table is empty and you must use the Add button to
access the Browse Deployment Targets page and add additional agents (by default,
each deployment target has a configured service monitor agent) to the table.
All agents listed in the table are monitored. If you want to stop monitoring a
particular agent, select it from the table and click Delete.
To view this administrative console page, click Servers > Server Types >
WebSphere application servers > serverName > WebSphere Business Integration
Adapter Service.
From this page, you can click the links under Additional Properties to go to the
related pages:
v Click Custom Properties to specify name and value pairs of data. Use this page
to set internal system configuration properties.
v Click Manage the WebSphere Business Integration Adapter resources to reach
the administrative console page that allows you to send administrative
commands to eligible WebSphere Business Integration Adapter resources.
Select Enable service at server startup to have the service started automatically
when the server starts.
To view this administrative console page, click Servers > Server Types >
WebSphere application servers > serverName > WebSphere Business Integration
Adapter Service > Manage the WebSphere Business Integration Adapter
resources.
Use the command buttons to act on the WebSphere Business Integration Adapter
resources. Click a resource to select it from the list, and then click a command
button to submit that command for the resource. When you select a resource, a
detail page describing the configured fields appears. The resource detail page
displays in read-only mode.
Activate:
The Activate command changes the state of the connector from inactive to active.
In the active state, the connector performs both event delivery and request
processing.
Deactivate:
The Deactivate command changes the state of the connector from active to
inactive. In the inactive state, the connector process continues to run; however, the
connector no longer accepts new requests.
Suspend:
The Suspend command changes the state of the connector from active to
suspended. In the suspended state, the connector stops delivering events.
Resume:
The Resume command changes the state of the connector from suspended to
active. When it returns to the active state, the connector resumes event delivery.
Shut down:
The Shut down command shuts down the connector process. You cannot restart
the connector process from the administrative console. Instead, you must restart
the connector process with startup scripts installed with the WebSphere Business
Integration Adapter product.
Name:
Property Value
Data type String
Status:
Property Value
Data type String
Description:
The property value is a free-form text string, which describes the WebSphere
Business Integration Adapter resource and its purpose.
Property Value
Data type String
Specifies a list of JNDI names, which are used to create connections to the
associated WebSphere Business Integration Adapters JMS destinations.
Property Value
Data type Pick-list
Property Value
Data type Integer
Units Seconds
Default 30
Use this page to specify settings for a WebSphere Business Integration Adapter
resource.
To view this administrative console page, click Resources > WebSphere Business
Integration Adapters > New.
Scope:
Specifies the level and identity of the scope to which this WebSphere Business
Integration Adapter resource is visible (the cell, node, or server level).
Property Value
Data type String
Name:
The value is a string with no spaces, intended to be a meaningful text identifier for
the WebSphere Business Integration Adapter resource.
Property Value
Data type String
Description:
Property Value
Data type String
Category:
Property Value
Data type String
Specifies a list of JNDI names, which are used to create connections to the
associated WebSphere Business Integration Adapters JMS destinations.
Property Value
Data type Pick-list
Specifies a list of JNDI names for the queues that the server uses to send
administrative commands to the WebSphere Business Integration Adapter.
Property Value
Data type Pick-list
Specifies a list of JNDI names for the queues that the server uses to receive
administrative messages from the WebSphere Business Integration Adapter.
Property Value
Data type Pick-list
Message timeout:
Property Value
Data type Integer
Units Seconds
Default 30
To view this administrative console page, click Resources > WebSphere Business
Integration Adapters.
Scope:
Specifies the level and identity of the scope to which this WebSphere Business
Integration Adapter resource is visible (cell, node, or server level).
Property Value
Data type String
Cell:
The most general scope. Resources defined at the Cell scope are visible from all
nodes and servers, unless they have been overridden.
To view resources defined in the cell scope, do not specify a server or a node name
in the scope selection form.
Property Value
Data type String
Node:
The default scope for most WebSphere Business Integration Adapter resources.
Resources defined at the Node scope override any duplicates defined at the Cell
scope and are visible to all servers on the same node, unless they have been
overridden at a server scope on that node.
Property Value
Data type String
Server:
The most specific scope for defining WebSphere Business Integration Adapter
resources. Resources defined at the Server scope override any duplicate resource
definitions defined at the Cell scope or parent Node scope and are visible only to a
specific server.
Property Value
Data type String
Name:
The value is a string with no spaces, intended to be a meaningful text identifier for
the WebSphere Business Integration Adapter resource.
Description:
The value is a free-form text string to describe the WebSphere Business Integration
Adapter resource and its purpose.
Category:
Specifies a list of JNDI names, which are used to create connections to the
associated WebSphere Business Integration Adapters JMS destinations.
Use this page to specify settings for a WebSphere Business Integration Adapter
resource.
To view this administrative console page, click Resources > WebSphere Business
Integration Adapters > New.
Scope:
Specifies the level and identity of the scope to which this WebSphere Business
Integration Adapter resource is visible (the cell, node, or server level).
Property Value
Data type String
Name:
A string with no spaces meant to be a meaningful text identifier for the WebSphere
Business Integration Adapter resource.
Property Value
Data type String
Description:
Property Value
Data type String
Category:
Property Value
Data type String
Specifies a list of JNDI names, which are used to create connections to the
associated WebSphere Business Integration Adapters JMS destinations.
Property Value
Data type Pick-list
Specifies a list of JNDI names for the queues WebSphere Application Server uses to
send administrative commands to the WebSphere Business Integration Adapter.
Property Value
Data type Pick-list
Specifies a list of JNDI names for the queues WebSphere Application Server uses to
receive administrative messages from the WebSphere Business Integration Adapter.
Property Value
Data type Pick-list
Message timeout:
Property Value
Data type Integer
Units Seconds
Default 30
Application Scheduler
The Application Scheduler allows an administrator to create and administer a
schedule for starting or stopping IBM Business Process Manager applications. It is
available from the administrative console. Application Scheduler entries can be
created for any installed Business Process Manager application.
Use this page to view existing events, create new events, and delete events.
To view this administrative console page, click Servers > Server Types >
WebSphere application servers > [Content pane] server_name > [Business
Integration] Application Scheduler.
This page displays the following information about each scheduled event:
v Schedule Entry ID: The name of the scheduled event. (This is automatically
assigned after you have finished creating the event.)
v Group Application: The name of the application associated with the event.
v Status: The current status of the event (scheduled, suspended, completed,
running, canceled or invalid).
v Initial Date: The date and time the event was initially fired.
v Action: The current action associated with the event; it specifies whether the
event has started or stopped firing.
v Next Fire Time: The date and time the event will fire next.
Application Scheduler settings: Use this page to create new events or modify
existing events.
Group Application:
The Group Application field specifies the application associated with the
scheduled event.
Select the application for which to schedule an event from the list of applications.
Status:
Scheduled: A scheduled event fires at a predetermined date, time, and interval.
Each subsequent firing time is calculated.
Suspended: A suspended event does not fire until its status is changed to
Scheduled.
Complete: The scheduled event is completed.
Running: The scheduled event is in the midst of firing.
Note: This status is rarely seen, because it applies only for the very short time
during which the event is firing.
Canceled: The scheduled event has been canceled. The task does not fire, cannot
be resumed, and can be purged.
Invalid: The scheduled event is invalid. A task can be invalid if it has been purged
or if the information used to query for that task is not valid.
Note: Selecting this status results in an error message.
Initial Date:
The Initial Date field specifies the date that the event is initially fired.
The date must be in the format mmm dd, yyyy, where mmm specifies the first three
letters of the month. An example of a valid value is Apr 21, 2005. The value for
this field must be a date formatted for your locale. By default, the field is
populated with a date formatted in the locale of the client, which may differ from
the locale of the machine or server.
Property Value
Required Yes
Initial Time:
The Initial Time field specifies the time that the event is initially fired.
Time is based on a 12-hour clock and must be in the format hh:mm:ss meridiem
time_zone, where meridiem specifies either AM or PM and time_zone specifies the
three-letter abbreviation for the time zone. An example of a valid value is 10:56:11
AM CDT. The value for this field must be a time formatted for your locale. By
default, the field is populated with a time formatted in the locale of the client,
which may differ from the locale of the machine or server.
Property Value
Editable No
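For the English locale shown in the examples, the two documented formats map onto standard strptime patterns. This is a sketch only: the console localizes both fields, and time-zone handling is simplified here because abbreviated zone names parse unreliably with %Z.

```python
from datetime import datetime

# Initial Date: "mmm dd, yyyy", for example Apr 21, 2005
initial_date = datetime.strptime("Apr 21, 2005", "%b %d, %Y")

# Initial Time: "hh:mm:ss meridiem time_zone", for example 10:56:11 AM CDT.
# Split the zone abbreviation off before parsing the 12-hour clock value.
clock, meridiem, zone = "10:56:11 AM CDT".rsplit(" ", 2)
initial_time = datetime.strptime(clock + " " + meridiem, "%I:%M:%S %p").time()

fire_at = datetime.combine(initial_date.date(), initial_time)
print(fire_at)  # 2005-04-21 10:56:11 (zone kept only as the label "CDT")
```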
Next Fire Time:
The Next Fire Time field specifies the time that a created or modified event is
scheduled to fire.
This field is a concatenation of the Initial Date and Initial Time fields and appears
only after you click OK or Apply. This field cannot be edited.
Property Value
Editable No
Required Yes
Action:
Use the Action field to specify whether the application is going to be stopped or
started by selecting the desired event from the list.
Property Value
Required Yes
Recurrence:
If you create a recurring event, use the following parameters to specify when it
recurs. This field is required for recurring events, but not for other events.
Start-by-period: This parameter has two fields, Value and Unit, which together
specify a length of time in minutes, hours, days, months, or years (for example,
10 minutes). These fields determine the window of time during which the
Application Scheduler attempts to fire an event that was unable to fire at its
scheduled time. Selecting the check box for this parameter sets the recurrence of
an event within a certain time period. For example, if the Application Scheduler
resumes operation and is able to fire within the end time specified by
Start-by-period, it fires the event. Otherwise, the event does not fire until its next
scheduled fire time.
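The window check amounts to a simple comparison, sketched below with a hypothetical helper; the scheduler's internal algorithm is not documented.

```python
from datetime import datetime, timedelta

def should_fire_late(scheduled, now, start_by):
    """Decide whether a missed event still fires.

    scheduled: the event's scheduled fire time
    now: when the Application Scheduler resumes operation
    start_by: the Start-by-period window (Value + Unit) as a timedelta
    Returns True when 'now' falls inside the start-by window, so the
    scheduler fires the event late; otherwise the event waits for its
    next scheduled fire time.
    """
    return scheduled <= now <= scheduled + start_by

sched = datetime(2005, 4, 21, 10, 0, 0)
window = timedelta(minutes=10)
print(should_fire_late(sched, datetime(2005, 4, 21, 10, 5, 0), window))   # True
print(should_fire_late(sched, datetime(2005, 4, 21, 10, 20, 0), window))  # False
```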
Help files for business integration console panels for Process Server and
WebSphere Enterprise Service Bus are provided in the WebSphere Business Process
Management information center, under Reference > Administrative console help,
at http://www14.software.ibm.com/webapp/wsbroker/redirect?version=wbpm700
&product=wesb-dist&topic=welc_ref_adm_help.
Note: This set of help files is intended as a common reference lookup, rather than
as a definition specific to one product or the other. Some business integration
console panels are provided in only one of the products.
Help files for other console panels provided by the underlying WebSphere
Application Server are available as topics in the WebSphere Application Server
information center, at http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/
topic/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/
welc_ref_adm_help.html.
Partially stopped: The entity is partially stopped; not all of the entities involved have stopped.
Partially started: The entity is partially started; not all of the entities involved have started.
The function status is typically used for clusters that perform a given function. A
cluster is an example of a redundant entity state where its cluster members make
up the redundant parts. A deployment environment status is an example of a
minimum entity state where all functions need to be available for the deployment
environment to be available.
Table 82. Aggregated state of entities

Unknown
Minimum entities state: At least one of the minimum entities' states is unknown, making the entire state unknown.
Redundant entities state: The configuration did not complete for the deployment environment.

Unavailable
Minimum entities state: At least one of the minimum entities is unavailable.
Redundant entities state: All of the entities in the deployment environment are unavailable.

Stopped
Minimum entities state: All entities are stopped.
Redundant entities state: All minimum entities are stopped. If some entities are not stopped, those entities have problems.

Partially stopped
Minimum entities state: There is at least one partially stopped entity and any number of stopped entities.
Redundant entities state: At least one partially stopped or stopped entity and any number of unavailable entities.

Running
Minimum entities state: All entities are running.
Redundant entities state: All minimum entities are running. If some entities are not running, those entities have problems.

Partially started
Minimum entities state: There is at least one running entity and any number of stopped, partially stopped, or partially running entities.
Redundant entities state: At least one partially running or running entity and any number of stopped, partially stopped, or unavailable entities.
Partially stopped: The deployment environment is available but at least one
function is stopped or partially stopped.
Partially running: The deployment environment is available but at least one
function is partially running.
Running: The deployment environment is available and all functions are running.
Web services
Note, however, that creating a single remote messaging engine also creates a single
point of failure for incoming adapter input. This can be counteracted by protecting
the messaging engine using an external Operating System High Availability (HA)
management software package, such as HACMP™, Veritas or Microsoft Cluster
Server.
If you are using the WebSphere ESB documentation plug-ins in your own local
information center, you can also add the Application Server documentation
plug-ins into the same information center. You can download the documentation
plug-ins for WebSphere Application Server Version 7 from the library page. For
example, IBM WebSphere Help System plug-ins: WebSphere Application Server
Network Deployment (All operating systems).
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
IBM Corporation
Software Interoperability Coordinator, Department 49XA
3605 Highway 52 N
Rochester, MN 55901
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
This book contains information on intended programming interfaces that allow the
customer to write programs to obtain the services of WebSphere ESB.
However, this information may also contain diagnosis, modification, and tuning
information. Diagnosis, modification and tuning information is provided to help
you debug your application software.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks of IBM Corporation, registered in
many jurisdictions worldwide. A current list of IBM trademarks is available on the
Web at “Copyright and trademark information” www.ibm.com/legal/
copytrade.shtml. Other product and service names might be trademarks of IBM or
other companies.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
Sending your comments to IBM
If you especially like or dislike anything about this book, please use one of the
methods listed below to send your comments to IBM.
Feel free to comment on what you regard as specific errors or omissions, and on
the accuracy, organization, subject matter, or completeness of this book.
Please limit your comments to the information in this book and the way in which
the information is presented.
To make comments about the functions of IBM products or systems, talk to your
IBM representative or to your IBM authorized remarketer.
When you send comments to IBM, you grant IBM a nonexclusive right to use or
distribute your comments in any way it believes appropriate, without incurring
any obligation to you.
You can send your comments to IBM in any of the following ways:
v By mail, to this address:
SC34-7237-01