metaController Guide
Basic Concepts
Nodes, Links and Directed Acyclic Graphs
A metaController application is assembled from a set of nodes linked together in a
directed acyclic graph (DAG). A node represents an atomic process that is executed
as a unit. The DAG defines a dependency relationship: in order to execute a node,
all of its predecessors must have been executed. The possible statuses of a
node are: NOTREACHED, READY2GO, INPROCESS, COMPLETED,
ERROR, CACHED, and PSEUDO-COMPLETED.
Nodes without a predecessor are called start nodes, and nodes without a successor
are called end nodes. The metaController initially schedules for execution the set of
start nodes since these have no predecessors.
Regular links (also called links hereafter) establish the predecessor → successor
relationship. Error links (displayed in red and dashed) are only followed when the
status of the source node is ERROR, and in this case the regular links are not
followed.
NOTREACHED is the status of a node that cannot be considered for execution; its
color code is transparent. READY2GO is the status of a node that satisfies the
condition to be activated and its color code is blue. INPROCESS is the status of a
node started for execution (an external process has been spawned), and its color code
is yellow. COMPLETED is the status of a node whose execution completed
normally; its color code is green. ERROR is the status of a node whose execution
completed with an error condition; its color code is red. For a description of the
CACHED and PSEUDO-COMPLETED statuses, see the Advanced Topics
section.
Basic Usage
The metaController Client component allows a user to create process flows in
separate windows and save them as XML files or as runnable processes in the
metaController repository. Flow designers could work on portions of a larger flow offline.
Creating a Node
Drag and Drop Drag a node from the metaController palette and drop it to the
active document.
Click Once Click once on a node in the palette and click again in the
document.
Double Click Double click on a node in the palette and click a number of times
in the active document to create that many nodes. To deselect the
“multiple drop” mode, click anywhere in any palette.
Creating a Link
Click Once Click once on a link in the palette and then click the source node
first and the target node second.
Double Click Double click on a link in the palette and then click a number of
times in the active document, first on the source node and then
on the destination node. To deselect the “multiple drop” mode,
click anywhere in any palette.
Selecting an Object
Click on the object. The object inspector (if open) will display all object attributes. To
select multiple nodes, hold down the Shift key and click the additional objects.
Alternatively, select all objects in a rectangle by holding down the right mouse button
and dragging a rectangle around them.
To read a flow from a file, just click the Open button on the tool bar and select the
file.
When the user authentication fails or another problem occurs, an error message is
displayed at the bottom of the window. The message disappears as soon as the user
modifies any of the window fields.
Server URL The server DNS name or IP address along with the port number,
separated by a colon (“:”). For example, zeus:8001.
While designing and testing, one can overwrite the existing process flow (save over an
existing flow). If there is a structural change in the new flow as compared to the old
flow, metaController will delete the event and message logs. As a result, in a
production domain we do not recommend saving over an existing flow, unless there
is no structural change in the flow or the run history is not relevant.
The Select Domain & Version window contains all metaController repository
authenticated connections, for each connection all available domains, and for each
domain, all versions.
The Select Domain & Version window allows the user to select a process from the
tree and either open it or remove it from the repository. The user also has the option
to lock the flow being opened so that other users cannot access it for modification
concurrently.
When none of the existing repository connections contains the desired flow due to
authentication reasons, the user can create a new connection with the right credentials
by clicking the Add Server button. This will re-populate the tree including the new
repository connection node.
Remove Deletes a flow from the repository. Disabled when the selected
tree node is not a flow.
Add Server Allows logging in as a new user and getting the domains and
versions for that user. Displays the login window and recreates
the tree.
Version Name When a version node is selected, this field contains the version
name for that node; otherwise it is blank.
Description When a user selects a version node, this field contains the version
description for that node; otherwise it is blank.
Lock Process Allows locking of the process to be opened so that other users
cannot access that process.
Once you have a flow schema in the active window and you want to save it to the
database for processing, click the Save to Repository button on the tool bar. You will
be prompted to specify a domain and version where you want the flow to be saved.
If the selected version is already opened in a client window, the Open Process Action
Option window allows you to open the process in a new window or in an existing
window. Multiple instances of the same flow are distinguished by the instance
number (in square brackets).
Refresh Window Retrieves data from the database and redraws the selected
window.
Existing windows Titles of existing windows opened for the selected process. The
selected window is redrawn from the data retrieved from the
repository.
Monitor Mode
Monitoring Execution
Once a flow saved to the database is in the active window, you can monitor its
execution by clicking the Monitor button. This will receive all status changes for the
specified flow from the server as they occur and will display them color coded.
NOTREACHED is coded as transparent, READY2GO is coded as blue,
INPROCESS is coded as yellow, COMPLETED is coded as green, ERROR is
coded as red, CACHED is coded as magenta, and INPROCESS with “execution
exceeded historical limits” warning as orange.
In monitor mode, you have the option to display the execution log window. The
execution log window displays information about the execution of processes,
including start and end time.
System Time Selects the log rows generated after the specified date and time.
Show verbosity Shows log rows with a severity level equal to or higher than the
value selected. A user can select values between 1 and 10.
Show Client Log When this option is checked, only the client side log is
displayed.
Show Server Log When this option is checked, only the server side log is
displayed.
Close Closes window without saving any data to the XML file.
err_handler the name of an Error Handler node that represents the start
of an error handler flow. See the Error Handling paragraph
in the Advanced Topics section.
end the time and date when the node was last COMPLETED.
err if the status of the node is ERROR then this attribute contains a
relevant error message.
start the time and date when the node execution was started.
color the graph is colored such that all connected nodes have the same
color.
section the set of boundary nodes defines sections of nodes that cannot
have overlapping runs (see the paragraph on boundary nodes in
the Advanced Topics section).
Link Attributes
Each link has a set of attributes which specify particular behavior for the link.
condition_op if condition is value, this is one of: =, !=, <=, >=, <, >
pass_info Y/N – is used only for File nodes; a value of Y specifies that
the file name specified in this node is used in a successor
node via this link. A value of N specifies that the file
specified in this node is not used in the successor node.
When a Parameters node is evaluated, one or more (name, value) pairs are
generated. metaController parses most activity attributes and attempts to substitute
the variable construct <%=name%> with value. If multiple Parameters nodes
generate parameters with the same name, the substituted value is the one
generated last. As a result, if two Parameters nodes link to the same node and they
generate the same parameter, the substituted value is not deterministic.
In Figure 8, assume node [1] generates parameters (p11, v11), (p12, v12), (q, v13);
node [2] generates (p21, v21), (p22, v22), (r, v24); and node [3] generates (p31, v31),
(p32, v32), (q, v33), (r, v34). The following (name, value) pairs are available to node
Essbase Dim Build: (p11, v11), (p12, v12), (p21, v21), (p22, v22), (p31, v31), (p32,
v32). Also, either (q, v13) or (q, v33), depending on the order in which the parameters
have been generated. Finally, (r, v34), because this value was generated after (r,
v24). Assuming the value of attribute attrib of node Essbase Dim Build is
<%=q%>.<%=r%>, it will be expanded at run time to either v13.v34 or
v33.v34.
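As a further illustration (the statement and parameter names below are hypothetical, not
taken from the guide's samples), a node sql attribute written as

    SELECT * FROM <%=schema%>.daily_sales WHERE region = '<%=region%>'

would have the <%=schema%> and <%=region%> constructs replaced with the
corresponding parameter values at run time, before the statement is executed.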
Named Resources
A loose connection is built on the fly using the information in the resource and is
destroyed after use. A pool of connections is built when the metaController
server starts.
Pools of resources are used for performance reasons. Instead of creating a resource,
using it for a specific purpose and then destroying it, one can define a pool of
resources that are acquired, used, and released. The performance gain is due to the
fact that the ‘create’ and ‘destroy’ operations are usually more computationally
intensive than the ‘acquire’ and ‘release’ operations.
metaController built-in processes that use named resources have an attribute named
datasource that contains the name of the resource to be used. For example, in
the case of the Stored Procedure processor, the datasource attribute specifies the
database and credentials used to execute the stored procedure.
Data Sources
A data source is fully defined by the following set of attributes: User Id, Password,
JDBC URL, JDBC Driver, and Database Vendor. There is a built-in datasource that
points to the metaController repository. Its name is TEAMC and select attributes can
be further configured after the initial installation.
Message Sources
A message source is fully defined by the following set of attributes: JNDI User Id,
JNDI Password, JNDI Initial Context Factory, JMS Factory, JMS Queue Name, and
JMS URL.
Advanced Topics
Grouping
Grouping allows the logical gathering of processes into larger grain processes. The
goal of grouping is to decrease complexity while allowing drill down to the
component process level.
Within a group, all nodes without a predecessor are considered start nodes and all
nodes without a successor are considered end nodes. If a link connects a node and a
group, behind the scenes the node is connected to all start nodes of the group. If a
link connects a group and a node, behind the scenes all end nodes of the group are
connected to the node. If a link connects group 1 with group 2, behind the scenes all
end nodes of group 1 are connected to all start nodes of group 2. The attributes of
the generated links are the same as those of the original group links.
To group a set of nodes, just select all the nodes you want included in the group and
click the Group button in the metaController Editions tool bar.
In order to inactivate all successors of a node, but not the node itself, in Edit Mode
modify the action_successors attribute of the node.
To resume a stopped process, in Edit Mode one can modify the hold attribute value
to N and save the process flow to the database; or in Monitor Mode modify the
action_process attribute to RELEASE.
In the above scenario, the status of the node was ERROR. It is possible that a
process just crashes and does not set the status to ERROR; in fact the status just
remains INPROCESS. For example, it is possible that a File Watch process is waiting
for a file that the user just knows will never appear. In this case the process
administrator can still adjust the status to resume the process in two steps, as
discussed below.
Finally, a process can not only be restarted from the failure node, but also from a
previous node. The condition is that none of the successor nodes (via transitive
closure) is INPROCESS.
The possible values of the attribute are COMPLETED, READY2GO, and ERROR.
The COMPLETED value will set the status of the node to completed and will thus
allow subsequent nodes to continue. Make sure the data provided by the process is
available before adjusting the status, otherwise errors could be reported in subsequent
processes. The READY2GO value will simply restart the execution of the node.
If the status of a node is INPROCESS but the administrator knows that the actual
status is ERROR, or the node operation needs to be canceled, he can adjust the status
to ERROR by setting action_restart to ERROR. Subsequently, he can
change the status again to READY2GO or COMPLETED.
Finally, note the restrictions associated with the recovery action: (1) the allowed
transitions are from status ERROR to READY2GO or COMPLETED, from
COMPLETED to READY2GO, and from INPROCESS to ERROR; (2) at the
time of the COMPLETED to READY2GO transition, none of the successors of the
recovered node may be INPROCESS; (3) the recovered node must be in the same
section as the error node; (4) the recovered node must not be a direct or indirect
successor of the error node.
Boundary nodes allow multiple runs of the same flow to overlap. The set of all
boundary nodes defines sections within which only one run is active. To change a
regular node to a boundary node, just change its commit attribute to Y. Make sure
that the set of boundary nodes defines the intended sections (inspect the section
run time attribute of the process nodes). A boundary node is differentiated visually by
its circular shape (as opposed to the square shape of regular nodes).
When boundary node n1 is reached for run r1, metaController examines the section
that follows n1. If there are active nodes in that section (with run number r2) then
the r1 run is blocked at node n1, and its status becomes CACHED (color code
magenta). The execution of node n1 with run number r1 will be resumed as soon as
the section with run number r2 becomes inactive.
If the next file is available while the subsequent processes are still running, without
boundary nodes one would have to wait until the subsequent processes are
completed, and then wait until the data is processed in the work tables. By defining
the move stored procedure node as a boundary node, the loading of the next file can
overlap with the processing of the current one.
Figure 3 depicts the execution of example ex02, in which nodes 20, 23, and 24 are
boundary nodes and define four sections. Section 1 comprises nodes 1, 18, and 19;
section 2 comprises nodes 20, 21, and 22; section 3 comprises nodes 23, 25, 26, 29,
and 30; and section 4 comprises nodes 24, 27, 28, 30, and 31. Notice that node 30
belongs to both sections 3 and 4. As a result, when node 23 was unblocked, its
section was cleared and node 24 could no longer be unblocked, although initially both
sections 3 and 4 were candidates for clearing.
For a conditional link, the condition attribute must evaluate to value, the condition_op
must be one of =, !=, <=, >=, <, or >, and condition_val must evaluate to a double.
The $value$ parameter can be created either via the Parameters node or the Oracle
Procedure node (with one return value).
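As a minimal sketch (the table and threshold below are hypothetical, not taken from the
guide's samples), the $value$ parameter could be produced by a Parameters node whose
name_value query returns a single ($value$, number) pair; a conditional link with
condition set to value, condition_op set to >=, and condition_val set to 1 would then,
presumably, only be followed when the generated count is at least 1:

    -- hypothetical name_value query feeding a conditional link
    SELECT '$value$' AS par_name, COUNT(*) AS par_value
    FROM stg_incoming_files
    WHERE status = 'NEW'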
Subroutine Execution
Subroutine flows are sub-flows that are defined once and are invoked (concurrently)
many times from different places, similar to subroutine constructs in traditional
programming languages. Invocation of a subroutine flow is accomplished via a
SUBCALL node. The example in Figure 12 contains two subroutine flows
(FILECOLLECTION and FILEMOVE), and three SUBCALL nodes (two nodes in
the main flow - [12] Product Information and [22] Customer Information – and one
in a subroutine flow – [102] Designated 2 History).
For all records in the result set, one subroutine instance is generated and its execution
started. The value of the sub column is used to identify the subroutine flow to be
invoked, and all the other columns are passed as parameters to the invoked routine.
Assuming the above SQL statement returns two records, subroutine
FILECOLLECTION is invoked two times. Each invocation is passed one
parameter, FILE_NAME. Therefore, in any node of the FILECOLLECTION
subroutine we can use the parameter construct <%=FILE_NAME%>; in this
example, the parameter is used in the sql attribute of node [102] (SUBCALL).
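As an illustrative sketch only (the table and column names are hypothetical, not taken
from the example), a SUBCALL sql attribute that invokes FILECOLLECTION and passes a
FILE_NAME parameter has the following general shape:

    -- hypothetical table; the sub column names the subroutine flow to invoke,
    -- and every other column is passed as a parameter to the invoked instance
    SELECT 'FILECOLLECTION' AS sub, file_name AS FILE_NAME
    FROM pending_files
    WHERE processed = 'N'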
A subroutine flow is just a top level Group node (see the Grouping section above)
with a non-empty sql attribute. You can thus define a subroutine flow by selecting the
nodes to be included in the subroutine flow, clicking the Group button, and entering
a SQL statement in the Group sql attribute. Unlike top level groups, which are
executed as independent flows, subroutine flows are only executed via SUBCALL
constructs.
The sql attribute of a subroutine flow node is used to get additional parameters for
the execution of the routine. It is processed like the column_based sql attribute of a
Parameters node. In the example in Figure 12, the sql attribute of the
FILECOLLECTION subroutine contains a statement that creates six parameters and
makes them available to successor nodes (FILE_GROUP, DESIGNATED_DIR,
HISTORY_DIR, SOURCE_DIR, TARGET_FILE_NAME, TIMEOUT).
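As a sketch only (the table and filter below are hypothetical, not the statement used in
the sample), a single-row, column based query of this kind could look like:

    -- hypothetical configuration table; one row, one column per generated parameter
    SELECT file_group AS FILE_GROUP,
           designated_dir AS DESIGNATED_DIR,
           history_dir AS HISTORY_DIR,
           source_dir AS SOURCE_DIR,
           target_file_name AS TARGET_FILE_NAME,
           timeout AS TIMEOUT
    FROM file_group_config
    WHERE group_name = 'PRODUCT'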
In order to automate the error handling process, metaController allows the definition
of Error Handling flows. For an error handling flow to be invoked when an
error occurs while executing a node, its name must be specified as the
err_handler attribute of the node. In the example in Figure 13, the
err_handler attribute of node [12] is myErrorHandler, which is the same as
the name attribute of the error handling flow (group node). metaController allows
two error recovery strategies: (1) via return codes – see Figure 13, and (2) via recover
actions – see Figure 14.
All parameters accessible to the node reporting the error are accessible to all nodes
within the body of the error handling routine.
From a node perspective, an error handling flow, like a subroutine flow, is a top level
Group, but with a non-empty return attribute as opposed to a non-empty sql
attribute.
In the example in Figure 13, the logic is as follows: if there is an error in node [12],
execute the myErrorHandler error handling flow, which will log the error
message in a table, then set the status of node [13] to COMPLETED, and continue
with node [14]. If there is no error, continue with node [13]. This way, the main flow
(starting with node [11]) can be re-executed without human intervention and the error
condition is logged.
After executing the error handling sub flow, metaController allows two continuations:
(1) assuming that the error handling sub flow has fixed the error, the main flow can
continue (this is the scenario discussed above); (2) the error handling sub flow has
only executed some actions that have not fixed the problem (e.g., has only sent e-mail
to the sys admin) and hence the main flow cannot continue. In this latter case, the
status of node [13] remains ERROR and neither node [13] nor node [14] is executed.
If there is at least one error link originating from the node with the error, the error
handling flow return code is not made available to the error node. Instead, when
control passes back to the error node, all error links are followed and the recovery
action specified on the link is executed. This behavior simply automates the recovery
options available to the administrator from the GUI level in Monitor mode (see the
recovery actions discussed above). Note also that the restrictions associated with the
recovery actions still apply to the automated recovery actions. In addition, one
cannot have an error link from a node
in a section to a subsequent node, or to a node in a prior section (defined by
boundary nodes). This latter restriction is because a previous section may be active,
and therefore the flow instance being corrected may interfere with a more recent
instance.
Finally, note that if a node is the origin of an error link and an error occurs within
that node, the recovery action will be executed regardless of whether an error handling
flow is associated with the node itself.
Notice that in the case of error handling via recovery actions, the value of attribute
‘err_handler_type’ is irrelevant, as the error links are always followed.
In Compressed Sampling mode, if the Compressed Window value is 0, all the events
that occur at the same time are lumped together, the status is displayed for a period of
“Display Interval” seconds, and then the next set of events is considered. Since
Oracle stores time in increments of one second, many events may be lumped
together. If the Compressed Window value is greater than 0, the execution of the
flow starts at the Start Time value input by the user, and all events that have occurred
between the Start Time and Start Time + Compression Window are lumped together
and the latest status is displayed. Next, the time is advanced to the time of the next
event after Start Time + Compression Window and the status of the flow at that time
is displayed. The playback continues in the same manner.
The interval between two consecutive status updates is the one selected in the Display
Interval box.
Playback buttons:
Step backward.
Step forward.
Reference – metaController
Automation Services
File Processes
File Watch
Description
A File Watch node is an agent process that completes when a file is detected on the
agent machine with the name and in the directory specified in its attributes. The node
could also report an error condition if the timeout interval has been exhausted and
the file has not appeared.
Attributes
agent the alias of the agent that runs this process
Example: localAgent
file the full path name of the file that is being watched
seq integer - this attribute is not used by the File Watch node itself.
Successor nodes (like File Concatenate) use the seq value to
define the order in which the files are being used (i.e.,
concatenated).
File Copy
Description
A File Copy node is an agent process that makes a copy of a predecessor file to a
different location, on the same agent machine. The location of the predecessor file is
the file attribute of a predecessor File node (Watch, Copy, Move, Concatenate, or
Essbase).
Attributes
agent the alias of the agent that runs this process;
Example: localAgent
seq integer - This attribute is not used by the File Copy node itself.
Successor nodes (like File Concatenate) use the seq value to
define the order in which the files are being used (i.e.,
concatenated).
File Move
Description
A File Move node is an agent process that moves a predecessor file to a different
location, on the same agent machine. The location of the predecessor file is the file
attribute of a predecessor File node (Watch, Copy, Move, Concatenate, or Essbase).
If multiple File nodes precede this node, only the first is moved; the rest are ignored.
Attributes
agent the alias of the agent that runs this process;
Example: localAgent
seq integer - this attribute is not used by the File Move node itself.
Successor nodes (like File Concatenate) use the seq value to
define the order in which the files are being used (i.e.,
concatenated).
File Concatenate
Description
A File Concatenate node is an agent process that concatenates a set of predecessor
files to a different location, on the same agent machine. The location of the
predecessor files is the file attribute of the predecessor File nodes (Watch, Copy,
Move, Concatenate, or Essbase). If multiple File nodes precede this node, the order
in which they are concatenated is given by the seq attribute of the predecessor
nodes.
Example: localAgent
seq integer - this attribute is not used by this File Concatenate node
itself. Successor nodes (like other File Concatenate) use the seq
value to define the order in which the files are being used (i.e.,
concatenated).
File Delete
Description
A File Delete node is an agent process that deletes a set of predecessor files, on the
same agent machine. The predecessor files to be deleted are the predecessor File
nodes (Watch, Copy, Move, Concatenate, or Essbase), provided that they are linked
to the File Delete node via pass_info links. The file attribute of the File
Delete node can be used to specify one additional file to be deleted. It can be left
blank as well.
Attributes
agent the alias of the agent that runs this process;
Example: localAgent
file the full path name of one file to be deleted. Can be left empty.
Attributes
alias an alias for the remote server connection. See the Remote
File Operations paragraph in the Basic Concepts section.
Example: localAgent
file the full path name of the file that is being watched.
Example 1: c:\Program
Files\tea\samples\03\emp1.dat;
Example 2: /home/user1/watch2.dat
seq integer - this attribute is not used by the File Watch node
itself. Successor nodes (like File Concatenate) use the seq
value to define the order in which the files are being used
(i.e., concatenated).
Attributes
alias an alias for the remote server connection. See the Remote
File Operations paragraph in the Basic Concepts section.
Example: localAgent
Attributes
alias an alias for the remote server connection. See the Remote
File Operations paragraph in the Basic Concepts section.
Example: localAgent
seq integer - this attribute is not used by the File Move node
itself. Successor nodes (like File Concatenate) use the seq
value to define the order in which the files are being used
(i.e., concatenated).
Attributes
alias an alias for the remote server connection. See the Remote
File Operations paragraph in the Basic Concepts section.
Example: localAgent
Attributes
alias an alias for the remote server connection. See the Remote
File Operations paragraph in the Basic Concepts section.
Example: localAgent
Attributes
agent the alias of the agent that runs this process;
Example: localAgent
direct the direct keyword value on the SQL Load command line.
Optional.
uidtns uid@tns, where uid and tns are the user id and TNS name,
respectively, used for the load operation. The password required
to access the database is stored in the metaController repository
using the Admin utility. If a password is not defined for the
uid@tns combination, the node reports a Database Configuration Error.
Stored Procedure
Description
A Stored Procedure node is a server process that executes a SQL stored procedure,
optionally returning one numerical result. The result, if applicable, is saved as a
metaController parameter named $value$ and is available to subsequent nodes in
the flow. The $value$ parameter is implicitly used by conditional links (see the
paragraph on Conditional Links in the Advanced Topics section).
Attributes
datasource the name of the datasource for this stored procedure. See the
Data Sources paragraph in the Basic Concepts section for more
details.
Example 2: myDB2DataSource
Example 1: tlm_sam_wait_5secs(<%=number_of_seconds%>)
OLAP Processes
Essbase File
Description
An Essbase File node is an agent process that waits for an Essbase rules file (similar
to a File Watch process). It contains additional attributes that are not used by the
node itself, but by successor Dim Build nodes.
Attributes
agent the alias of the agent that runs this process
datafile not used by the Essbase File node itself. A successor Essbase
Dim Build node uses the datafile value to build a
dimension. See Essbase Dim Build process below.
Example: d:\hyperion\essbase\app\APITest\DimData\view
errorfile not used by the Essbase File node itself. A successor Essbase
Dim Build node uses the errorfile value while building a
dimension. See Essbase Dim Build process below.
Example:
d:\hyperion\essbase\app\APITest\DimData\accounts.err
essFile the full path name of the rule file that is being watched
Example:
d:\hyperion\essbase\app\APITest\APITest\acctsql.rul
seq integer - this attribute is not used by the Essbase File node itself.
Successor nodes (like Essbase Dim Build) use the seq value to
define the order in which the files are being used. See Essbase
Dim Build process below.
uidtns uid@tns - this attribute is not used by the Essbase File node
itself. A successor Essbase Dim Build node uses the uidtns
value to build a dimension. See Essbase Dim Build process
below.
Example: tea_lm05@trvsora:1521:maf21
The set of dimensions is derived from the set of predecessor Essbase File nodes. The
dimensions are built in the order established by the seq attributes of all predecessor
nodes. In a predecessor Essbase File node, either a datafile or a uidtns
attribute must be present.
The credentials used to access the Essbase server are input into metaController via
the Admin utility, and associate the agent alias with the server IP address, user id, and
password. If such an association is not present, an Essbase Configuration Error is
reported.
Attributes
agent the alias of the agent that runs this process.
essApplication the name of the Application in which the dimensions are built.
essDatabase the name of the Database in which the dimensions are built.
Essbase Import
Description
An Essbase Import node is an agent process that imports a set of files into Essbase.
The files are extracted from predecessor nodes (File Watch or Essbase File).
Attributes
agent the alias of the agent that runs this process.
Essbase Calc
Description
An Essbase Calc node is an agent process that executes an Essbase calc script.
Attributes
agent the alias of the agent that runs this process.
calcDefault Y/N - specifies whether or not the default script should be used
(CALC ALL;).
Essbase Commands
Description
An Essbase Commands node is an agent process that executes an Essbase command.
Attributes
agent the alias of the agent that runs this process.
HyperRoll Attach
Description
A HyperRoll Attach node is an agent process that executes the HyperRoll Attach
command.
Attributes
agent the alias of the agent that runs this process.
HyperRoll Command
Description
A HyperRoll Command node is an agent process that executes a HyperRoll
command.
Attributes
agent the alias of the agent that runs this process.
Set HyperRoll
Description
A Set HyperRoll node is an agent process that executes a Set HyperRoll command.
Attributes
agent the alias of the agent that runs this process.
Miscellaneous Processes
Time
Description
A Time node is a server process that waits until a specified time and then becomes
COMPLETED. The attributes are similar to the Unix cron command parameters. If
all attributes are * the next scheduled event occurs at the beginning of the next
minute. All time related attributes (dayOfMonth, dayOfWeek, hour, minute, month,
year) accept lists of *’s or numbers separated by commas. The earliest time satisfying
the criteria is the scheduled time.
Attributes
allow_past Y/N - a value of N specifies that if the next scheduled date is in
the past then the result of the execution of this node is an
ERROR. With a value of Y, the node status is immediately changed to
COMPLETED.
dayOfMonth * or 1-31 - a value of * does not restrict the day of month portion
of the scheduled time. An integer value of 1 specifies that the
next scheduled event should occur on the 1st of the month,
whichever that month is.
dayOfWeek * or 0-6 - a value of * does not restrict the day of week portion of
the scheduled time. An integer value of 1 specifies that the next
scheduled event should occur on Sunday, whichever day of the
month that is.
hour * or 1-24 - a value of * does not restrict the hour portion of the
scheduled time. An integer value of 1 specifies that the next
scheduled event should occur at hour 1, whichever day that is.
minute * or 0-59 - a value of * does not restrict the minute portion of the
scheduled time. An integer value of 30 specifies that the next
scheduled event should occur on the next half hour mark.
Example: To schedule a task on the quarter hour, one needs to set the
minute value to the list 0,15,30,45.
OS Command
Description
An OS Command node is an agent process that executes an OS command (such as a
.bat file, or an executable program).
Attributes
agent the alias of the agent that runs this process.
Parameters
Description
A Parameters node is a server process that generates parameter (name, value) pairs
and makes them available to successor nodes. See the basic concepts section for a
discussion of how parameters are used. In addition to the name and sql attributes,
hard coded values for parameters can be created using the Add Property button in
the TEA Object Inspector. The name of the attribute is the parameter name and the
attribute value is the parameter value.
Attributes
datasource the name of the datasource used to execute the sql statement
below. See the Basic Concepts section for more details.
sql a sql statement that returns (1) a result set with 2 columns and as
many rows as necessary if the value of sql_type attribute is
name_value; and (2) a result set with as many columns as
necessary but one single row if the value of sql_type attribute
is column_based. In the first case the first column value is the
name of the parameter and the second column is the parameter
value. In the second case the column name is the parameter
name and the column value is the parameter value.
Example 1: SELECT par_name, par_value FROM TLM_SAM_PARAMS
WHERE test_id=14
will generate the following parameters: (agent, localAgent),
(demoroot, D:\Program Files\tea\mc\samples\14), (schema,
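For the column_based case, a sketch (assuming an Oracle data source; the values are
reused from the example above, and FROM dual is illustrative) could be:

    SELECT 'localAgent' AS agent,
           'D:\Program Files\tea\mc\samples\14' AS demoroot
    FROM dual

Such a single-row query would generate the parameters (agent, localAgent) and
(demoroot, D:\Program Files\tea\mc\samples\14), since each column name becomes a
parameter name and each column value becomes the parameter value.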
E-Mail
Description
An E-Mail node is a server process that sends mail to one or more recipients. All attributes can contain parameters, which
will be expanded before the mail message is sent out.
Attributes
attachments A list of files to be attached to the mail message. The files must be separated by commas
(“,”). Can be left empty.
from The e-mail address of the person on whose behalf the message will be sent.
JMS Watch
Description
A JMS Watch node is an agent process that consumes one message from a JMS
queue / topic. The JMS timestamp header value is made available to successor nodes
as the $jmsTimestamp$ parameter.
Attributes
agent The alias of the agent that runs the JMS Watch process.
jmsPool The name of the jms connection pool used to access the queue /
topic (defined via the Admin utility).
selector A JMS message selector expression used to filter the messages
consumed from the queue / topic.
Example: mod2='1'
The possible values returned by the DataStage Run Job node are: 1 - the job
completed normally; 2 - the job completed with warnings; 3 – the run failed; 4 – the
run was stopped; 5 – the job is not runnable (not compiled); 6 – the job is not
running; and 7 – unknown return code.
Attributes
agent The alias of the agent that runs the DataStage Run Job process.
Attributes
datasource The name of the datasource used to execute the sql
statement below. See the Basic Concepts section for more
details about named resources.
Sub Call
Description
A Sub Call node starts the execution of one or more instances of one or more
SubFlows. See the Subroutine Execution paragraph in the Advanced Topics section.
Attributes
datasource The name of the datasource used to execute the sql statement
below. See the Basic Concepts section for more details.
ILOG JRule
Description
A JRule node is an agent process that invokes an instance of an ILOG JRules engine
and generates metaController events, which in turn are detected using Event Watch
nodes. The rule stored in the rule file returns a set of strings which will match the
node_id attributes of Event Watch nodes.
Attributes
agent The alias of the agent that runs this process.
packageName The name of the package where the rule is stored if ruletype
is Repository.
rulefile Full path name location of the ILOG rule file if ruletype is
RuleFile.
Event Watch
Description
An Event Watch process is an agent process that completes when an event for
node_id is detected on the agent. The node could also report an error condition if the
timeout interval has been exhausted and the event for node_id has not occurred.
Attributes
agent The alias of the agent that runs this process.
StageDirector
Description
A StageDirector node is a server process that sends a SOAP message to a
StageDirector application and waits until the StageDirector application completes,
at which point the node itself reports completion.
Attributes
application The StageDirector application to be invoked.
Example:
https://localhost:9001/cr/review.wsdl
The database identified by the datasource attribute must have a procedure named
tlm_sam_wait_5secs with one numeric input as argument.
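As a sketch only (only the procedure name and its single numeric argument are stated
above; the parameter name and body below are illustrative), such a procedure could be
created in Oracle as:

    -- illustrative implementation: pause for the requested number of seconds
    CREATE OR REPLACE PROCEDURE tlm_sam_wait_5secs (p_seconds IN NUMBER) AS
    BEGIN
      DBMS_LOCK.SLEEP(p_seconds);
    END;
    /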
Example 06 - JMS
Example 06 tests the JMSWATCH node.
You need to have WebLogic 6.1 installed and configured as a JMS provider in order
to run this example. Create a new Connection Factory called teamcQueue with JNDI
name com.traversesystems.teaQueueConnectionFactory. Create a new File Store
called teaJMSFileStore in directory MC_HOME. Create a new JMS server called
teaJMSServer using the teaJMSFileStore. In this server, create a new JMS Queue
(Destination) called teamcQueue with JNDI name com.traversesystems.teaQueue.
Make sure to deploy the Connection Factory and the JMS Server in the proper
WebLogic domain.
Using the metaController Admin utility set up a new Messaging Pool named
TEAMC. Set the values of the following properties as indicated.
User Id system
Password your_password_here
Pool Size 10
Save the flow to the repository. Both WLS teaQueue nodes ([100] and [200]) will
start execution, waiting for a message from WebLogic on queue teamcQueue, and
their color will be yellow. Run the jmsTest utility in your metaController installation to
deposit one message in the teamcQueue. You will notice that one of the
JMSWATCH nodes turns green and the execution of that flow continues. Examine
the execution log window to see the parameters created by the JMSWATCH node.
Examine table TLM_SAM_PARAMS for the records with test_id=15 to see the
exact values that the jmsTest utility put on the queue. Notice that the selector
attribute is randomly generated in the jmsTest program, resulting in only one flow
being executed at one time.
Save the flow to the repository. The MQ teamcQueue node will start execution,
waiting for a message from MQ on queue teamcQueue, and its color will be yellow.
Run the jmsTest utility in your metaController installation to deposit one message in
the teamcQueue. You will notice that the JMSWATCH node turns green and the
execution of the flow continues. Examine the execution log window to see the
parameters created by the JMSWATCH node. Examine the MQMessage.xml file in
your metaController bin directory to see the exact message that the jmsTest utility put
on the queue. Notice that the selector attribute is hard coded in the jmsTest program.
Job TestJob is executed in project meta-proj on the DataStage server using credentials
defined in the dsServer DataStage alias. The parameters passed to the job are hard
coded in the params attribute of the node. If the return code of the DataStage node is
1 or 2 (job completed with or without warnings) then node [3] is executed, otherwise
node [4] is executed (job encountered an error; see the execution log for more
details).
Based on the returned random number, one of nodes [72] and [74] is executed. When
the selected one completes, node [82] can be evaluated.
Example 31 - JRules
Example 31 exemplifies the operation of the JRule and Event Watch nodes. The
JRule node [11] evaluates the rule stored in the JRule file RuleFile.ilr and creates
events for each
result.addFileGroup(FlowName);
statement executed. In this particular case, an event is generated for each of the three
flows, FlowOne, FlowTwo, and FlowThree.
Based on the passed pid parameter, each subroutine computes additional parameters:
sub1 extracts values for parameters p11 and p12, and sub2 extracts values for
parameter p21. In turn, these parameters are passed to nodes [13] and [23]
respectively, which use test procedure tlm_sam_log to log the values into a sample
repository table. You can examine table TLM_SAM_LOG for these entries.
Use the Execution Tree to visualize the called subroutine flows.
When executed, the JRule node produces two events: one for the Product
Information flow (node [11]) and one for the Customer Information flow (node
[12]). Each such Event Watch node is followed by a SubCall node which invokes two
instances of the FILECOLLECTION subroutine flow. Node [102] in the
FILECOLLECTION routine is itself a SubCall node, which will call only one instance of
the FILEMOVE subroutine flow.
In order to be able to run the flow repeatedly, routine FILEMOVE actually makes a
copy of its input parameter file from the designated directory to the history directory
(instead of moving it).
Node [105] executes a stored procedure that specifies whether the file is in the
standard directory or in the alternate directory, based on the file name. If the stored
procedure returns 2 then node [106] is executed. This will overwrite the existing
SOURCE_DIR parameter with a new value. Otherwise node [106] is not executed and
node [107] will look for its file in the default directory.
When the StageDirector node is executed, it starts a new workflow instance for
StageDirector application Cube Review (cr), form main, version 1, in the
StageDirector installation. The external Cube Review application sends an e-mail
notification to a human reviewer who follows the link provided in the link attribute
to examine a particular document. The reviewer approves or rejects the document.
The fact data file arrives at a pre-determined location (node [12]), gets moved to a
staging area (node [13]), gets loaded into an Oracle table (node [14]), and an ETL
stored procedure is executed on it. In parallel, an Essbase OTL template is copied
from its source location to the work directory, and an Essbase rule file is watched.
When all three of the above flows complete, a dimension gets built in Essbase (node
[19]), then a HyperRoll Set node and a HyperRoll Calc node are executed in
sequence. At this step the cube is ready for a human review. A StageDirector flow is
invoked (node [22]) after which a HyperRoll Attach node is executed, which will
publish the cube.