OpenText™ Documentum™ Server CE 22.2
Fundamentals Guide
EDCCS220200-GGD-EN-01
Rev.: 2022-June-08
This documentation has been created for OpenText™ Documentum™ Server CE 22.2.
It is also valid for subsequent software releases unless OpenText has made newer documentation available with the product,
on an OpenText website, or by any other means.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544
International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty whether expressed or implied, for the
accuracy of this publication.
Table of Contents
1 Overview ................................................................................... 11
1.1 Managed content ............................................................................ 11
1.1.1 Elements of the content management system ................................... 12
1.2 Process management features ......................................................... 15
1.2.1 Workflows ....................................................................................... 15
1.2.2 Lifecycles ....................................................................................... 15
1.3 Distributed services ......................................................................... 16
1.4 Additional options ............................................................................ 16
1.4.1 Trusted Content Services ................................................................ 16
1.4.2 Content Services for EMC Centera ................................................... 16
1.4.3 Content Storage Services ................................................................ 17
1.4.4 XML Store and XQuery ................................................................... 17
1.5 Documentum products requiring activation on Documentum Server ... 18
1.5.1 Retention Policy Services ................................................................ 18
1.5.2 Documentum Collaborative Services ................................................ 18
1.6 Internationalization .......................................................................... 19
1.7 Communicating with Documentum Server ........................................ 19
1.7.1 Applications .................................................................................... 19
1.7.2 Interactive utilities ........................................................................... 20
3 Caching .................................................................................... 39
3.1 Object type caching ......................................................................... 39
3.1.1 Object types with names beginning with dm, dmr, and dmi ................. 39
3.1.2 Custom object types and types with names beginning with dmc ......... 39
3.2 Repository session caches .............................................................. 40
3.3 Consistency checking ...................................................................... 40
3.3.1 Determining if a consistency check is needed ................................... 41
3.3.2 Conducting consistency checks ....................................................... 42
4.6.2 Lifecycle states and default lifecycles for object types ........................ 63
4.6.3 Component specifications ................................................................ 64
4.6.4 Default values for properties ............................................................ 64
4.6.5 Value assistance ............................................................................. 64
4.6.6 Mapping information ........................................................................ 64
Most content is stored locally on personal computers, organized arbitrarily, and only
available to a single user. This means that valuable data is subject to loss, and
projects are subject to delay when people cannot get the information they need.
The best way to protect these important assets is to move them to a centralized
content management system.
“Data model” on page 45, provides a detailed description of the repository data
model.
A data dictionary describes each of the object types in the Documentum system. You
can create custom applications that query this information to automate processes
and enforce business rules. “Data dictionary” on page 59, gives more detail on
what information is available and how it might be used in your Documentum
implementation.
Documentum Server provides the connection to the outside world. When content is
added to the repository, Documentum Server parses the object metadata,
automatically generates additional information about the object, and puts a copy of
the content file into the file store. When stored as an object in the repository, there
are many ways that users can access and interact with the content.
“Concurrent access control” on page 124, provides more detail on access control
features of Documentum Server.
1.1.1.2 Versioning
Documentum Server maintains information about each version of a content object as
it is checked out and checked in to the repository. At any time, users can access
earlier versions of the content object to retrieve sections that have been removed or
branch to create a new content object.
A content object can belong to multiple virtual documents. When you change the
individual content object, the change appears in every virtual document that
contains that object.
You can assemble and publish all or part of a virtual document. You can integrate
the assembly and publishing services with popular commercial applications such as
Arbortext Editor. Assembly can be controlled dynamically with business rules and
data stored in the repository.
1.1.1.5 Security
Documentum Server provides security features to control access and automate
accountability.
“Security services” on page 77, provides information on all security options. The
OpenText Documentum Server Administration and Configuration Guide contains
information on user administration and working with ACLs.
1.1.1.5.2 Accountability
Documentum Server provides auditing and tracing facilities. Auditing keeps track of
specified operations and stores a record for each in the repository. Tracing provides
a record that you can use to troubleshoot problems when they occur.
1.2.1 Workflows
The Documentum Server workflow model lets you develop process and event-
oriented applications for content management. The model supports both automatic
and ad hoc workflows.
You can define workflows for individual documents, folders containing a group of
documents, and virtual documents. A workflow definition can include simple or
complex task sequences, including sequences with dependencies. Workflow and
event notifications are automatically issued through standard electronic mail
systems, while content remains under secure server control. Workflow definitions
are stored in the repository, allowing you to start multiple workflows based on one
workflow definition.
1.2.2 Lifecycles
Many documents within an enterprise have a recognizable lifecycle. A document is
created, often through a defined process of authoring and review, and then is used
and ultimately superseded or discarded.
Documentum Server lifecycle management services let you automate the stages of a
document's life. The stages in a lifecycle are defined in a policy object stored in the
repository. For each stage, you can define prerequisites to be met and actions to be
performed before an object can move into that particular stage.
“Lifecycles” on page 211, describes how lifecycles are implemented. The OpenText
Documentum Server System Object Reference Guide describes the object types that
support lifecycles.
“Document retention and deletion” on page 126, and “Setting content properties and
metadata for content-addressed storage” on page 134, provide more information on
CSEC. The OpenText Documentum Server Administration and Configuration Guide
contains information on content-addressed storage areas.
The CSS license also enables the content compression and content duplication
checking and prevention features. Content compression is an optional configuration
choice for file store and content-addressed storage areas. Content duplication
checking and prevention is an optional configuration choice for file store storage
areas.
The OpenText Documentum Platform and Platform Extensions Installation Guide and
OpenText Documentum XML Store Administration Guide contain more information
about XML Store and XQuery.
Retention Policy Services (RPS) automates the retention and disposition of content in
compliance with regulations, legal requirements, and best practice guidelines.
The product allows you to manage content retention in the repository through a
retention policy: a defined set of phases, with a formal disposition phase at the end.
RPS policies are created and managed using Retention Policy Services
Administrator, an administration tool that is similar to, but separate from,
Documentum Administrator.
“Document retention and deletion” on page 126, describes the various ways to
implement document retention, including retention policies, and how those policies
affect behaviors. “Virtual documents and retention policies” on page 156, describes
how applying a retention policy to a virtual document affects that document. The
OpenText Documentum Records Client Administration and User Guide contains complete
information about using Retention Policy Services Administrator.
1.6 Internationalization
Internationalization refers to the ability of Documentum Server to handle
communications and data transfer between itself and various client applications
independent of the character encoding they use.
Documentum Server runs internally with the UTF-8 encoding of Unicode. The
Unicode standard provides a unique number to identify every letter, number,
symbol, and character in every language.
The Unicode Consortium website contains more information about Unicode, UTF-8,
and national character sets. “Internationalization summary” on page 239, contains a
summary of Documentum Server internationalization requirements.
1.7.1 Applications
The Documentum system provides web-based and desktop client applications.
You can also write your own custom applications. Documentum Server supports all
the Documentum Application Programming Interfaces (APIs). The primary API is
the Documentum Foundation Classes (DFC). This API is a set of Java classes and
interfaces that provides full access to Documentum Server features. Applications
written in Java, Visual Basic (through OLE COM), C++ (through OLE COM), and
Docbasic can use the DFC. Docbasic is the proprietary programming language
Documentum Server uses.
The IDQL interactive utility in Documentum Administrator lets you execute DQL
statements directly. The utility is primarily useful as a testing arena for statements
that you want to add to an application. It is also useful when you want to execute a
quick ad hoc query against the repository.
Users or applications can have multiple sessions open at the same time with one or
more repositories. The number of sessions that can be established for a given user or
application is controlled by the dfc.session.max_count entry in the dfc.properties
file. The value of this entry is set to 1000 by default and can be reset.
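For example, the following dfc.properties entry raises the limit (the value shown is illustrative; the key name and default are as stated above):

dfc.session.max_count = 1500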
For a web application, all sessions started by the application are counted towards the
maximum.
In DFC, sessions are objects that implement a session, commonly the IDfSession
interface. Each session object gives a particular user access to a particular repository
and the objects in that repository.
Each session has a session identifier in the format Sn where n is an integer equal to
or greater than zero. This identifier is used in trace file entries, to identify the session
to which a particular entry applies. Session identifiers are not used or accepted in
DFC method calls.
Shared sessions can be used by more than one thread in an application. In web
applications, shared sessions are particularly useful because they allow multiple
components of the application to communicate. For example, a value entered in one
frame can affect a setting or field in another frame. Shared sessions also make the
most efficient use of resources.
Private sessions can be used only by the application thread that obtained the session.
Using private sessions is only recommended if the application or thread must retain
complete control of the session state for a specific transaction.
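The following minimal sketch shows how an application obtains a shared session and a private session through a DFC session manager. The repository name and credentials are illustrative; in DFC, getSession returns a shared session and newSession returns a private session, but confirm the details against the DFC Javadocs.

import com.documentum.com.DfClientX;
import com.documentum.fc.client.IDfClient;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSessionManager;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfLoginInfo;
import com.documentum.fc.common.IDfLoginInfo;

public class SessionExample {
    public static void main(String[] args) throws DfException {
        IDfClient client = new DfClientX().getLocalClient();
        IDfSessionManager sMgr = client.newSessionManager();

        // Register illustrative credentials for the repository
        IDfLoginInfo login = new DfLoginInfo();
        login.setUser("jsmith");
        login.setPassword("password");
        sMgr.setIdentity("repo1", login);

        // Shared session: may be used by more than one thread
        IDfSession shared = sMgr.getSession("repo1");
        try {
            System.out.println("Connected as " + shared.getLoginUserName());
        } finally {
            sMgr.release(shared); // returns the session to the connection pool
        }

        // Private session: used only by the thread that obtained it
        IDfSession priv = sMgr.newSession("repo1");
        try {
            // work that requires complete control of the session state
        } finally {
            sMgr.release(priv);
        }
    }
}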
Because some repositories have more than one Documentum Server and the servers
are often running on different host machines, DFC methods let you be specific when
requesting the connection. You can let the system choose which server to use or you
can identify a specific server by name or host machine or both.
During an explicit session, the tasks a user performs may require working with a
document or other object from another repository. When that situation occurs, DFC
seamlessly opens an implicit session for the user with the other repository. For
example, suppose you pass a reference to ObjectB from RepositoryB to a session
object representing a session with RepositoryA. In such cases, DFC will open an
implicit session with RepositoryB to perform the requested action on ObjectB.
Implicit sessions are managed by DFC and are invisible to the user and the
application. However, resource management is more efficient for explicit sessions
than for implicit sessions. Consequently, using explicit sessions, instead of relying
on implicit sessions, is recommended.
Both explicit and implicit sessions count towards the maximum number of allowed
sessions specified in the dfc.session.max_count configuration parameter.
The dfc.properties file is polled regularly to check for changes. The default polling interval is 30
seconds. The interval is configurable by setting a key in the dfc.properties file.
When DFC is initialized and a session is started, the information in this file is
propagated to the runtime configuration objects.
The client config object is created when DFC is initialized. The configuration values
in this object are derived primarily from the values recorded in the dfc.properties
file. Some of the properties in the client config object are also reflected in the server
config object.
The configuration values are applicable to all sessions started through that DFC
instance. The session config and the connection config objects represent individual
sessions with a repository. Each session has one session config object and one
connection config object. These objects are destroyed when the session is terminated.
The OpenText Documentum Server System Object Reference Guide lists the properties in
the configuration objects.
Objects obtained during a session are associated with the session and the session
manager under which the session was obtained. If you close a session and then
attempt to perform a repository operation on an object obtained during that session,
DFC opens an implicit session for the operation.
If a session was started with a single-use login ticket and that session times out, the
session cannot be automatically restarted by default because the login ticket cannot
be reused. To avoid this problem, an application can use resetPassword, an
IDfSession method. This method allows an application to provide either the actual
password for the user or another login ticket for the user. After the user connects
with the initial login ticket, the application can either:
• Generate a second ticket with a long validity period and then use resetPassword
to replace the single-use ticket
• Execute resetPassword to replace the single-use ticket with the actual password
of the user
Performing either option will make sure that the user is reconnected automatically if
the user session times out.
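A sketch of the first option follows, assuming an IDfSession named session that was obtained with a single-use ticket. The resetPassword method is named above; the getLoginTicketEx argument list shown is an assumption to verify against the IDfSession Javadocs.

// Generate a longer-lived, multi-use ticket and substitute it for the
// single-use ticket so that DFC can reconnect the session automatically.
String longLivedTicket = session.getLoginTicketEx(
        null,      // user name; null is assumed to mean the current user
        "global",  // scope (assumed value)
        480,       // validity period in minutes (assumed meaning)
        false,     // not single-use (assumed meaning)
        null);     // no specific server (assumed meaning)
session.resetPassword(longLivedTicket);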
Connection brokers do not request information from Documentum Servers; the
servers regularly broadcast their connection information to the brokers. The
connection brokers to which a server broadcasts are specified in the server config
object of that server.
Which connection brokers a client can communicate with is configured in the
dfc.properties file used by the client. You can define primary and backup connection
brokers in the file. Doing so ensures that users rarely encounter a situation in
which they cannot obtain a connection to a repository.
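For example, dfc.properties entries such as the following define a primary and a backup connection broker (host names are illustrative; 1489 is the standard connection broker port):

dfc.docbroker.host[0] = broker1.example.com
dfc.docbroker.port[0] = 1489
dfc.docbroker.host[1] = broker2.example.com
dfc.docbroker.port[1] = 1489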
An application can also set the connection broker programmatically. This allows the
application to use a connection broker that may not be included in the connection
brokers specified in the dfc.properties file. The application must set the
connection broker information before requesting a connection to a repository.
Similarly, all client sessions, by default, request a native connection, but can be
configured in the same way as Documentum Server and connection brokers.
To request a secure connection, the client application must have the appropriate
value set in the dfc.properties file or must explicitly request a secure connection
when a session is requested. The security mode requested for the session is defined
in the IDfLoginInfo object used by the session manager to obtain the session.
The security mode requested by the client interacts with the connection type
configured for the server and connection broker to determine whether the session
request succeeds and what type of connection is established.
The interaction between the Documentum Server setting and the client request is
described in the associated Javadocs, in the description of the
IDfLoginInfo.setSecurityMode method.
Whenever a session is released or disconnected, DFC puts the session into the
connection pool. This pool is divided into two levels. The first level is a
homogeneous pool. When a session is in the homogeneous pool, it can be reused
only by the same user. If, after a specified interval, the user has not reclaimed the
session, the session is moved to the heterogeneous pool (level-2 pool). From that
pool, the session can be claimed by any user.
When a session is claimed from the heterogeneous pool by a new user, DFC
automatically resets any security and cache-related information as needed for the
new user. DFC also resets the error message stack and rolls back any open transactions.
To obtain the best performance and resource management from connection pooling,
connection pooling must be enabled through the dfc.properties file. If connection
pooling is not enabled through the dfc.properties file, DFC only uses the
homogeneous pool. The session is held in that pool for a longer period of time, and
does not use the heterogeneous pool. If the user does not reclaim the session from
the homogeneous pool, the session is terminated.
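For example, an entry such as the following is typically used to enable connection pooling (the key name is an assumption to confirm against the DFC documentation):

dfc.session.pool.enable = true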
When connection pooling is simulated using an assume method, the session is not
placed into the connection pool. Instead, ownership of the repository session passes
from one user to another by executing the assume method within the application.
When an assume method is issued, the system authenticates the requested new user.
If the user passes authentication, the system resets the security and cache
information for the session as needed. It also resets the error message stack.
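A minimal sketch of simulated pooling follows, assuming an existing IDfSession named session and illustrative credentials for the new user:

IDfLoginInfo newUser = new DfLoginInfo();
newUser.setUser("pwilliams");
newUser.setPassword("password2");
// Ownership of the repository session passes to the new user; the
// server authenticates that user and resets security and cache
// information for the session.
session.assume(newUser);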
Each login ticket has a scope that defines who can use the ticket and how many
times the ticket can be used. By default, login tickets may be used multiple times.
However, you can create a ticket configured for only one use. If a ticket is configured
for just one use, the ticket must be used by the issuing server or another designated
server.
Login tickets are generated in a repository session, at runtime, using one of the
getLoginTicket methods from the IDfSession interface.
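For example, assuming an IDfSession named session (the no-argument form returns a ticket for the current user; the extended form and its argument meanings are assumptions to verify against the Javadocs):

// Ticket for the current user with default scope and validity
String ticket = session.getLoginTicket();

// Ticket with explicit characteristics: user, scope, validity in
// minutes, single-use flag, and designated server (meanings assumed)
String scopedTicket = session.getLoginTicketEx(null, "docbase", 20, true, null);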
The ASCII-encoded string is comprised of two parts: a set of values describing the
ticket and a signature generated from those values. The values describing the ticket
include information such as when the ticket was created, the repository in which it
was created, and who created the ticket. The signature is generated using the login
ticket key installed in the repository.
The scope of a login ticket defines which Documentum Servers accept the login
ticket. When you generate a login ticket, you can define its scope as:
• A single server (only the issuing or another designated server accepts the ticket)
• A single repository (any server of that repository accepts the ticket)
• Global (any server of any repository that trusts the issuing repository accepts the ticket)
A login ticket that can be accepted by any server of a trusted repository is called a
global login ticket. An application can use a global login ticket to connect to a
repository that differs from the ticket issuing repository if:
• The login ticket key (LTK) in the receiving repository is identical to the LTK in
the repository in which the global ticket was generated
• The receiving repository trusts the repository in which the ticket was generated
“Trusting and trusted repositories” on page 34, describes how trusted repositories
are defined and identified.
Login ticket keys are used with login tickets and application access tokens.
Login ticket keys are used to generate the Documentum Server signatures that are
part of a login ticket or application access token. If you want to use login tickets
across repositories, the repository from which a ticket was issued and the repository
receiving the ticket must have identical login ticket keys. When a Documentum
Server receives a login ticket, it decodes the string and uses its login ticket key to
verify the signature. If the LTK used to verify the signature is not identical to the key
used to generate the signature, the verification fails.
Documentum Server supports two administration methods that allow you to export
a login ticket key from one repository and import it into another repository. The
methods are EXPORT_TICKET_KEY and IMPORT_TICKET_KEY. These methods
are also available as DFC methods in the IDfSession interface.
It is also possible to reset a repository LTK if needed. Resetting a key removes the
old key and generates a new key for the repository.
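A sketch of copying an LTK between repositories through the IDfSession methods mentioned above might look like the following; the method names and signatures are assumptions to confirm in the Javadocs, and such operations typically require superuser privileges:

// In a session with the source repository
String ticketKey = sourceSession.exportTicketKey();

// In a session with the target repository
targetSession.importTicketKey(ticketKey);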
For example, suppose you configure a server so that login tickets created by that
server expire by default after 10 minutes and set the maximum validity period to 60
minutes. Now suppose that an application creates a login ticket while connected to
that server and sets the ticket validity period to 20 minutes. The value set by the
application overrides the default, and the ticket is valid for 20 minutes. If the
application attempts to set the ticket validity period to 120 minutes, the 120 minutes
is ignored and the login ticket is created with a validity period of 60 minutes.
If an application creates a ticket and does not specify a validity period, the default
period is applied to the ticket.
When a login ticket is generated, both its creation time and expiration time are
recorded as UTC time. This ensures that problems do not arise from tickets used
across time zones.
When a ticket is sent to a server other than the server that generated the ticket, the
receiving server tolerates up to a three-minute difference in time. That is, if the ticket
is received within three minutes after its expiration time, the ticket is still considered
valid. This three-minute difference allows for minor differences in machine clock time
across host machines. However, it is the responsibility of the system administrators
to ensure that the machine clocks on host machines running applications and
repositories are set as closely as possible to the correct time.
This feature adds more flexibility to the use of login tickets by allowing you to create
login tickets that may be valid in some repositories and invalid in other repositories.
A ticket may be unexpired but still be invalid in a particular repository if that
repository has login_ticket_cutoff set to a date and time prior to the ticket creation
date.
To ensure that sort of security breach cannot occur, you can restrict superusers from
using a global login ticket to connect to a server.
Application access control (AAC) tokens are encoded strings that may accompany
connection requests from applications. The information in a token defines
constraints on the connection request. If a Documentum Server is configured to use
AAC tokens, any connection request received by that server from a non-superuser
must be accompanied by a valid token and the connection request must comply with
the constraints in the token.
If you configure a Documentum Server to use AAC tokens, you can control:
• Which users or groups of users can connect to the server
• The application that must be used to make the connection
• The host machine from which the connection request can originate
These constraints can be combined. For example, you can configure a token that only
allows members of a particular group using a particular application from a specified
host to connect to a server.
Application access control tokens are ignored if the user requesting a connection is a
superuser. A superuser can connect without a token to a server that requires a token.
If a token is provided, it is ignored.
When you create a token, you use arguments on the command line to define the
constraints that you want to apply to the token. The constraints define who can use
the token and in what circumstances. For example, if you identify a particular group
in the arguments, only members of that group can use the token. Or, you can set an
argument to constrain the token use to the host machine on which the token was
generated. If you want to restrict the token to use by a particular application, you
supply an application ID string when you generate the token, and any application
using the token must provide a matching string in its connection request. All of the
constraint parameters you specify when you create the token are encoded into the
token.
When an application issues a connection request to a server that requires a token, the
application may generate a token at runtime or it may rely on the client library to
append an appropriate token to the request. The client library also appends a host
machine identifier to the request.
If you want to constrain the use to a particular host machine, you must also set
the dfc.machine.id key in the dfc.properties file used by the client on that
host machine.
If the receiving server does not require a token or the user is a superuser, the server
ignores any token, application ID, and host machine ID accompanying the request
and processes the request as usual.
If the receiving server requires a token, the server decodes the token and determines
whether the constraints are satisfied. If the constraints are satisfied, the server allows
the connection. If not, the server rejects the connection request.
The ASCII-encoded string is comprised of two parts: a set of values describing the
token and a signature generated from those values. The values describing the token
include such information as when the token was created, the repository in which it
was created, and who created the token. (For troubleshooting purposes, DFC has the
IDfClient.getApplicationTokenDiagnostics method, which returns the encoded
values in readable text format.) The signature is generated using the repository login
ticket key.
If the scope of a token is a single repository, then the token is only accepted by
Documentum Servers of that repository. The application using the token can send its
connection request to any of the repository servers.
A global token can be used across repositories. An application can use a global token
to connect to a repository other than the repository in which the token was
generated, if:
• The target repository is using the same login ticket key (LTK) as the repository in
which the global token was generated
• The target repository trusts the repository in which the token was generated
Repositories that accept tokens generated in other repositories must trust these
other repositories.
“Login ticket key” on page 29, describes the login ticket key. “Trusting and trusted
repositories” on page 34, describes how trust is determined between repositories.
To generate tokens for storage and later retrieval, use the dmtkgen utility. This
option is useful if you want to place a token on a host machine outside a firewall so
that users connecting from that machine are restricted to a particular application. It
is also useful for backwards compatibility. You can use stored tokens retrieved by
DFC to ensure that methods or applications written prior to version 5.3 can connect
to servers that now require a token.
The dmtkgen utility generates an XML file that contains a token. The file is stored in
a location identified by the dfc.tokenstorage_dir key in the dfc.properties file.
Token use is enabled by the dfc.tokenstorage.enable key. If use is enabled, a token can
be retrieved and appended to a connection request by DFC when needed.
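For example (the directory path is illustrative; the key names are as given in this section):

dfc.tokenstorage.enable = true
dfc.tokenstorage_dir = C:/Documentum/tokens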
Application access control tokens are valid for a given period of time. The period
may be defined when the token is generated. If not defined at that time, the period
defaults to one year, expressed in minutes. Unlike login tickets, you cannot
configure a default or maximum validity period for an application access token.
You can avoid the failure by setting up and enabling token retrieval by the DFC on
the host on which the method is executed. Token retrieval allows the DFC to append
a token retrieved from storage to the connection request. The token must be
generated by the dmtkgen utility and must be a valid token for the connection
request.
• You cannot perform any operation on a remote object if the operation results in
an update in the remote repository.
Opening an explicit transaction starts the transaction only for the current
repository. If you issue a method in the transaction that references a remote
object, work performed in the remote repository by the method is not under the
control of the explicit transaction. This means that if you abort the transaction,
the work performed in the remote repository is not rolled back.
• You cannot perform any of the following methods that manage objects in a
lifecycle: attach, promote, demote, suspend, and resume.
• You cannot issue a complete method for an activity if the activity is using XPath
to route a case condition to define the transition to the next activity.
• You cannot execute an IDfSysObject.assemble method that includes the
interruptFreq argument.
• You cannot use DFC methods in the transaction if you opened the transaction
with the DQL BEGIN[TRAN] statement.
If you want to use DFC methods in an explicit transaction, open the transaction
with a DFC method.
• You cannot execute dump and load operations inside an explicit transaction.
• You cannot issue a CREATE TYPE statement in an explicit transaction.
• You cannot issue an ALTER TYPE statement in an explicit transaction, unless the
ALTER TYPE statement lengthens a string property.
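The following minimal sketch opens an explicit transaction with DFC methods rather than the DQL BEGIN TRAN statement, assuming an IDfSession named session; the object qualification is illustrative:

boolean txOpen = false;
try {
    session.beginTrans();   // open the explicit transaction
    txOpen = true;
    IDfSysObject doc = (IDfSysObject) session.getObjectByQualification(
            "dm_document where object_name = 'Test Doc'");
    doc.setSubject("Updated inside an explicit transaction");
    doc.save();
    session.commitTrans();  // make the work permanent
    txOpen = false;
} finally {
    if (txOpen) {
        session.abortTrans();  // roll back if the commit was not reached
    }
}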
To put a database lock on an object, use the lockEx(true) method (in the
IDfPersistentObject interface). A superuser can lock any object with a database-level
lock. Other users must have at least Version permission on an object to place a
database lock on the object.
After an object is physically locked, the application can modify the properties or
content of the object. It is not necessary to issue a checkout method unless you want
to version the object. If you want to version an object, you must also check out the
object.
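For example, assuming an IDfSysObject named obj that has already been fetched (lockEx is named above; the property set here is illustrative):

// Place a database-level lock, then update in place without versioning
obj.lockEx(true);
obj.setString("subject", "Reviewed");
obj.save();

// To create a new version instead, check the object out and in
obj.checkout();
obj.checkin(false, null);  // false: do not keep the lock; null: default version label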
• A query that bypasses full-text search and reads data from a table through an
index while another connection holds a lock on the data because it is updating
the index. When full-text search is enabled, properties are indexed and the table
is not queried.
• Two connections are waiting for locks being held by each other.
When deadlock occurs, Documentum Server executes internal deadlock retry logic.
The deadlock retry logic tries to execute the operations in the victim transaction up
to 10 times. If an error such as a version mismatch occurs during the retries, the
retries are stopped and all errors are reported. If the retry succeeds, an informational
message is reported.
Documentum Server provides a computed property that you can use in applications
to test for deadlock. The property is _isdeadlocked. This is a Boolean property that
returns TRUE if the repository session is deadlocked.
3.1.1 Object types with names beginning with dm, dmr, and
dmi
These object types are built-in types in a Documentum Server installation. Their type
definitions are relatively static. There are few changes that can be made to the
definition of a built-in type. For these types, the mechanism is an internal checking
process that periodically checks all the object type definitions in the Documentum
Server global cache. If any definitions are out-of-date, the process flushes the cache
and reloads the type definitions into the global cache. Changes to these types are not
visible to existing sessions because the DFC caches are not updated when the global
cache is refreshed.
Stopping and restarting a session makes any changes in the global cache visible. If
the session was a web-based client session, the web application server must be
restarted.
The interval at which the process runs is configurable by changing the setting of the
database_refresh_interval key in the server.ini file.
For these types, the DFC shared cache is updated regularly, at intervals defined by
the dfc.cache.type.currency_check_interval key in the dfc.properties file. That key
defaults to 300 seconds (5 minutes). It can be reset using Documentum
Administrator.
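For example, to check type cache currency every two minutes instead of the default five (the key name is as given above; the value is in seconds):

dfc.cache.type.currency_check_interval = 120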
Additionally, when requested in a fetch method, DFC checks the consistency of its
cached version against the server global cache. If the versions in the caches are found
to be mismatched, the object type definition is updated appropriately. If the server
cache is more current, the DFC caches are updated. If the DFC has a more current
version, the server cache is updated.
This mechanism ensures that a user who makes the change sees that change
immediately and other users in other sessions see it shortly thereafter. Stopping and
restarting a session or the web application server is not required to see changes
made to these objects.
• Object cache
An in-memory object cache is maintained for each repository session for the
duration of the repository session.
• Data dictionary caches
In conjunction with the object cache, DFC maintains a data dictionary cache. The
data dictionary cache is a shared cache, shared by all sessions in a multi-threaded
application. When an object is fetched, the DFC also fetches and caches in
memory the object's associated data dictionary objects if they are not already in the
cache.
The consistency check rule can be a keyword, an integer value, or the name of a
cache config object. Using a cache config object to group cached data has several
benefits.
The consistency checking process described in this section is applied to all objects in
the in-memory cache, regardless of whether the object is persistently cached or not.
“Determining if a consistency check is needed” on page 41, describes how the DFC
determines whether a check is needed. “Conducting consistency checks”
on page 42, describes how the check is conducted.
When a method defines a consistency check rule by naming a cache config object,
DFC first checks whether it has information about the cache config object in its
memory. If it does not, it issues a CHECK_CACHE_CONFIG administration method
to obtain the information. If it has information about the cache config object, DFC
must determine whether the information is current before using that information to
decide whether to perform a consistency check on the cached data.
To determine whether the cache config information is current, the DFC compares the
stored client_check_interval value to the timestamp on the information. If the
interval has expired, the information is considered out of date and DFC executes
another CHECK_CACHE_CONFIG method to ask Documentum Server to provide
current information about the cache config object. If the interval has not expired,
DFC uses the information that it has in memory.
After the DFC has current information about the cache config object, it determines
whether the cached data is valid. To determine that, the DFC compares the
timestamp on the cached data against the r_last_changed_date property value in the
cache config object. If the timestamp is later than the r_last_changed_date value, the
cached data is considered usable and no consistency check is performed. If the
timestamp is earlier than the r_last_changed_date value, a consistency check is
performed on the data.
DFC does not perform consistency checks on cached query results. If the cached
results are out of date, Documentum Server re-executes the query and replaces the
cached results with the newly generated results.
If a fetch method does not include an explicit value for the argument defining a
consistency check rule, the default is check_always. That means that DFC checks the
i_vstamp value of the in-memory object against the i_vstamp value of the object in
the repository. For cached query results, the default consistency rule is check_never,
which means that DFC uses the cached query results without checking them.
An object type represents a class of objects. The definition of an object type consists
of a set of properties, whose values describe individual objects of the type. Object
types are similar to templates. When you create an object in a repository, you
identify which type of object you want to create. Documentum Server uses the type
definition as a template to create the object, and then sets the properties for the
object to values specific to that object instance.
Most Documentum object types exist in a hierarchy. Within the hierarchy, an object
type is a supertype or a subtype or both. A supertype is an object type that is the
basis for another object type, called a subtype. The subtype inherits all the properties
of the supertype. The subtype also has the properties defined specifically for it. For
example, the dm_folder type is a subtype of dm_sysobject. It has all the properties
defined for dm_sysobject plus two defined specifically for dm_folder.
A type can be both a supertype and a subtype. For example, dm_folder is a subtype
of dm_sysobject and a supertype of dm_cabinet.
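As an illustration, a custom subtype is created with the DQL CREATE TYPE statement; the sketch below issues one through DFC, assuming an IDfSession named session with sufficient privileges (the type and property names are illustrative):

IDfQuery query = new DfQuery();
query.setDQL("CREATE TYPE \"customer_letter\" "
        + "(\"customer_name\" string(64)) "
        + "WITH SUPERTYPE \"dm_document\"");
IDfCollection result = query.execute(session, IDfQuery.DF_QUERY);
result.close();  // always close collections when finished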
Most object types are persistent. When a user creates an object of a persistent type,
the object is stored in the repository and persists across sessions. A document that a
user creates and saves one day is stored in the repository and available in another
session on another day. The definitions of persistent object types are stored in the
repository as objects of type dm_type and dmi_type_info.
There are some object types that are not persistent. Objects of these types are created
at runtime when they are needed. For example, collection objects and query result
objects are not persistent. They are used at runtime to return the results of DQL
statements. When the underlying RDBMS returns rows for a SELECT statement,
Documentum Server places each returned row in a query result object and then
associates the set of query result objects with a collection object. Neither the
collection object nor the query result objects are stored in the repository. When you
close the collection, after all query result objects are retrieved, both the collection and
the query result objects are destroyed.
Lightweight and shareable object types are additional types added to Documentum
Server to solve common problems with large content stores. Specifically, these types
can increase the rate of object ingestion into a repository and can reduce the object
storage requirements.
“Storing lightweight subtype instances” on page 55, describes how lightweight and
shareable types are associated within the underlying database tables.
Lightweight objects are useful if you have a large number of properties that are
identical for a group of objects. This redundant information can be shared among
the LWSOs from a single copy of the shared parent object. For example, Enterprise
A-Plus Financial Services receives many payment checks each day. They record the
images of the checks and store the payment information in SysObjects. They will
retain this information for several years and then delete it. For their purposes, all
objects created on the same day can use a single ACL, retention information,
creation date, version, and other properties. That information is held by the shared
parent object. The LWSO has information about the specific transaction.
• Lightweight types take up less space in the underlying database tables than a
standard subtype.
• Importing lightweight objects into a repository is faster than importing standard
SysObjects.
• dm, which represents object types that are commonly used and visible to users
and applications.
• dmr, which represents object types that are generally read only.
• dmi, which represents object types that are used internally by Documentum
Server and Documentum client products.
• dmc, which represents object types installed to support a Documentum client
application. They are typically installed by a script when Documentum Server is
installed or when the client product is installed.
The use of “dm” as the first two characters in an object type name is reserved for
Documentum products.
The OpenText Documentum Server System Object Reference Guide has information on
the rules for naming user-defined object types and properties, and a description of
the dm_lightweight object type.
The content associated with an object is either primary content or a rendition of the
primary content. All primary content for any one object must have the same file
format. The renditions can be in any format. A rendition of a document is a
content file that differs from the source document content file only in its format.
If you want to create a document that has primary content in a variety of formats,
you must use a virtual document. Virtual documents are a hierarchical structure of
component documents that can be published as a single document. The component
documents can have different file formats.
4.2 Properties
Properties are the fields that comprise an object definition. The values in those fields
describe individual instances of the object type. When an object is created, its
properties are set to values that describe that particular instance of the object type.
For example, two properties of the document object type are title and subject. When
you create a document, you provide values for the title and subject properties that
are specific to that document.
An object type's persistent properties include not only the properties defined for the
type, but also those that the type inherits from its supertype. If the type is a
lightweight object type, its persistent properties also include those it shares with its
sharing type.
Many object types also have associated computed properties. Computed properties
are nonpersistent. Their values are computed at runtime when a user requests the
property value and lost when the user closes the session.
“Objects and object types” on page 45, explains supertypes and inheritance. The
OpenText Documentum Server System Object Reference Guide contains information
about persistent and computed properties.
4.2.1.3 Datatype
All properties have a datatype that determines what kind of values can be stored in
the property. For example, a property with an integer datatype can only store whole
numbers. A property's datatype is specified when the object type for which the
property is defined is created.
The OpenText Documentum Server System Object Reference Guide contains complete
information about valid datatypes and the limits and defaults for each datatype.
User-defined properties are read and write by default. Only superusers can add a
read-only property to an object type.
Both qualifiable and nonqualifiable properties can be full-text indexed, and both can
be referenced in the selected values list of a query statement. Similar to qualifiable
properties, selected nonqualifiable properties are returned by a query as a column in
a query result object. However, nonqualifiable properties cannot be referenced in an
expression in a qualification (such as in a WHERE clause) in a query unless the
query is a full-text DQL query.
The attr_restriction property in the dm_type object identifies the type's properties as
either qualifiable or nonqualifiable.
Object replication creates replica objects, copies of objects that have been replicated
between repositories. When users change a global property in a replica, the change
actually affects the source object property. Documentum Server automatically
refreshes all the replicas of the object containing the property. When a repository
participates in a federation, changes to global properties in users and groups are
propagated to all member repositories if the change is made through the governing
repository using Documentum Administrator.
The identifier is an integer value stored in the attr_identifier property of the type's
dm_type object. When a property is stored in a property bag, its identifier is stored
as a base64-encoded string in place of the property name.
“Property bag” on page 51, describes the property bag. The OpenText Documentum
Server System Object Reference Guide contains more information about property
identifiers.
You can store both single-valued and repeating property values in a property bag.
4.2.2.1 Implementation
The property bag is implemented in a repository as the i_property_bag property.
The i_property_bag property is part of the dm_sysobject type definition by default.
Consequently, each subtype of dm_sysobject inherits this property. That means that
you can define a subtype of dm_sysobject, or of one of its subtypes, that includes a
nonqualifiable property without specifically naming the i_property_bag property in
the subtype definition.
The i_property_bag property is also used to store aspect properties if the properties
are optimized for fetching. Consequently, the object type definitions of object
instances associated with the aspect must include the i_property_bag property. In
this situation, you must explicitly add the property bag to the object type before
associating its instances with the aspect.
It is also possible to explicitly add the property bag to an object type using an
ALTER TYPE statement.
“Aspects” on page 73, describes aspects and aspect properties. The OpenText
Documentum Server System Object Reference Guide contains the reference description
for the property bag property. The OpenText Documentum Server DQL Reference Guide
contains information on how to alter a type to add a property bag.
4.3 Repositories
A repository is where persistent objects managed by Documentum Server are stored.
A repository stores the object metadata and, sometimes, content files. A
Documentum system installation can have multiple repositories. Each repository is
uniquely identified by a repository ID, and each object stored in the repository is
identified by a unique object ID.
In the _r tables, there is a separate row for each value in a repeating property. For
example, suppose a subtype called recipe has one repeating property, ingredients. A
recipe object that has five values in the ingredients property will have five rows in
the recipe_r table, one row for each ingredient, as shown in the following table:
r_object_id ingredients
... 4 eggs
... 1 lb. cream cheese
... 2 t vanilla
... 1 c sugar
... 2 T grated orange peel
The r_object_id value for each row identifies the recipe that contains these five
ingredients.
If a type has two or more repeating properties, the number of rows in the _r table for
each object is equal to the number of values in the repeating property that has the
most values. The columns for repeating properties having fewer values are filled in
with NULLs.
For example, suppose the recipe type has four repeating properties: authors,
ingredients, testers, and ratings. One particular recipe has one author, four
ingredients, and three testers. For this recipe, the ingredients property has the
largest number of values, so this recipe object has four rows in the recipe_r table.
The server fills out the columns for repeating properties that contain a smaller
number of values with NULLs.
Even an object with no values assigned to any of its repeating properties has at least
one row in its type's _r table. The row contains a NULL value for each of the repeating
properties. If the object is a SysObject or SysObject subtype, it has a minimum of two
rows in its type's _r table because its r_version_label property has at least one value:
its implicit version label.
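In DFC, repeating property values are read by index. A sketch such as the following, assuming a fetched object obj of the recipe type described above, iterates the ingredients values that the _r table stores one row per value:

int count = obj.getValueCount("ingredients");
for (int i = 0; i < count; i++) {
    System.out.println(obj.getRepeatingString("ingredients", i));
}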
When a lightweight object shares a parent object with other lightweight objects, the
lightweight object is unmaterialized. All the unmaterialized lightweight objects
share the properties of the shared parent, so, in effect, the lightweight objects all
have identical values for the properties in the shared parent. This situation can
change if some operation needs to change a parent property for one of (or a subset
of) the lightweight objects. Since the parent is shared, the change in a property
would affect all the children. If the change only affects one child, that child object
has to have its own copy of the parent. When a lightweight object has its own
private copy of a parent, the object is materialized. Documentum Server creates rows
in the tables of the shared type for the object, copying the values of the shared
properties into those rows. The lightweight object no longer shares the property
values with the instance of the shared type, but with its own private copy of that
shared object.
When, or if, a lightweight object instance is materialized depends on the object type
definition. You can define a lightweight type such that instances are materialized
automatically when certain operations occur, only on request, or never.
The following is an example of how lightweight objects are stored and how
materialization changes the underlying database records. Note that this example
only uses the _s tables to illustrate the implementation. The implementation is
similar for _r tables.
Suppose a shareable type, customer record, and a lightweight type, order record,
exist in a repository. The customer record type includes properties such as
cust_city string(32), cust_state string(2), cust_phone string(24), and
cust_email string(100).
Instances of the order record type will share the values of instances of the customer
record object type. By default, the order record instances are unmaterialized. The
following figure shows how the unmaterialized lightweight instances are
represented in the database tables:
The order record instances represented by objID_2 and objID_3 share the property
values of the customer record instance represented by objID_B. Similarly, the order
record object instance represented by objID_5 shares the property values of the
customer record object instance represented by objID_Z. The i_sharing_type
property for the parent, or shared, rows in customer_record are set to reflect the fact
that those rows are shared.
Because the order record object type is defined for automatic materialization, certain
operations on an instance will materialize the instance. This does not create a new
order record instance, but instead creates a new row in the customer record table
that is specific to the materialized order record instance. Figure 4-3 illustrates how a
materialized instance is represented in the database tables.
Materializing the order record instances created new rows in the customer_record_s
table, one row for each order record object, and additional rows in each supertype
table in the type hierarchy. The object ID of each customer record object representing
a materialized order record object is set to the object ID of the order record object it
represents, to associate the row with the order record object. Additionally, the
i_sharing_type property of the previously shared customer record object is updated.
In the order record objects, the i_sharing_parent property is reset to the object ID of
the order record object itself.
The OpenText Documentum Server System Object Reference Guide contains information
about the identifiers recognized by Documentum Server.
On some databases, you can change the defaults when you create the repository. By
setting parameters in the server.ini file before the initialization file is read during
repository creation, you can define:
• The tablespaces in which the object type tables are created
• The size of the extents allotted to categories of object types or to specific object types
You can define tablespaces for the object type tables based on categories of size or
for specific object types. For example, you can define separate tablespaces for the
object types categorized as large and another space for those categorized as small.
(The category designations are based on the number of objects of the type expected
to be included in the repository.) Or, you can define a separate tablespace for the
SysObject type and a different space for the user object type.
Additionally, you can change the size of the extent allotted to categories of object
types or to specific object types.
The OpenText Documentum Platform and Platform Extensions Installation Guide contains
instructions for changing the default location and extents of object type tables and
the locations of the index tables.
By default, when you create a repository, the system puts the type index tables in the
same tablespace as the object type tables. On certain platforms, you can define an
alternative location for the indexes during repository creation. Or, after the indexes
are created, you can move them manually using the MOVE_INDEX administration
method.
You can create additional indexes using the MAKE_INDEX administration method.
Using MAKE_INDEX is recommended instead of creating indexes through the
RDBMS server because Documentum Server uses the dmi_index table to determine
which properties are indexed. The MAKE_INDEX method allows you to define the
location of the new index.
Registered tables are RDBMS tables that are not part of the repository but are known
to Documentum Server. They are created by the DQL REGISTER statement and
automatically linked to the System cabinet in the repository. They are represented in
the repository by objects of type dm_registered.
After an RDBMS table is registered with the server, you can use DQL statements to
query the information in the table or to add information to the table.
The OpenText Documentum Server DQL Reference Guide contains information about
the REGISTER statement and querying registered tables.
The data dictionary is primarily for the use of client applications. Documentum
Server stores and maintains the data dictionary information but only uses a small
part: the default property values and the ignore_immutable values. The remainder of
the information is for the use of client applications and users.
Documentum provides a default set of data dictionary information for each of the
following locales:
• English
• French
• Italian
• Spanish
• German
• Japanese
• Korean
By default, when Documentum Server is installed, the data dictionary file for one of
the locales is also installed. The installation procedure determines which of the
default locales is most appropriate and installs that locale. The locale is identified in
the dd_locales property of the dm_docbase_config object.
The data dictionary support for multiple locales lets you store a variety of text
strings in the languages associated with the installed locales. For each locale, you can
store labels for object types and properties, some help text, and error messages.
After installation, you can:
• Install additional locales from the set of default locales provided with
Documentum Server or install custom locales
• Modify the information in an installed locale by adding to the information,
deleting the information, or changing the information
Some data dictionary information can be set using a text file that is read into the
dictionary. You can also set data dictionary information when an object type is
created or afterwards, using the ALTER TYPE statement.
For example, if a site has German and English locales installed, there will be two dd
type info objects for each object type: one for the German locale and one for the
English locale. Similarly, there will be two dd attr info objects for each property: one
for the German locale and one for the English locale. However, there will be only
one dd common info object for each object type and property because that object
stores the information that is common across all locales.
Applications query the dd common info, dd type info, and dd attr info objects to
retrieve and use data dictionary information. The OpenText Documentum Server
Administration and Configuration Guide contains information about publishing the
data dictionary.
Using DQL lets you obtain multiple data dictionary values in one query. However,
the queries are run against the current dmi_dd_type_info, dmi_dd_attr_info, and
dmi_dd_common_info objects. Consequently, a DQL query may not return the most
current data dictionary information if there are unpublished changes in the
information.
Neither DQL nor DFC queries return data dictionary information about new object
types or added properties until that information is published, either through an
explicit publishDataDictionary method (in the IDfSession interface) or through the
scheduled execution of the Data Dictionary Publisher job.
If you want to retrieve information for the locale that is the best match for the
current client session locale, use the DM_SESSION_DD_LOCALE keyword in the
query. For example:
SELECT "label_text" FROM "dmi_dd_attr_info"
WHERE "type_name"='dm_document' AND "nls_key"=DM_SESSION_DD_LOCALE
To ensure the query returns current data dictionary information, examine the
resync_needed property. If that property is TRUE, the information is not current and
you can republish before executing the query.
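For example, a query along the following lines (a sketch using the objects and property described above) reports which dm_document properties have unpublished data dictionary changes:

SELECT "attr_name", "resync_needed" FROM "dmi_dd_attr_info"
WHERE "type_name"='dm_document' AND "resync_needed"=TRUE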
The OpenText Documentum Server DQL Reference Guide provides a full description of
the DM_SESSION_DD_LOCALE keyword.
4.6.1 Constraints
A constraint is a restriction applied to one or more property values for an instance of
an object type. Documentum Server does not enforce constraints. The client
application must enforce the constraint, using the constraint data dictionary
definition. You can provide an error message as part of the constraint definition for
the client application to display or log when the constraint is violated.
You can define a Check constraint in the data dictionary. Check constraints are most
often used to provide data validation. You provide an expression or routine in the
constraint definition that the client application can run to validate a given property
value.
You can define a check constraint at either the object type or property level. If the
constraint expression or routine references multiple properties, you must define the
constraint at the type level. If it references a single property, you can define the
constraint at either the property or type level.
You can define check constraints that apply only when objects of the type are in a
particular lifecycle state.
You can identify a default lifecycle for an object type and store that information in
the data dictionary. If an object type has a default lifecycle, when a user creates an
object of that type, the user can use the keyword “default” to identify the lifecycle
when attaching the object to the lifecycle. There is no need to know the lifecycle
object ID or name.
Note: Defining a default lifecycle for an object type does not mean that the
default is attached to all instances of the type automatically. Users or
applications must explicitly attach the default. Defining a default lifecycle for
an object type provides an easy way for users to identify the default lifecycle
for any particular type and a way to enforce business rules concerning the
appropriate lifecycle for any particular object type. Also, it allows you to write
an application that will not require revision if the default changes for an object
type.
Defining a default lifecycle for an object type is performed using the ALTER TYPE
statement.
The lifecycle defined as the default for an object type must be a lifecycle for which
the type is defined as valid. Valid types for a lifecycle are defined by two properties
in the dm_policy object that defines the lifecycle in the repository. The properties are
included_type and include_subtypes. A type is valid for a lifecycle if:
• The type is identified in the included_type property, or
• The type is a subtype of a type identified in included_type and the
corresponding include_subtypes value is TRUE
A classifier is constructed from the qual comp class_name property and an acronym
that represents the component build technology. For example, given a component
whose class_name is checkin and whose build technology is ActiveX, its classifier is
checkin.ACX.
You can specify only one component of each class for an object type.
For example, suppose an application includes a field that allows users to choose
between four resort sites: Malibu, French Riviera, Cancun, and Florida Keys. In the
repository, these sites may be identified by integers: Malibu=1, French Riviera=2,
Cancun=3, and Florida Keys=4. Rather than display 1, 2, 3, and 4 to users, you can
define mapping information in the data dictionary so that users see the text names of
the resort areas, and their choices are mapped to the integer values for use by the
application.
New object types are created using the CREATE TYPE statement. The OpenText
Documentum Server DQL Reference Guide contains information about CREATE TYPE
and the object types that are supported supertypes.
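For example, the following statement is a sketch of creating a custom subtype of dm_document; the type and property names are hypothetical:

CREATE TYPE resort_brochure (resort_name CHAR(64), site_code INT) WITH SUPERTYPE dm_document PUBLISH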
For system-defined object types, you cannot change the structure. You can only
change the default values of some properties. If the object type is a custom type, you
can change the structure and the default values. You can add properties, drop
properties, or change the length definition of character string properties in custom
object types.
Default aspects can be added to both system-defined object types and custom object
types. An aspect is a code module associated with object instances. If you add a
default aspect to an object type, that aspect is associated with each new instance of
the type or its subtypes.
Object types are altered using the ALTER TYPE statement. You must be either the
type owner or a superuser to alter a type.
The changes apply to the object type, the subtypes of the type, and all objects of the
type and its subtypes.
“Aspects” on page 73, describes aspects and default aspects. The OpenText
Documentum Server DQL Reference Guide contains information about ALTER TYPE
and the possible alterations that can be made to object types.
To drop a type, use the DROP TYPE statement. The OpenText Documentum Server
DQL Reference Guide contains information about the DROP TYPE statement.
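For example, the following statements sketch adding a property to the hypothetical custom type created earlier and then dropping the type; the exact options are described in the DQL reference:

ALTER TYPE resort_brochure ADD reviewer CHAR(32) PUBLISH

DROP TYPE resort_brochure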
In the DFC, the interface for each class of objects has a method that allows you to
instantiate a new instance of the object.
In DQL, you use the CREATE OBJECT statement to create a new instance of an object.
The OpenText Documentum Server DQL Reference Guide contains information about
DQL and the reference information for the DQL statements, including CREATE
OBJECT.
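For example, this statement is a sketch of creating a document and setting two of its properties; the values are illustrative:

CREATE dm_document OBJECT
SET object_name = 'Malibu Brochure',
SET subject = 'Resort sites'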
In the DFC, the methods that change property values are part of the interface that
handles the particular object type. For example, to set the subject property of a
document, you use a method in the IDfSysObject interface.
In the DFC, methods are part of the interface for individual classes. Each interface
has methods that are defined for the class plus the methods inherited from its
superclass. The methods associated with a class can be applied to objects of the class.
The OpenText Documentum Foundation Classes Development Guide or the associated
Javadocs contains information about the DFC and its classes and interfaces.
The Document Query Language (DQL) is a superset of SQL. It allows you to query
the repository tables and manipulate the objects in the repository. DQL has several
statements that allow you to create objects. There are also DQL statements you can
use to update objects by changing property values or adding content.
Creating or updating an object using DQL instead of the DFC is generally faster
because DQL uses one statement to create or modify and then save the object. Using
DFC methods, you must issue several calls: one to create or fetch the object, several
to set its properties, and one to save it.
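For example, a single DQL statement such as the following sketch updates a property value on an existing document (the document name is hypothetical):

UPDATE "dm_document" OBJECTS
SET "subject" = 'Resort sites'
WHERE "object_name" = 'Malibu Brochure'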
You must either be the owner of an object or you must have Delete permission on
the object to destroy it. If the object is a cabinet, you must also have the Create
Cabinet privilege.
Any SysObject or subtype must meet the following conditions before you can
destroy it:
• The object cannot be checked out (locked by another user).
• The object cannot be part of a frozen virtual document or assembly.
• If the object is a cabinet, it must be empty.
Destroying an object removes the object from the repository and also removes any
relation objects that reference the object. (Relation objects are objects that define a
relationship between two objects.) Only the specified version is removed. Destroying
an object does not remove other versions of the object. To remove multiple versions
of an object, use a prune method. “Removing versions” on page 120, describes how
the prune method behaves. The OpenText Documentum Server System Object Reference
Guide contains information about relationships.
By default, destroying an object does not remove the object's content file or the content
object that associates the content with the destroyed object. If the content was not
shared with another document, the content file and content object are orphaned. To
remove orphaned content files and orphaned content objects, run the dmclean and
dmfilescan utilities as jobs, or manually. The OpenText Documentum Server
Administration and Configuration Guide contains information about the dmclean and
dmfilescan jobs and how to execute the utilities manually.
However, if the content file is stored in a storage area with digital shredding enabled
and the content is not shared with another object, destroying the object also removes
the content object from the repository and shreds the content file.
When the object you destroy is the original version, Documentum Server does not
actually remove the object from the repository. Instead, it sets the object i_is_deleted
property to TRUE and removes all associated objects, such as relation objects, from
the repository. The server also removes the object from all cabinets or folders and
places it in the Temp cabinet. If the object is carrying the symbolic label CURRENT,
it moves that label to the version in the tree that has the highest r_modify_date
property value. This is the version that has been modified most recently.
Note: If the object you want to destroy is a group, you can also use the DQL
DROP GROUP statement.
• The new type must have the same type identifier as the current type.
A type identifier is a two-digit number that appears as the first two digits of an
object ID. For example, the type identifier for all documents and document
subtypes is 09. Consequently, the object ID for every document begins with 09.
• The new type must be either a subtype or supertype of the current type.
This means that type changes cannot be lateral changes in the object hierarchy.
For example, if two object types, A and B, are both direct subtypes of
mybasetype, you cannot change an object of type A directly to type B.
• The object that you want to change cannot be immutable (unchangeable).
“Changeable versions” on page 121, describes immutability and which objects are
changeable.
The following figure shows an example of a type hierarchy. In this example, you can
change subtype_A to either baseSubtype1 or mybasetype. Similarly, you can change
baseSubtype1 to either subtype_A or mybasetype, or mybasetype to either
baseSubtype1 or baseSubtype2. However, you cannot change baseSubtype1 to
baseSubtype2 or Subtype_B to Subtype_C because these types are peers on the
hierarchy. Lateral changes are not allowed. Only vertical changes within the
hierarchy are allowed.
Using the business object framework to create customized modules provides the
following benefits:
• The customizations are independent of the client applications, removing the need
to code the customization into the client applications.
• The customizations can be used to extend core Documentum Server and DFC
functionality.
To allow you to easily test modules in BOF development mode, DFC and
Documentum Server support a development registry. This is a file that lists
implementation classes to use during development. The classes are loaded from the
local classpath rather than downloaded from a repository. The OpenText
Documentum Foundation Classes Development Guide contains details on using this
mode.
A TBO provides functionality that is specific to an object type. For example, a TBO
might be used to validate the title, subject, and keywords properties of a custom
document subtype.
A BOF module is composed of the Java archive (JAR) files that contain the
implementation classes and the interface classes for the behavior the module
implements, and any interface classes on which the module depends. The module
may also include Java libraries and documentation.
SBOs are installed in the repository that is the global registry. Simple modules,
TBOs, and aspects are installed in each repository that contains the object type or
objects whose behavior you want to modify.
Installing a BOF module creates a number of repository objects. The top-level object
is a dmc_module object. Module objects are subtypes of dm_folder. They serve as a
container for the BOF module. The properties of a module object provide
information about the BOF module it represents. For example, they identify the
module type (SBO, TBO, aspect, or simple), its implementation class, the interfaces it
implements, and any modules on which the module depends.
The module folder object is placed in the repository in /System/Modules, under the
appropriate subfolder. For example, if the module represents a TBO and its name is
MyTBO, it is found in /System/Modules/TBO/MyTBO.
Each JAR file in the module is represented by a dmc_jar object. A jar object has
properties that identify the Java version level required by the classes in the module
and whether the JAR file contains implementation or interface classes, or both.
The jar objects representing the module implementation and interface classes are
linked directly to the dmc_module folder. The jar objects representing the JAR files
for supporting software are linked to folders represented by dmc_java_library
objects. The java library objects are then linked to the top-level module folder. The
following figure illustrates these relationships.
The properties of a Java library object allow you to specify whether you want to
sandbox the libraries linked to that folder. Sandboxing refers to loading the library
into memory in a manner that makes it inaccessible to any application other than the
application that loaded it. DFC achieves sandboxing by using a standard BOF class
loader and separate class loaders for each module. The class loaders try to load
classes first, before delegating to the usual hierarchy of Java class loaders.
In addition to installing the modules in a repository, you must also install the JAR
file containing a module's interface classes on each client machine running DFC, and
the file must be specified in the client CLASSPATH environment variable.
BOF modules are delivered dynamically to client applications when the module is
needed. The delivery mechanism relies on local caching of modules, on client
machines. DFC does not load TBOs, aspects, or simple modules into the cache until
an application tries to use them. After a module is loaded, DFC checks for updates to
the modules in the local cache whenever an application tries to use a module or after
the interval specified by the dfc.bof.cache.currency_check_interval property in the
dfc.properties file. The default interval value is 30 seconds. If a module has
changed, only the changed parts are updated in the cache.
The location of the local cache is specified in the dfc.properties file, in the
dfc.data.cache_dir property. The default value is the cache subdirectory of the
directory specified in the dfc.data.dir property. All applications that use a particular
DFC installation share the cache.
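For example, the relevant dfc.properties entries might look like the following sketch; the directory value is illustrative and the interval shown is the default:

dfc.data.dir=C:/Documentum
dfc.data.cache_dir=C:/Documentum/cache
dfc.bof.cache.currency_check_interval=30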
The OpenText Documentum Foundation Classes Development Guide contains instructions
for packaging and deploying modules and information about deploying the
interface classes to a client machine.
An SBO associates an interface with an implementation class. SBOs are stored in the
global registry, in a folder under /System/Modules/SBO. The name of the folder is
the name of the SBO. The name of the SBO is typically the name of the interface.
Because TBOs are specific to an object type, they are stored in each repository that
contains the specified object type. They are stored in a folder under
/System/Modules/TBO. The folder name is the name of the TBO, which is typically the name
of the object type for which it was created.
5.4.4 Aspects
An aspect is a BOF module that customizes behavior or records metadata or both for
an instance of an object type.
You can attach an aspect to any object of type dm_sysobject or its subtypes. You can
also attach an aspect to custom-type objects if the type has no supertype and you
have issued an ALTER TYPE statement to modify the type to allow aspects.
An object can have multiple aspects attached, but cannot have multiple instances of
one aspect attached. That is, given object X and aspects a1, a2, and a3, you can attach
a1, a2, and a3 to object X, but you cannot attach any of the aspects to object X more
than once.
The OpenText Documentum Server DQL Reference Guide describes the syntax and use
of the ALTER TYPE statement.
Note: You cannot define properties for aspects whose names contain a dot (.).
For example, if the aspect name is “com.mycompany.policy”, you cannot
define properties for that aspect.
Aspect properties are not fulltext-indexed by default. If you want to include the
values in the index, you must explicitly identify which properties you want
indexed. You can use Documentum Composer or ALTER ASPECT to do this. The
OpenText Documentum Server DQL Reference Guide describes the syntax and use of the
ALTER ASPECT statement.
At the time you add properties to an aspect, you can choose to optimize
performance for fetching or querying those properties by including the
OPTIMIZEFETCH keyword in the ALTER ASPECT statement. That keyword directs
Documentum Server to store all the aspect properties and their values in the
property bag of any object to which the aspect is attached, if the object has a
property bag.
An object type may have multiple default aspects. An object type inherits all the
default aspects defined for its supertypes, and may also have one or more default
aspects defined directly for itself. All of a type's default aspects are applied to any
instances of the type.
When you add a default aspect to a type, the newly added aspect is only associated
with new instances of the type or subtype created after the addition. Existing
instances of the type or its subtypes are not affected.
If you remove a default aspect from an object type, existing instances of the type or
its subtypes are not affected. The aspect remains attached to the existing instances.
Simple modules associate an interface with an implementation class. They are stored
in each repository to which they apply, and are stored in /System/Modules. The
folder name is the name of the module.
Security services
6.1 Overview
The security features supported by Documentum Server maintain system security
and the integrity of the repository. They also provide accountability for user actions.
Documentum Server supports:
• User authentication: User authentication is the verification that the user is a valid
repository user. User authentication occurs automatically, regardless of whether
repository security is active. “User authentication” on page 83, describes user
authentication in more detail.
• Password encryption: Password encryption protects passwords stored in a file.
Documentum Server automatically encrypts the passwords it uses to connect to
third-party products, such as an LDAP directory server or the RDBMS, and the
passwords used by internal jobs to connect to repositories. Documentum Server
also supports encryption of other passwords through methods and a utility.
“Password encryption” on page 84, provides more information about password
encryption.
• Application-level control of SysObjects: Application-level control of SysObjects is an
optional feature that you can use in client applications to ensure that only
approved applications can handle particular documents or objects.
“Application-level control of SysObjects” on page 85, describes application-level
control of objects in more detail.
• User privileges: User privileges define what special functions, if any, a user can
perform in a repository. For example, a user with Create Cabinet user privileges
can create cabinets in the repository. “User privileges” on page 86, contains
information about user privileges.
• Object-level permissions: Object-level permissions define which users and groups
can access a SysObject and which level of access those users have. “Object-level
permissions” on page 87, contains information about object-level permissions.
• Table permits: Table permits are a set of permits applied only to registered tables,
RDBMS tables that have been registered with Documentum Server. “Table
permits” on page 89, describes table permits.
• Dynamic groups: Dynamic groups are groups whose membership can be
controlled at runtime.
• Access Control Lists (ACLs): Object-level permissions are assigned using ACLs.
Every SysObject in the repository has an ACL. The entries in the ACL define the
access to the object. “ACLs” on page 91, describes ACLs.
• Folder security: Folder security is an adjunct to repository security. “Folder
security” on page 90, describes folder security.
• Auditing and tracing facilities: Auditing and tracing are optional features that you
can use to monitor the activity in your repository. “Auditing and tracing”
on page 96, provides an overview of the auditing and tracing facilities.
• Support for simple electronic signoffs and digital signatures: Documentum Server
supports three options for electronic signatures. Support for simple signoffs,
which use the IDfPersistentObject.signoff method, and for digital signatures,
which are implemented using third-party software in a client application, are
provided as standard features of Documentum Server. Support for the third
option, using the IDfSysObject.addESignature method, is only available with a
Trusted Content Services license, and is not available on the Linux platform.
“Signature requirement support” on page 97, discusses all three options for
supporting signature requirements.
• Secure Socket Layer (SSL) communications between Documentum Server and the client
library (DMCL) on client hosts: When you install Documentum Server, the
installation procedure creates two service names for Documentum Server. One
represents a native, nonsecure port and the other a secure port. You can then
configure the server and clients, through the server config object and dmcl.ini
files, to use the secure port.
• Privileged Documentum Foundation Classes (DFC): This feature allows DFC to run
under a privileged role, which gives escalated permissions or privileges for a
specific operation. “Privileged DFC” on page 104, describes privileged DFC in
detail.
“Users and groups” on page 80, contains information about users and groups,
including dynamic groups. The OpenText Documentum Server Administration and
Configuration Guide contains more information about setting the connection mode for
servers and configuring clients to request a native or secure connection.
The following security features are supported with a Trusted Content Services
license:
• Encrypted file store storage areas: Using encrypted file stores provides a way to
ensure that content stored in a file store is not readable by users accessing it from
the operating system. Encryption can be used on content in any format except
rich media stored in a file store storage area. The storage area can be a
standalone storage area or it can be a component of a distributed store.
“Encrypted file store storage areas” on page 107, describes encrypted storage
areas in detail.
• Digital shredding of content files: Digital shredding provides a final, complete way
of removing content from a storage area by ensuring that deleted content files
cannot be recovered by any means. “Digital shredding” on page 108, provides a
description of this feature.
• Electronic signature support using the IDfSysObject.addESignature method: The
addESignature method is used to implement an electronic signature requirement
through Documentum Server. The method creates a formal signature page and
adds that page as primary content (or a rendition) to the signed document. The
signature operation is audited, and each time a new signature is added, the
previous signature is verified first. “Signature requirement support” on page 97,
describes how electronic signatures implemented through addESignature work.
This section provides an overview of how users and groups are implemented.
6.3.1 Users
This section introduces repository users.
A repository user is an actual person or a virtual user who is defined as a user in the
repository. A virtual user is a repository user who does not exist as an actual person.
Repository users have two states, active and inactive. An active user can connect to
the repository and work. An inactive user is not allowed to connect to the repository.
The properties of a user object record information that allows Documentum Server
to manage the user's access to the repository and to communicate with the user when
necessary. For example, the properties define how the user is authenticated when
the user requests repository access. They also record the user's state (active or
inactive), the user's email address (allowing Documentum Server to send automated
emails when needed), and the user's home repository (if any).
The OpenText Documentum Server System Object Reference Guide describes the
properties defined for the dm_user object type.
6.3.2 Groups
Groups are sets of users or groups or a mixture of both. They are used to assign
permissions or client application roles to multiple users. There are several classes of
groups in a repository. A group's class is recorded in its group_class property. For
example, if group_class is “group,” the group is a standard group, used to assign
permissions to users and other groups.
A group, similar to an individual user, can own objects, including other groups. A
member of a group that owns an object or group can manipulate the object just as an
individual owner can. The group member can modify or delete the object.
• Standard groups:
A standard group consists of a set of users. The users can be individual users or
other groups or both. A standard group is used to assign object-level permissions
to all members of the group. For example, you might set up a group called engr
and assign Version permission to the engr group in an ACL applied to all
engineering documents. All members of the engr group then have Version
permission on the engineering documents.
Standard groups can be public or private. When a group is created by a user with
Sysadmin or Superuser privileges, the group is public by default. If a user with
Create Group privileges creates the group, it is private by default. You can
override these defaults after a group is created using the ALTER GROUP
statement (an example appears after this list). The OpenText Documentum Server
DQL Reference Guide describes how to use ALTER GROUP.
• Role groups:
A role group contains a set of users or other groups or both that are assigned a
particular role within a client application domain. A role group is created by
setting the group_class property to role and the group_name property to the role
name.
• Module role groups:
A module role group is a role group that is used by an installed BOF module. It
represents a role assigned to a module of code, rather than a particular user or
group. Module role groups are used internally. The group_class value for these
groups is module role.
• Privileged groups:
A privileged group is a group whose members are allowed to perform privileged
operations even though the members do not have the privileges as individuals. A
privileged group has a group_class value of privilege group.
• Domain groups:
A domain group represents a particular client application domain. A domain
group contains a set of role groups corresponding to the roles recognized by the
client application.
• Dynamic groups:
A dynamic group is a group, of any group class, with a list of potential members.
A setting in the group definition defines whether the potential members are
treated as members of the group or not when a repository session is started.
Depending on that setting, an application can issue a session call to add or
remove a user from the group when the session starts.
A nondynamic group cannot have a dynamic group as a member. A dynamic
group can include other dynamic groups as members or nondynamic groups as
members. However, if a nondynamic group is a member, the members of the
nondynamic group are treated as potential members of the dynamic group.
• Local and global groups:
A local group is managed in the repository in which it resides. A global group
exists in all repositories of a repository federation and is managed through the
federation's governing repository.
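For example, the following statements sketch creating a standard group and then adding a member with ALTER GROUP; the group and user names are hypothetical:

CREATE GROUP engr WITH MEMBERS robin, kim

ALTER GROUP engr ADD lee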
Role and domain groups are used by client applications to implement roles within
an application. The two kinds of groups are used together to achieve role-based
functionality. Documentum Server does not enforce client application roles.
For example, suppose you write a client application called report_generator that
recognizes three roles: readers (users who read reports), writers (users who write
and generate reports), and administrators (users who administer the application). To
support the roles, you create three role groups, one for each role. The group_class is
set to role for these groups and the group names are the names of the roles: readers,
writers, and administrators. Then, create a domain group by creating a group whose
group_class is domain and whose group name is the name of the domain. In this
case, the domain name is report_generator. The three role groups are the members
of the report_generator domain group.
When a user starts the report_generator application, the application examines its
associated domain group and determines the role group to which the user belongs.
The application then performs only the actions allowed for members of that role
group. For example, the application customizes the menus presented to the user
depending on the role to which the user is assigned.
Note: Documentum Server does not enforce client application roles. It is the
responsibility of the client application to determine if there are role groups
defined for the application and apply and enforce any customizations based on
those roles.
There are several ways to configure user authentication, depending on your choice
of authentication mechanism. For example, if you are authenticating against the
operating system, you can write and install your own password checking program.
If you use an LDAP directory server, you can configure the directory server to use
an external password checker or to use a secure connection with Documentum
Server. If you use a plug-in module, you can use the module provided with
Documentum Server or write and install a custom module.
To protect the repository, you can enable a feature that limits the number of failed
authentication attempts. If the feature is enabled and a user exceeds the limit, the
user account is deactivated in the repository.
Client applications can use password encryption for their own password by using
the DFC method IDfClient.encryptPassword. The method allows you to use
encryption in your applications and scripts. Use encryptPassword to encrypt
passwords used to connect to a repository. All the methods that accept a repository
password accept a password encrypted using the encryptPassword method. The
DFC will automatically perform the decryption.
Passwords are encrypted using the Administration Encryption Key (AEK). The AEK
is installed during Documentum Server installation. After encrypting a password,
Documentum Server also encodes the encrypted string using Base64 before storing
the result in the appropriate password file. The final string is longer than the clear
text source password.
Each application that requires control over the objects it manipulates has an
application code. The codes are used to identify which application has control of an
object and to identify which controlled objects can be accessed from a particular
client.
To identify to the system which objects it can modify, an application sets the
dfc.application_code key in the client config object or the application_code property
in the session config object when the application is started. (Setting the property in
the client config object, rather than the session config object, provides performance
benefits, but affects all sessions started through that DFC instance.) The key and the
property are repeating. On start-up, an application can add multiple entries for the
key or set the property to multiple application codes if users are allowed to modify
objects controlled by multiple applications through that particular application.
User privileges are always enforced whether repository security is turned on or not.
The basic user privileges are additive, not hierarchical. For example, granting Create
Group to a user does not give the user Create Cabinet or Create Type privileges. If
you want a user to have both privileges, you must explicitly give the user both
privileges.
Typically, the majority of users in a repository have None as their privilege level.
Some users, depending on their job function, will have one or more of the higher
privileges. A few users will have either Sysadmin or Superuser privileges.
Applications and methods that are executed with Documentum Server as the server
always have Superuser privileges.
The extended user privileges are not hierarchical. For example, granting a user
Purge Audit privilege does not confer Config Audit privilege also.
Repository owners, superusers, and users with the View Audit permission can view
all audit trail entries. Other users in a repository can view only those audit trail
entries that record information about objects other than ACLs, groups, and users.
Only repository owners and superusers can grant and revoke extended user
privileges, but they cannot grant or revoke these privileges for themselves.
Each SysObject (or SysObject subtype) object has an associated ACL. For most
SysObject subtypes, the permissions control the access to the object. For dm_folder,
however, the permissions are not used to control access unless folder security is
enabled. In such cases, the permissions are used to control specific sorts of access,
such as the ability to link a document to the folder.
“ACLs” on page 91, describes ACLs in more detail. “Folder security” on page 90,
provides more information about folder security. The associated Javadocs for the
IDfSysObject.link and IDfSysObject.unlink methods contain a description of
privileges necessary to link or unlink an object.
There are two kinds of object-level permissions: base permissions and extended
permissions.
These permissions are hierarchical. For example, a user with Version permission also
has the access accompanying Read and Browse permissions. Or, a user with Write
permission also has the access accompanying Version permission.
The extended permissions are:
• Change Location: In conjunction with the appropriate base object-level
permissions, allows the user to move an object from one folder to another.
• Change Permission: The user can change the basic permissions of the object.
• Change State: The user can change the document lifecycle state of the object.
• Delete Object: The user can delete the object. The Delete Object extended
permission is not equivalent to the base Delete permission. Delete Object
extended permission does not grant Browse, Read, Relate, Version, or Write
permission.
• Execute Procedure: The user can run the external procedure associated with the
object.
The extended permissions are not hierarchical. You must assign each explicitly.
To access a registered table, a user needs:
• At least Browse access for the dm_registered object representing the RDBMS
table
• The appropriate table permit for the operation that you want to perform
Note: Superusers can access all RDBMS tables in the database using a SELECT
statement regardless of whether the table is registered or not.
There are five levels of table permits:
• None (0): No access to the table.
• Select (1): The user can retrieve data from the table.
• Update (2): The user can update existing data in the table.
• Insert (4): The user can insert new data into the table.
• Delete (8): The user can delete rows from the table.
The permits are identified in the dm_registered object that represents the table, in
the owner_table_permit, group_table_permit, and world_table_permit properties.
The permits are not hierarchical. For example, assigning the permit to insert does
not confer the permit to update. To assign more than one permit, you add the
integers representing the permits you want to assign, and set the appropriate
property to the total. For example, if you want to assign both insert and update
privileges as the group table permit, set the group_table_permit property to 6, the
sum of the integer values for the update and insert privileges.
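For example, the following statement is a sketch of setting that combined permit on the dm_registered object for a hypothetical registered table named inventory:

UPDATE "dm_registered" OBJECTS
SET "group_table_permit" = 6
WHERE "object_name" = 'inventory'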
Folder security does not prevent users from working with objects in a folder. It
provides an extra layer of security for operations that involve linking or unlinking,
such as creating a new object, moving an object, deleting an object, and copying an
object.
Folder security is turned on and off at the repository level, using the folder_security
property in the docbase config object.
6.11 ACLs
An Access Control List (ACL) is the mechanism that Documentum Server uses to
impose object-level permissions on SysObjects. An ACL, also called a permission
set, has one or more entries that identify a user or group and the object-level
permissions accorded that user or group by the ACL.
Each SysObject has an ACL. The ACL assigned to a SysObject is used to control
access to that object. For folders, the assigned ACL serves additional functions. If
folder security is enabled, the ACL assigned to the folder sets the folder security
permissions. If the default ACL for the Documentum Server is configured as Folder,
then newly created objects in the folder are assigned the folder ACL.
An ACL is represented in the repository as an object of type dm_acl. ACL entries are
recorded in repeating properties in the object. Each ACL is uniquely identified
within the repository by its name and domain. (The domain represents the owner of
the ACL.) When an ACL is assigned to an object, the object's acl_name and
acl_domain properties are set to the name and domain of the ACL.
After an ACL is assigned to an object, the ACL can be changed. You can modify the
ACL itself or you can remove it and assign a different ACL to the object.
AccessPermit and ExtendedPermit entries grant the base and extended permissions.
Creating, modifying, or deleting AccessPermit and ExtendedPermit entries is
supported by all Documentum Servers.
The remaining entry types provide extended capabilities for defining access. For
example, an AccessRestriction entry restricts a user or group access to a specified
level even if that user or group is granted a higher level by another entry. You can
create, modify, or delete any entry other than an AccessPermit or ExtendedPermit
entry.
• External ACLs are created explicitly by users. The name of an external ACL is
determined by the user. External ACLs are managed by users, either the user
who creates them or superusers.
• Internal ACLs are created by Documentum Server. Internal ACLs are created in a
variety of situations. For example, if a user creates a document and grants access
to the document to HenryJ, Documentum Server assigns an internal ACL to the
document. (The internal ACL is derived from the default ACL with the addition
of the permission granted to HenryJ.) The names of internal ACLs begin with
dm_. Internal ACLs are managed by Documentum Server.
The external and internal ACLs are further characterized as public or private ACLs:
• Public ACLs are available for use by any user in the repository. Public ACLs
created by the repository owner are called system ACLs. System ACLs can only
be managed by the repository owner. Other public ACLs can be managed by
their owners or a user with Sysadmin or Superuser privileges.
• Private ACLs are created and owned by a user other than the repository owner.
However, unlike public ACLs, private ACLs are available for use only by their
owners, and only their owners or a superuser can manage them.
These entry types are:
• Access Restrictions
• Required Groups
• Required Group Set
• Application Permit
• Application Restrictions
An access restriction entry:
• Indicates that the specified user (or group) cannot have a particular permission.
• Is similar in behavior to the Windows allow/deny paradigm.
• Does not imply any permission but simply denies a permit.
For example:
• Security labels: Membership in a Top Secret group requires the member to be
inside the firewall. If the application detects that the user is inside the firewall,
the application adds the user to the dynamic group for that session.
• Role-based security: Document permissions can be granted to roles, and the
application can decide when to place the session user under a role, giving the
user access to that role's data.
6.12.1 Auditing
Auditing is the process of recording the occurrence of system and application events
in the repository. Events are operations performed on objects in a repository or
something that happens in an application. System events are events that
Documentum Server recognizes and can audit. Application events are user-defined
events. They are not recognized by Documentum Server and must be audited by an
application.
Documentum Server audits a large set of events by default. For example, all
successful addESignature events and failed attempts to execute addESignature
events are audited. Similarly, all executions of methods that register or unregister
events for auditing are themselves audited.
You can also audit many other operations.
There are several methods in the IDfAuditTrailManager interface that can be used to
request auditing. For example, the registerEventForType method starts auditing a
particular event for all objects of a specified type. Typically, you must identify the
event you want to audit and the target of the audit. The event can be either a system
event or an application (user-defined) event. The target can be a particular object, all
objects of a particular object type, or objects that satisfy a particular query.
The audit request is stored in the repository in registry objects. Each registry object
represents one audit request.
Issuing an audit request for a system event initiates auditing for the event. If the
event is an application event, the application is responsible for checking the registry
objects to determine whether auditing is requested for the event and, if so, creating
the audit trail entry.
The records of audited events are stored in the repository as entries in an audit trail.
The entries are objects of type dm_audittrail, dm_audittrail_acl, or dm_audittrail_group.
Each entry records the information about one occurrence of an event. The
information is specific to the event and can include information about property
values in the audited object.
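For example, the following query is a sketch of retrieving the audit trail entries for a particular object; the object ID shown is a placeholder:

SELECT "event_name", "user_name", "time_stamp"
FROM "dm_audittrail"
WHERE "audited_obj_id" = '0900000000000000'
ORDER BY "time_stamp"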
6.12.2 Tracing
Tracing is a feature that logs information about operations that occur in
Documentum Server and DFC. The information that is logged depends on which
tracing functionality is turned on.
DFC has a robust tracing facility that allows you to trace method operations and
RPC calls. The facility allows you to configure many options for the generated trace
files. For example, you can trace by user or thread, specify stack depth to be traced,
and define the format of the trace file.
Digital signatures are electronic signatures in formats such as PKCS #7, XML
signature, or PDF signature. Digital signatures are generated by third-party
products that are called when an addDigitalSignature method is executed. Use this
option if you want to implement strict signature support in a client application.
Simple sign-offs are the least rigorous way to supply an electronic signature. Simple
sign-offs are implemented using the IDfPersistentObject.signoff method. This
method authenticates a user signing off a document and creates an audit trail entry
for the dm_signoff event.
Electronic signatures are the most rigorous signature requirement that Documentum
Server supports. The electronic signature feature requires a Trusted Content Services
license.
All the work of generating the signature page and handling the content is performed
by Documentum Server. The client application is only responsible for recognizing
the signature event and issuing the addESignature method. When addESignature is
issued, Documentum Server typically performs the following sequence of operations:
1. Authenticates the user and verifies that the user has at least Relate permission on
the document to be signed.
If a user name is passed in the addESignature method arguments, that user must
be the same as the session user issuing the addESignature method.
2. Verifies that the document is not checked out.
A checked out document cannot be signed by addESignature.
3. Verifies that the pre_signature hash argument, if any, matches a hash of the
content in the repository.
4. If the content has been previously signed, the server:
• Retrieves all the audit trail entries for the previous dm_addesignature events
on this content.
• Verifies that the most recent audit trail entry is signed (by Documentum
Server) and that the signature is valid.
• Verifies that the entries have consecutive signature numbers.
• Verifies that the hash in the audit trail entry matches the hash of the
document content.
5. Copies the content to be signed to a temporary directory location and calls the
signature creation method. The signature creation method:
• Generates the signature page using the signature page template and adds the
page to the content.
• Replaces the content in the temporary location with the signed content.
6. If the signature creation method returns successfully, the server replaces the
original content in the repository with the signed copy.
If the signature is the first signature applied to that particular version of the
document, Documentum Server appends the original, unsigned content to the
document as a rendition with the page modifier set to dm_sig_source.
7. Creates the audit trail entry recording the dm_addesignature event.
The entry also includes a hash of the newly signed content.
You can trace the operations of addESignature and the called signature creation
method.
The Documentum system provides a default signature page template and a default
signature creation method with Documentum Server so you can use the electronic
signature feature with no additional configuration. The only requirement for using
the default functionality is that documents to be signed must be in PDF format or
have a PDF rendition associated with their first primary content page.
In the repository, the Microsoft Word document that is the source of the PDF
template is an object of type dm_esign_template. It is named Default Signature Page
Template and is stored in
Integration/Esignature/Templates
The signature creation method uses the location object named SigManifest to locate
the Fusion library. The location object is created during repository configuration.
The signature creation method checks the number of signatures supported by the
template page. If the maximum number is not exceeded, the method generates a
signature page and adds that page to the content file stored in the temporary
location by Documentum Server. The method does not read the content from the
repository or store the signed content in the repository.
When the method creates the signature page, it appends or prepends the signature
page to the PDF content. (Whether the signature page is added at the front or back of
the content to be signed is configurable.) After the method completes successfully,
Documentum Server adds the content to the document:
• If the signature is the first signature on that document version, the server
replaces the original PDF content with the signed content and appends the
original PDF content to the document as a rendition with the page modifier
dm_sig_source.
• If the signature is a subsequent addition, the server simply replaces the
previously signed PDF content with the newly signed content.
Documentum Server uses the generic string properties in the audit trail entry to
record information about the signature for a dm_addesignature event.
If you want to embed a signature in content that is not in PDF format, you must use
a custom signature creation method. You can also create a custom signature page
template for use by the custom signature creation method, although using a
template is not required.
When verifying a signature, Documentum Server:
• Checks that the hash values of the source content and signed content stored in
the audit trail entry match those of the source and signed content in the
repository.
Only the most recent signature is verified. If the most recent signature is valid,
previous signatures are guaranteed to be valid.
It is possible to require Documentum Server to sign the generated audit trail entries.
Because the addDigitalSignature method is audited by default, there is no explicit
registry object for the event. However, if you want Documentum Server to sign
audit trail entries for dm_adddigsignature events, you can issue an explicit method
requesting auditing for the event.
You can use a simple sign-off on any SysObject or SysObject subtype. A user must
have at least Read permission on an object to perform a simple sign-off on the object.
Privileged DFC is supported by a set of privileged groups, privileged roles, and the
ability to define type-based objects and simple modules as privileged modules, as
follows:
• Privileged modules are modules that use one or more escalated permissions or
privileges to execute.
By default, each DFC is installed with the ability to request escalated privileges
enabled. However, to use the feature, the DFC must have a registration in the global
registry. That registration information must be defined in each repository in which
the DFC will exercise those privileges.
You can disable the use of escalated privileges by a DFC instance. This is controlled
by the dfc.privilege.enable key in the dfc.properties file.
The dfc.name property in the dfc.properties file controls the name of the DFC
instance.
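For example, the following dfc.properties entries are a sketch of naming a DFC instance and disabling its use of escalated privileges; the instance name is illustrative:

# Disable escalated privileges for this DFC instance
dfc.privilege.enable=false
dfc.name=my_dfc_instance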
Each installed DFC has an identity, with a unique identifier extracted from the PKI
credentials. The first time an installed DFC is initialized, it creates its PKI credentials
and publishes its identity to the global registry known to the DFC. In response, a
client registration object and a public key certificate object are created in the global
registry. The client registration object records the DFC instance identity. The public
key certificate object records the certificate used to verify that identity.
The PKI credentials for a DFC are stored by default in a file named dfc.keystore in
the same directory as the dfc.properties file. You can change the file location and
name if you want, by setting the dfc.security.keystore.file key in the
dfc.properties file.
The first time a DFC instance is initialized, it creates its own PKI credentials and
publishes its identity to the global registry. For subsequent startups, the DFC
instance checks for the presence of its credentials. If they are not found or are not
accessible (for instance, when a password has changed), the DFC re-creates the
credentials and republishes its identity to the global registry if privileged DFC is
enabled in the dfc.properties file. Republishing the credentials causes the creation
of another client registration object and public key certificate object for the DFC
instance. Deleting dfc.keystore causes the DFC instance to register again, and the
first registration becomes invalid. Re-creating the DFC credentials also invalidates
the existing client rights, and client rights objects must be created again for each
repository. The OpenText Documentum Administrator User Guide contains information
on creating client rights objects.
If DFC finds its credentials, the DFC may or may not check to determine if its
identity is established in the global registry. Whether that check occurs is controlled
by the dfc.verify_registration key in the dfc.properties file. That key is false by
default, which means that on subsequent initializations, DFC does not check its
identity in the global registry if the DFC finds its credentials.
A client rights object records the privileged roles that a DFC instance can invoke. It
also records the directory in which a copy of the instance public key certificate is
located. Client rights objects are created manually, using Documentum
Administrator, after installing the DFC instance. A client rights object must be
created in each repository in which the DFC instance exercises those roles. Creating
the client rights object automatically creates the public key certificate object in the
repository.
Client registration objects, client rights objects, and public key certificate objects in
the global registry and other repositories are persistent. Stopping the DFC instance
does not remove those objects. The objects must be removed manually if the DFC
instance associated with them is removed or if its identity changes.
If the client registration object for a DFC instance is removed from the global
registry, you cannot register that DFC as a privileged DFC in another repository.
Existing registrations in repositories continue to be valid, but you cannot register
the DFC in a new repository.
If the client rights objects are deleted from a repository but the DFC instance is not
removed, errors are generated when the DFC attempts to exercise an escalated
privilege or invoke a privileged module.
An encrypted file store storage area is a file store storage area that contains
encrypted content files. If you installed Documentum Server with a Trusted Content
Services license, you can designate any file store storage area as an encrypted file
store. The file store can be a standalone storage area or it can be a component of a
distributed store.
Note: If a distributed storage area has multiple file store components, the
components can be a mix of encrypted and unencrypted.
A file store storage area is designated as encrypted or unencrypted when you create
the storage area. You cannot change the encryption designation after you create the
area.
When you store content in an encrypted file store storage area, the encryption occurs
automatically. Content is encrypted by Documentum Server when the file is saved
to the storage area. The encryption is performed using a file store encryption key.
Each encrypted storage area has its own file store key. The key is encrypted and
stored in the crypto_key property of the storage area object (dm_filestore object). It
is encrypted using the repository encryption key.
Similarly, decryption occurs automatically when the content is fetched from the
storage area.
Encrypted content can be full-text indexed. However, the index itself is not
encrypted. If you are storing nonindexable content in an encrypted storage area and
indexing renditions of the content, the renditions are not encrypted unless you
designate their storage area as an encrypted storage area.
You can use dump and load operations on encrypted file stores if you include the
content files in the dump file.
Note: The encryption key is 192 bits in length and is used with the Triple DES-
EDE-CBC algorithm. For the AES encryption algorithm, the supported key
lengths are 128, 192, or 256 bits.
Digital shredding is supported for file store storage areas that are standalone
storage areas. You can also enable shredding for file store storage areas that are the
targets of linked store storage areas. Shredding is not supported for distributed
storage areas or their component storage areas, nor for blob, turbo, and external
storage areas.
The SysObject type is the supertype, directly or indirectly, of all object types in the
hierarchy that can have content. SysObject properties store information about the
object's version, the content file associated with the object, security permissions on
the object, and other important information.
You can use a document object to represent an entire document or only a portion of
a document. For example, a document can contain text, graphics, or tables.
• simple document
A simple document is a document with one or more primary content files. Each
primary content file associated with a document is represented by a content
object in the repository. All content objects in a simple document have the same
file format.
• virtual document
A virtual document is a container for other document objects, structured in an
ordered hierarchy. The documents contained in a virtual document hierarchy can
be simple documents or other virtual documents. A virtual document can have
any number of component documents, nested to any level.
Using virtual documents allows you to combine documents with a variety of
formats into one document. You can also use the same document in more than
one parent document. For example, you can place a graphic in a simple
document and then add that document as a component to multiple virtual
documents.
“Virtual documents” on page 151, describes virtual documents.
Documentum Server creates and manages content objects. The server automatically
creates a content object when you add a file to a document if that file is not already
represented by a content object in the repository. If the file already has a content
object in the repository, the server updates the parent_id property in the content
object. The parent_id property records the object IDs of all documents to which the
content belongs.
Typically, there is only one content object for each content file in the repository.
However, if you have a Content Storage Services license, you can configure the use
of content duplication checking and prevention. This feature is used primarily to
ensure that numerous copies of duplicate content, such as an email attachment, are
not saved into the storage area. Instead, one copy is saved and multiple content
objects are created, one for each recipient.
Each primary content file in a document has a page number. The page number is
recorded in the page attribute of the file's content object. This is a repeating attribute.
If the content file is part of multiple documents, the attribute has a value for each
document. The file can be a different page in each document.
7.2.3 Renditions
A rendition is a representation of a document that differs from the original
document only in its format or some aspect of the format. The first time you add a
content file to a document, you specify the content file format. This format
represents the primary format of the document. You can create renditions of that
content using converters supported by Documentum Server or through
Documentum CTS Media, an optional product that handles rich media formats such
as jpeg and audio and video formats.
Page numbers are used to identify the primary content that is the source of a
rendition.
Some of the converters are supplied with Documentum Server, while others must be
purchased separately. You can use a converter that you have written, or one that is
not on the current list of supported converters.
When you ask for a rendition that uses one of the converters, Documentum Server
saves and manages the rendition automatically. If Documentum CTS Media is
installed, importing a file also generates renditions automatically, typically:
• A thumbnail rendition
• A default rendition that is specific to the primary content format
A rendition format can be the same format as the primary content page with which
the rendition is associated. However, in such cases, you must assign a page modifier
to the rendition, to distinguish it from the primary content page file. You can also
create multiple renditions in the same format for a particular primary content page.
Page modifiers are also used in that situation to distinguish among the renditions.
Page modifiers are user-defined strings, assigned when the rendition is added to the
primary content.
You can modify or delete the installed formats or add new formats. The OpenText
Documentum Server Administration and Configuration Guide contains instructions on
obtaining a list of formats and how to modify or add a format.
Each time you add a content file to an object, Documentum Server records the
content's format in a set of properties in the content object for the file. This internal
information includes:
• Resolution characteristics
• Encapsulation characteristics
• Transformation loss characteristics
This information, put together, gives a full format specification for the rendition. It
describes the format's screen resolution, any encoding the data has undergone, and
the transformation path taken to achieve that format.
For example, suppose you display a GIF file, created with a resolution of 300 pixels
per inch and 24-bits of color, on a low-resolution (72 pixels per inch) black and
white monitor. Transforming the GIF file to display on the monitor results in a loss
of resolution.
• User-generated renditions
At times you may want to use a rendition that cannot be generated by
Documentum Server. In such cases, you can create the file outside of
Documentum and add it to the document using an addRendition method in the
IDfSysObject interface.
To remove a rendition, use a removeRendition method. You must have at least
Write permission on the document to remove a rendition of a document.
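For example, the following DFC sketch adds an externally created PDF file as a
rendition and later removes it (a minimal sketch: the object ID, file path, and format
name are illustrative, and session is assumed to be a connected IDfSession):
IDfSysObject doc = (IDfSysObject) session.getObject(new DfId("0900000180000111"));
// Attach the externally generated file as a rendition of primary page 0.
doc.addRendition("/tmp/report.pdf", "pdf");
doc.save();
// Later, remove the rendition; requires at least Write permission.
doc.removeRendition("pdf");
doc.save();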
• filtrix
• pbmplus
• pdf2text
• psify
• scripts
• soundkit
• troff
You can also purchase and install document converters. Documentum provides
demonstration versions of Filtrix converters, which transform structured documents
from one word processing format to another. The Filtrix converters are located in the
You can also purchase and install Frame converters from Adobe Systems Inc. If you
install the Frame converters in the Documentum Server bin path, the converters are
incorporated automatically when you start the Documentum system. The server
assumes that the conversion package is found in the Linux bin path of the server
account and that this account has the FMHOME environment variable set to the
FrameMaker home.
To transform images, the server uses the PBMPLUS package available in the public
domain. PBMPLUS is a toolkit that converts images from one format to another. This
package has four parts:
• PBM, for bitmaps
• PGM, for grayscale images
• PPM, for full-color images
• PNM, which operates on all three of the other formats
The parts are upwardly compatible. PGM reads both PBM and PGM and writes
PGM. PPM reads PBM, PGM, and PPM, and writes PPM. PNM reads all three and,
in most cases, writes the same type as it read. That is, if it reads PPM, it writes PPM.
If PNM does convert a format to a higher format, it issues a message to inform you
of the conversion.
The following table lists the acceptable input formats for PBMPLUS.
The following table lists the acceptable output formats for the PBMPLUS package.
The following table lists the acceptable input formats for Linux conversion utilities.
The following table lists the acceptable output formats for Linux conversion utilities.
Renditions created by the media server can be connected to their source either
through a content object or using a relation object. The object used depends on how
the source content file is transformed. If the rendition is connected using a relation
object, the rendition is stored in the repository as a document whose content is the
rendition content file. The document is connected to its source through the relation
object.
7.2.4 Translations
Documentum Server contains support for managing translations of original
documents using relationships.
7.3 Versioning
Documentum Server provides comprehensive versioning services for all SysObjects
except folders and cabinets and their subtypes, which cannot be versioned.
Version labels are used to uniquely identify a version within a version tree. There
are several kinds of labels.
• Numeric version labels
A numeric version label is assigned by the server each time you check in an
object. The server generates these labels as an incremental sequence of numbers
in the format n.n (for example, 1.0, 1.1, 1.2).
Note: If you set the numeric version label manually the first time you check
in an object, you can set it to any number you wish, in the format n.n, where
n is zero or any integer value.
• Symbolic version labels
A symbolic version label is either system- or user-defined. Using symbolic
version labels lets you provide labels that are meaningful to applications and the
work environment.
Symbolic labels are stored starting in the second position (r_version_label[1]) in
the r_version_label property. To define a symbolic label, define it in the
argument list when you check in or save the document.
An alternative way to define a symbolic label is to use an IDfSysObject.mark
method. A mark method assigns one or more symbolic labels to any version of a
document. For example, you can use a mark method, in conjunction with an
unmark method, to move a symbolic label from one document version to
another.
A document can have any number of symbolic version labels. Symbolic labels are
case sensitive and must be unique within a version tree.
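For example, the following sketch moves the symbolic label approved from one
version to another (oldVersion and newVersion are illustrative IDfSysObject
references to the two versions):
// Remove the label from the version that currently carries it.
oldVersion.unmark("approved");
oldVersion.save();
// Assign the label to the other version.
newVersion.mark("approved");
newVersion.save();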
• The CURRENT label
The symbolic label CURRENT is the only symbolic label that the server can
assign to a document automatically. When you check in a document, the server
assigns CURRENT to the new version, unless you specify a label. If you specify a
label (either symbolic or implicit), then you must also explicitly assign the label
CURRENT to the document if you want the new version to carry the CURRENT
label. For example, the following checkin call assigns the labels inprint and
CURRENT to the new version of the document being checked in:
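doc.checkin(false, "inprint,CURRENT");
In this sketch, doc is an IDfSysObject reference to the checked-out document; the
first argument specifies whether to retain the repository lock after checkin.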
If you remove a version that carries the CURRENT label, the server automatically
reassigns the label to the parent of the removed version.
Because both numeric and symbolic version labels are used to access a version of a
document, Documentum Server ensures that the labels are unique across all versions
of the document. The server enforces unique numeric version labels by always
generating an incremental and unique sequence number for the labels.
Note: Symbolic labels are case sensitive. Two symbolic labels are not
considered the same if their cases differ, even if the word is the same. For
example, the labels working and Working are not the same.
To identify which version tree a document belongs to, the server uses the document
i_chronicle_id property value. This property contains the object ID of the original
version of the document root of the version tree. Each time you create a new version,
the server copies the i_chronicle_id value to the new document object. If a document
is the original object, the values of r_object_id and i_chronicle_id are the same.
To identify the place of a document on a version tree, the server uses the document
numeric version label.
7.3.3 Branching
A version tree is often a linear sequence of versions arising from one document.
However, you can also create branches. Figure 7-1 shows a version tree that contains
branches.
The numeric version labels on versions in branches always have two more digits
than the version at the origin of the branch. For example, looking at the preceding
figure, version 1.3 is the origin of two branches. These branches begin with the
numeric version labels 1.3.1.0 and 1.3.2.0. If a branch off version 1.3.1.2 were created,
the number of its first version would be 1.3.1.2.1.0.
Branching takes place automatically when you check out and then check back in an
older version of a document because the subsequent linear versions of the document
already exist and the server cannot overwrite a previously existing version. You can
also create a branch by using the IDfSysObject.branch method instead of the
checkout method when you get the document from the repository.
When you use a branch method, the server copies the specified document and gives
the copy a branched version number. The method returns an IDfId object
representing the new version. The parent of the new branch is marked immutable
(unchangeable).
After you branch a document version, you can make changes to it and then check it
in or save it. If you use a checkin method, you create a subsequent version of your
branched document. If you use a save method, you overwrite the version created by
the branch method.
A branch method is particularly helpful if you want to work on a document that
another user has locked.
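The following DFC sketch illustrates branching (a minimal sketch: the object ID,
version label, and file path are illustrative, and session is a connected IDfSession):
IDfSysObject doc = (IDfSysObject) session.getObject(new DfId("0900000180000222"));
// Branch from the version carrying the label 1.2; the parent version
// becomes immutable and the method returns the ID of the branched copy.
IDfId branchedId = doc.branch("1.2");
IDfSysObject branched = (IDfSysObject) session.getObject(branchedId);
branched.setFile("/tmp/revised_chapter.doc");
// Checking in creates the next version on the branch (for example, 1.2.1.1).
branched.checkin(false, null);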
With a prune method, you can prune an entire version tree or only a portion of the
tree. By default, prune removes any version that does not belong to a virtual
document and does not have a symbolic label.
To prune an entire version tree, identify the first version of the object in the method
arguments. The object ID of the first version of an object is found in the
i_chronicle_id property of each subsequent version. Query this property if you need
to obtain the object ID of the first version of an object.
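For example, the following DFC sketch issues that query (the object ID is
illustrative; session is a connected IDfSession, and the query classes are in the
com.documentum.fc.client package):
IDfQuery query = new DfQuery();
query.setDQL("SELECT i_chronicle_id FROM dm_document "
    + "WHERE r_object_id = '0900000180000333'");
IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);
try {
    if (results.next()) {
        String firstVersionId = results.getString("i_chronicle_id");
        // firstVersionId is the object ID of version 1.0 of the document.
    }
} finally {
    results.close();
}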
To prune only part of the version tree, specify the object ID of the version at the
beginning of the portion you want to prune. For example, to prune the entire tree,
specify the object ID for version 1.0. To prune only version 1.3 and its branches,
specify the object ID for version 1.3.
You can also use an optional argument to direct the method to remove versions that
have symbolic labels. If the operation removes the version that carries the symbolic
label CURRENT, the label is automatically reassigned to the parent of the removed
version.
When you prune, the system does not renumber the versions that remain on the tree.
The system simply sets the i_antecedent_id property of any remaining version to the
appropriate parent.
For example, look at the following figure. Suppose the version tree shown on the left
is pruned, beginning the pruning with version 1.2 and that versions with symbolic
labels are not removed. The result of this operation is shown on the right. Notice
that the remaining versions have not been renumbered.
In the resulting version tree, the following versions remain changeable:
• 1.3
• 1.3.1.2
• 1.3.2.1
• 1.1.1.1
The other versions are immutable. However, you can create new, branched versions
of immutable versions.
7.4 Immutability
Immutability is a characteristic that defines an object as unchangeable. An object is
marked immutable if one of the following occurs:
• The object is versioned. When you create a new version, the previous version
becomes immutable.
• The object is explicitly frozen with a freeze method.
• The object is under the control of a retention policy that marks its documents as
immutable.
In previous releases, you could only mark documents immutable. Starting with
Documentum 7.0, you can also apply immutability rules to folders.
When you freeze an object, the server sets the following properties of the object to
TRUE:
• r_immutable_flag
This property indicates whether the object is changeable. If set to TRUE, you
cannot change the object content, primary storage location, or most of its
properties.
• r_frozen_flag
This property indicates whether the r_immutable_flag property was set to TRUE
by an explicit freeze method call.
If the object is a virtual document, the method sets additional properties and offers
the option of freezing the components of any snapshot associated with the object.
“Freezing a document” on page 163, describes the additional attributes that are set
when a virtual document is frozen.
Even when the r_immutable_flag property of an object is set to TRUE, the server
allows changes to the following properties:
• a_archive
• i_isdeleted
• i_reference_cnt
• i_vstamp
• r_access_date
• r_alias_set_id
• r_aspect_name
• r_current_state
• r_frozen_flag
• r_frzn_assembly_cnt
• r_immutable_flag
• r_policy_id
• r_resume_state
A data dictionary attribute defined for the dm_dd_info type provides additional
control over immutability for objects of type dm_sysobject or any subtypes of
SysObject. The attribute is called ignore_immutable. When set to TRUE for a
SysObject-type attribute, the attribute is changeable even if the r_immutable_flag for
the containing object instance is set to TRUE.
The OpenText Documentum Server DQL Reference Guide contains instructions for using
the ALTER TYPE statement to set or change data dictionary attributes.
Documentum Server supports three kinds of locking:
• Database-level locking
• Repository-level locking
• Optimistic locking
A system administrator or superuser can lock any object with a database-level lock.
Other users must have at least Write permission on an object to place a database lock
on the object. Database locks are set using the IDfPersistentObject.lock method.
Database locks provide a way to ensure that deadlock does not occur in explicit
transactions and that save operations do not fail due to version mismatch errors.
If you use database locks, using repository locks is not required unless you want to
version an object. If you do want to version a modified object, you must place a
repository-level lock on the object also.
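A minimal sketch of a database-level lock inside an explicit transaction (objId is an
illustrative IDfId and session is a connected IDfSession):
session.beginTrans();
try {
    IDfPersistentObject obj = session.getObject(objId);
    obj.lock();  // database-level lock, held until the transaction ends
    // ... modify the object ...
    obj.save();
    session.commitTrans();
} catch (DfException e) {
    session.abortTrans();
    throw e;
}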
To use a checkout method, you must have at least Version permission for the object
or have superuser privileges.
If you use a save method to save your changes, you can choose to keep or relinquish
the repository lock on the object. Save methods, which overwrite the current version
of an object with the changes you made, have an argument that allows you to direct
the server to hold the repository lock.
When you fetch an object, the server notes the value in the object i_vstamp attribute.
This value indicates the number of committed transactions that have modified the
object. When you are finished working and save the object, the server checks the
current value of the object i_vstamp property against the value that it noted when
you fetched the object. If someone else fetched (or checked out) and saved the object
while you were working, the two values will not match and the server does not
allow you to save the object.
Additionally, you cannot save a fetched object if someone else checks out the object
while you are working on it. The checkout places a repository lock on the object.
Optimistic locking is most appropriate in the following situations:
• There are a small number of users on the system, creating little or no contention
for desired objects.
• There are only a small number of noncontent-related changes to be made to the
object.
There are two ways to impose retention on content:
• Retention policies
Retention policies are part of the larger retention services provided by Retention
Policy Services. These services allow you to manage the entire life of a document,
including its disposition after the retention period expires. Consequently,
documents associated with an active retention policy are not automatically
deleted when the retention period expires. Instead, they are held in the
repository until you impose a formal disposition or use a privileged delete to
remove them.
Using retention policies requires a Retention Policy Services license. If
Documentum Server is installed with that license, you can define and apply
retention policies through Retention Policy Services Administrator (an
administration tool that is similar to, but separate from, Documentum
Administrator). Retention policies can be applied to documents in any storage
area type.
Using retention policies is the recommended way to manage document retention.
• Content-addressed storage area retention periods
If you are using content-addressed storage areas, you can configure the storage
area to enforce a retention period on all content files stored in that storage area.
The period is either explicitly specified by the user when saving the associated
document or applied as a default by the Centera host system.
If the retention policy is a conditional policy, the retention period is not applied to
the object until the event occurs. Until that time, the object is held under an infinite
retention (that is, the object is retained indefinitely). After the event occurs, the
retention period defined in the policy is applied to the object. For example, suppose
a conditional retention policy requires employment records to be held for 10 years
after an employee leaves a company. This conditional policy is attached to all
employment records. The records of any employee are retained indefinitely until the
employee leaves the company. At that time, the conditional policy takes effect and
the employee records are marked for retention for 10 years from the date of
termination.
You can apply multiple retention policies to an object. In general, the policies can be
applied at any time to the object.
A policy can be created for a single object, a virtual document, or a container such as
a folder. If the policy is created for a container, all the objects in the container are
under the control of the policy.
An object can be assigned to a retention policy by any user with Read permission on
the object or any user who is a member of either the dm_retention_managers group
or the dm_retention_users group. These groups are created when Documentum
Server is installed. They have no default members.
Policies apply only to the specific version of the document or object to which they
are applied. If the document is versioned or copied, the new versions or copies are
not controlled by the policy unless the policy is explicitly applied to them. Similarly,
if a document under the control of a retention policy is replicated, the replica is not
controlled by the policy. Replicas may not be associated with a retention policy.
If multiple retention periods apply to an object, the property i_retain_until is set to
the date furthest in the future. For
example, suppose a document created on April 1, 2005 is stored in a content-
addressed storage area and assigned to a retention policy. The retention policy
specifies that it must be held for five years. The expiration date for the policy is May
31, 2010. The content-addressed storage area has a default retention period of eight
years. The expiration date for the storage-based retention period is May 31, 2013.
Documentum Server will not allow the document to be deleted (without using a
forced deletion) until May 31, 2013. The i_retain_until property is set to May 31,
2013.
If the retention policy is a conditional retention policy, the property value is ignored
until the event occurs and the condition is triggered. At that time, the property is set
to the retention value defined by the conditional policy. If multiple conditional
retention policies apply, the property is updated as each is triggered if the triggered
policy retention period is further in the future than the value already recorded in
i_retain_until. However, Documentum Server ignores the value in i_retain_until
until all the policies are triggered. Until all conditional policies are triggered, the
object is held in infinite retention.
• Privileged deletions
Use a privileged deletion to remove documents associated with an active
retention policy. Privileged deletions succeed if the document is not subject to
any holds imposed through the Retention Policy Manager. You must be a
member of the dm_retention_managers group and have Superuser privileges to
perform a privileged deletion.
• Forced deletions
Forced deletions remove content with unexpired retention periods from
retention-enabled content-addressed storage areas. You must be a superuser or a
member of the dm_retention_managers group to perform a forced deletion.
The force delete request must be accompanied by a Centera profile that gives the
requesting user the Centera privileges needed to perform a privileged deletion
on the Centera host system. The Centera profile must be defined prior to the
request. For information about defining a profile, contact the Centera system
administrator at your site.
A forced deletion removes the document from the repository. If the content is not
associated with any other documents, a forced deletion also removes the content
object and associated content file immediately. If the content file is associated
with other SysObjects, the content object is simply updated to remove the
reference to the deleted document. The content file is not removed from the
storage area.
Similarly, if the content file is referenced by more than one content object, the file
is not removed from the storage area. Only the document and the content object
that connects that document to the content file are removed.
After you create a document, you can attach it to any lifecycle that is valid for the
document object type. Only a user with the Change State extended permission can
move the document from one state to another.
You can turn off indexing of object content or properties in several ways:
• Set the property a_full_text of an object type to FALSE. The properties are indexed
but not the content. You must have Sysadmin or Superuser privileges to change
the value.
• Set enable indexing to false in Documentum Administrator to turn off indexing
events for specific object types. Properties are indexed.
• Turn off indexing for specific formats by setting the can_index property to false.
Properties are indexed.
• Use xPlore index agent filters to filter out content and metadata for specific types
or repository paths.
The owner_name property identifies the user or group who owns an object.
By default, an object is owned by the user who creates the object. However, you can
assign ownership to another user or a group by setting the owner_name property.
To change the object owner, you must be a superuser, the current owner of the
object, or a user with Change Owner permission.
The default_folder property records the name of the primary location for an object.
The primary location is the repository cabinet or folder in which the server stores a
new object the first time the object is saved into the repository. Although this
location is sometimes referred to as the primary cabinet for the object, it can be either
a cabinet or a folder.
The home cabinet of a user is the default primary location for a new document (or
any other SysObject) a user creates. It is possible to specify a different location
programmatically by setting the default_folder property or linking the object to a
different location.
After you define a primary location for an object, it is not necessary to define the
location again each time you save the object.
Content can be a file or a block of data in memory. The method used to add the
content to the object depends on whether the content is a file or data block.
The first content file added to an object determines the primary format for the object.
The format is set and recorded in the a_content_type property of the object.
Thereafter, all content added to the object as primary content must have the same
format as that first primary content file.
Note: If you discover that the a_content_type property is set incorrectly for an
object, it is not necessary to re-add the content. You can check out the object,
reset the property, and save (or check in) the object.
After you create content, you can add more content by appending a new file to
the end of the object, or you can insert the file into the list.
The content can be a file or a block of data, but it must reside on the same
machine as the client application.
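For example, the following sketch creates a document with two primary content
pages (file paths and the format name are illustrative; session is a connected
IDfSession):
IDfSysObject doc = (IDfSysObject) session.newObject("dm_document");
doc.setObjectName("Chapter 1");
doc.setContentType("msw8");                   // primary format of the object
doc.setFile("/tmp/chapter1.doc");             // becomes page 0
doc.appendFile("/tmp/chapter1_appendix.doc"); // appended as page 1, same format
doc.save();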
Renditions are typically copies of the primary content in a different format. You can
add as many renditions of primary content as needed.
You cannot use DQL to add a file created on a Macintosh machine to an object. You
must use a DFC method. Older Macintosh-created files have two parts: a data fork
(the actual text of the file) and a resource fork. The DFC, in the IDfSysObject
interface, includes methods that allow you to specify both the content file and its
resource fork when adding content to a document.
This section provides an overview of the ways in which the storage location for a
content file is determined.
Content assignment policies let you fully automate assigning content to file stores
and content-addressed storage areas.
Content assignment policies can only assign content to file store storage areas or
content-addressed storage areas. Policies are enforced by DFC-based client
applications (5.2.5 SP2 and later), and are applied to all new content files, whether
created by a save or import into the repository or a checkin operation.
When a content file is saved to content-addressed storage, the metadata values are
stored first in the content object and then copied into the storage area. Only those
metadata fields that are defined in both the content object and the ca store object are
copied to the storage area.
In the content object, the properties that record the metadata are:
• content_attr_name
• content_attr_value
• content_attr_data_type
• content_attr_num_value
• content_attr_date_value
In the ca store object, the properties are:
• a_content_attr_name
This is a list of the metadata fields in the storage area to be set.
• a_retention_attr_name
This identifies the metadata field that contains the retention period value.
When setContentAttrs executes, the metadata name and value pairs are stored first
in the content object properties. Then, the plug-in library is called to copy them from
the content object to the storage system metadata fields. Only those fields that are
identified in both content_attr_name in the content object and in either
a_content_attr_name or a_retention_attr_name in the storage object are copied to the
storage area.
The value for the metadata field identified in a_retention_attr_name can be a date, a
number, or a string. For example, suppose the field name is “retain_date” and
content must be retained in storage until January 1, 2016. The setContentAttrs
parameter argument would include the following name and value pair:
'retain_date=DATE(01/01/2016)'
You can specify the date value using any valid input format that does not require a
pattern specification. Do not enclose the date value in single quotes.
For example, the following sets the retention period to 1 day (24 hours):
'retain_date=FLOAT(86400)'
The string value must be numeric characters that Documentum Server can interpret
as a number of seconds. If you include characters that cannot be translated to a
number of seconds, Documentum Server sets the retention period to 0 by default,
but does not report an error.
The setContentAttrs method must be executed after the content is added to the
SysObject and before the object is saved to the repository. The
SET_CONTENT_ATTRS and PUSH_CONTENT_ATTRS methods must be executed
after the object is saved to the repository.
When a content file is saved to S3-compatible storage, the metadata values are stored
first in the content object and then copied into the storage area.
In the content object, the properties that record the metadata are:
• content_attr_name
• content_attr_value
• content_attr_data_type
• content_attr_num_value
• content_attr_date_value
When the setContentAttrs method is executed, the metadata name and value pairs
are stored first in the content object properties. After the PUSH_CONTENT_ATTRS
method reads the content object attributes, it collects all user metadata available in
the corresponding content object and prefixes each metadata name with
“X-AMZ-metadata-”. Documentum Server then generates the PUT request containing
headers with “X-AMZ-metadata-*” and sends the request to the S3-compatible store.
The setContentAttrs method must be executed after the content is added to the
SysObject and before the object is saved to the repository. The
SET_CONTENT_ATTRS and PUSH_CONTENT_ATTRS methods must be executed
after the object is saved to the repository.
Each object of type SysObject or SysObject subtype has one ACL that controls access
to that object. The server automatically assigns a default ACL to a new SysObject if
you do not explicitly assign an ACL to the object when you create it. If a new object
is stored in a room (a secure area in a repository) and is governed by that room, the
ACL assigned to the object is the default ACL for that room.
The ACL associated with an object is identified by two properties of the SysObject:
acl_name and acl_domain. The acl_name is the name of the ACL and acl_domain
records the owner of the ACL.
The OpenText Documentum Server Administration and Configuration Guide contains
more information about:
– The implementation and use of the options for determining where content is
stored
– The behavior and implementation of content assignment policies and creating
them
– How the default storage algorithm behaves
– Configuring a storage area to require a retention period for content stored in
that area
Object-level permissions are defined in ACLs. Each SysObject has an associated ACL
object that defines the access permissions for that object. The entries in the ACL
define who can access the object and the operations allowed for those having access.
Users with Superuser privileges can always access a SysObject because a superuser
always has at least Read permission on SysObjects and has the ability to modify
ACLs.
If the object is under the control of a retention policy, users cannot overwrite the
content regardless of their permissions. Documents controlled by a retention policy
may only be versioned or copied. Additionally, some retention policies set
documents under their control as immutable. In that case, users can change only
some of the document attributes.
You cannot modify the content of objects that are included in a frozen
(unchangeable) snapshot or that have the r_immutable_flag attribute set to TRUE.
Similarly, most attributes of such objects are also unchangeable.
To access an object for modification, first retrieve it with one of the following
methods:
• A lock method
• A checkOut method
• A fetch method
These methods retrieve the object metadata from the repository. Retrieving the
object content requires a separate method. However, you must execute a lock,
checkOut, or fetch before retrieving the content files.
Checking out a document places a repository lock on the object. A repository lock
ensures that while you are working on a document, no other user can make changes
to that document. Checking out a document also offers you two alternatives for
saving the document when you are done. You need Version or Write permission to
check out a document.
Use a fetch method when you want to read but not change an object. The method
does not place either a repository or database lock on the object. Instead, the method
uses optimistic locking. Optimistic locking does not restrict access to the object, and
only guarantees that one user cannot overwrite the changes made by another.
Consequently, it is possible to fetch a document, make changes, and not be able to
save those changes. In a multiuser environment, it is generally best to use the fetch
method only to read documents or if the changes you want to make will take a very
short time.
To use fetch, you need at least Read permission to the document. With Write
permission, you can use a fetch method in combination with a save method to
change and save a document version.
After you have checked out or fetched the document, you can change the attributes
of the document object or add, replace, or remove primary content. To change the
object current primary content, retrieve the content file first.
In DFC, most attributes have a specific set method. For example, to set the subject
attribute of a document, you call the setSubject method. There is also a generic set
method that you can use to set any attribute.
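For example, both of the following calls set the subject attribute (doc is an
IDfSysObject reference and the value is illustrative):
doc.setSubject("Quarterly results");            // attribute-specific method
doc.setString("subject", "Quarterly results");  // generic equivalent
doc.save();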
When you add a value, you can append it to the end of the values in the repeating
property or you can replace an existing value. If you remove a value, all the values
at higher index positions within the property are adjusted to eliminate the space left
by the deleted value. For example, suppose a keywords property has 4 values:
keywords[0]=engineering
keywords[1]=productX
keywords[2]=metal
keywords[3]=piping
If you removed productX, the values for metal and piping are moved up and the
keywords property now contains the following:
keywords[0]=engineering
keywords[1]=metal
keywords[2]=piping
The page number must be the next number in the object sequence of page numbers.
Page numbers begin with zero and increment by one. For example, if a document
has three primary content files, they are numbered 0, 1, and 2. If you add another
primary content file, you must assign it page number 3.
If you fail to include a page number, the server assumes the default page number,
which is 0. Instead of adding the file to the existing content list, it replaces the
content file previously in the 0 position.
Whichever method you use, you must identify the page number of the file you want
to replace in the method call. For example, suppose you want to replace the current
table of contents file in a document referenced as mySysObject and the current table
of contents file is page number 2. The following call replaces that file in the object
“mySysObject”:
mySysObject.insertFile("toc_new",2)
You cannot remove a content page if the content has a rendition with the keep flag
set to true and the page is not the last remaining page in the document.
However, all objects that share the content must have the same value in their
a_content_type attributes. If an object to which you are binding the content has no
current primary content, the bindFile method sets the target document
a_content_type attribute to the format of the content file.
Regardless of how many objects share the content file, the file has one content object
in the repository. The documents that share the content file are recorded in the
parent_id attribute of the content object.
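A minimal sketch of sharing content with bindFile (newDoc and srcDoc are
illustrative IDfSysObject references; the arguments are the target page number, the
source object ID, and the source page number):
// Make page 0 of srcDoc also serve as page 0 of newDoc.
newDoc.bindFile(0, srcDoc.getObjectId(), 0);
newDoc.save();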
• checkin
• checkinEx
• save
• saveLock
The checkinEx method is specifically for use in applications. It has four arguments
an application can use for its specific needs. Refer to the Javadocs for details.
If the document has been signed using addESignature, using save to overwrite the
signed version invalidates the signatures and will prohibit the addition of signatures
on future versions.
Template ACLs are used to make applications, workflows, and lifecycles portable.
For example, an application that uses a template ACL could be used by a variety of
departments within an enterprise because the users or groups within the ACL
entries are not defined until the ACL is assigned to an actual document.
A custom ACL name is created by the server and always begins with dm_.
Generally, a custom ACL is only assigned to one object. However, a custom ACL can
be assigned to multiple objects.
If the object is moved out of the room, Documentum Server removes the default
room ACL and assigns a new ACL:
• If the user moving the object out of the room is the object owner, Documentum
Server assigns the default ACL defined in the repository configuration to the
object.
• If the user moving the object out of the room is not the object owner,
Documentum Server assigns the object owner default ACL to the object.
You must be the owner of the object, a superuser, or have Change Permit permission
to change the entries in an object's ACL.
If Documentum Server is installed with a Trusted Content Services license, an ACL
can also include the following kinds of entries:
• AccessRestriction or ExtendedRestriction
• RequiredGroup or RequiredGroupSet
• ApplicationPermit or ApplicationRestriction
When you remove user access or extended permissions, only the entries you
specify are removed.
If the user or group has access through another entry, the user or group retains that
access permission. For example, suppose janek has access as an individual and also
as a member of the group engr in a particular ACL. If you issue a revokePermit
method for janek against that ACL, you remove only janek's individual access. The
access level granted through the engr group is retained.
Similar to other users, applications can also work with remote objects. After an
application opens a session with a repository, it can work with remote objects by
opening a session with the remote repository or by working with the mirror or
replica object in the current repository that refers to the remote object. Mirror objects
and replica objects are implemented as reference links.
Mirror objects only include the original object attribute data. When the system
creates a mirror object, it does not copy the object content to the local repository.
Replicas are copies of an object. Replicas are generated by object replication jobs. A
replication job copies objects in one repository to another. The copies in the target
repository are called replicas.
A relation object identifies the two objects involved in the relationship and the type
of relationship. Relation objects also have some attributes that you can use to
manage and manipulate the relationship.
• The OpenText Documentum Server System Object Reference Guide has information
about relationships, including instructions for creating relationship types and
relationships between objects.
• “Managing translations” on page 148, describes how the system-defined
translation relationship can be used.
• “Annotation relationships” on page 149, describes annotations and how to work
with them.
The language_code attribute allows you to identify the language in which the content
of a document is written and the document's country of origin. Setting this attribute
will allow you to query for documents based on their language. For example, you
might want to find the German translation of a particular document or the original
of a Japanese translation.
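For example, the following DQL query finds the current versions of German
documents (a sketch: the language_code value shown is illustrative, because the
values stored depend on the conventions used at your site):
IDfQuery query = new DfQuery();
query.setDQL("SELECT r_object_id, object_name FROM dm_document "
    + "WHERE language_code = 'de' "
    + "AND ANY r_version_label = 'CURRENT'");
IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);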
When you define the child in the relationship, you can bind a specific version of the
child to the relationship or bind the child by version label. To bind a specific version,
you set the child_id property of the dm_relation object to the object ID of the child.
To bind by version label, you set the child_id attribute to the chronicle ID of the
version tree that contains the child, and the child_label to the version label of the
translation.
The chronicle ID is the object ID of the first version on the version tree. For example,
if you want the APPROVED version of the translation to always be associated with
the original, set child_id to the translation chronicle ID and child_label to
APPROVED.
• The OpenText Documentum Server System Object Reference Guide has more
information about translation relationships.
Annotations are implemented as note objects, which are a SysObject subtype. The
content file you associate with the note object contains the comments you want to
attach to the document. After the note object and content file are created and
associated with each other, you use the IDfNote.addNoteEx method to associate the
note with the document. A single document can have multiple annotations.
Conversely, a single annotation can be attached to multiple documents.
When you attach an annotation to a document, the server creates a relation object
that records and describes the relationship between the annotation and the
document. The relation object parent_id attribute contains the document object ID
and its child_id attribute contains the note object ID. The relation_name attribute
contains dm_annotation, which is the name of the relation type object that describes
the annotation relationship.
You can create, attach, detach, and delete annotations. The OpenText Documentum
Server Administration and Configuration Guide contains the instructions.
Note: The dm_clean utility automatically destroys note objects that are not
referenced by any relation object, that is, any that are not attached to at least
one object.
• Object replication:
If the replication mode is federated, then any annotations associated with a
replicated object are replicated also.
• The OpenText Documentum Server System Object Reference Guide has a complete
description of relation objects, relation type objects, and their attributes.
• The associated Javadocs have more information about the addNote and
removeNote methods in the IDfSysObject interface.
8.1 Overview
This section describes virtual documents, a feature supported by Documentum
Server that allows you to create documents with varying formats.
Users create virtual documents using the Virtual Document Manager, a graphical
user interface that allows them to build and modify virtual documents. However, if
you want to write an application that creates or modifies a virtual document with no
user interaction, you must use DFC.
The version of the child component is determined at the time the virtual document is
assembled. A virtual document is assembled when it is retrieved by a client, and
when a snapshot of the virtual document is created. The assembly is determined at
runtime by a binding algorithm governed by metadata set on the dmr_containment
objects.
For some content types, such as Microsoft Word files and XML files used in XML
applications, virtual documents are patched as they are retrieved to a client, and
flattened into a single document. In other cases, the individual components of the
virtual documents are retrieved as separate files.
8.1.2 Implementation
This section briefly describes how virtual documents are implemented within the
Documentum system.
The components of a virtual document are associated with the containing document
by containment objects. Containment objects contain information about the
components of a virtual document. Each time you add a component to a virtual
document, a containment object is created for that component. Containment objects
store the information that links a component to a virtual document. For components
that are themselves virtual documents, the objects also store information that the
server uses when assembling the containing document.
You can associate a particular version of a component with the virtual document or
you can associate the entire component version tree with the virtual document.
Binding the entire version tree to the virtual document allows you to select which
version is included at the time you assemble the document. This feature provides
flexibility, letting you assemble the document based on conditions specified at
assembly time.
The components of a virtual document are ordered within the document. By default,
the order is managed by the server. The server automatically assigns order numbers
when you add or insert a component.
If you bypass the automatic numbering provided by the server, you can use your
own numbers. The insertPart, updatePart, and removePart methods allow you to
specify order numbers. However, if you define order numbers, you must also
perform the related management operations. The server does not manage user-
defined ordering numbers.
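For example, the following DFC sketch assembles a virtual document and walks its
immediate components (rootId is an illustrative IDfId, session is a connected
IDfSession, and CURRENT is used as the late-binding label):
IDfSysObject root = (IDfSysObject) session.getObject(rootId);
// Assemble using the CURRENT label for late-bound nodes; do not follow snapshots.
IDfVirtualDocument vdoc = root.asVirtualDocument("CURRENT", false);
IDfVirtualDocumentNode rootNode = vdoc.getRootNode();
for (int i = 0; i < rootNode.getChildCount(); i++) {
    IDfVirtualDocumentNode child = rootNode.getChild(i);
    System.out.println(child.getSelectedObject().getObjectName());
}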
8.1.3 Versioning
You can version a virtual document and manage its versions just as you do a simple
document.
By default, Documentum Server does not allow you to remove an object from the
repository if the object belongs to a virtual document. This ensures that the
referential integrity of virtual documents is maintained. This behavior is controlled
by the compound_integrity property in the server config object of the server. By
default, this property is TRUE, which prohibits users from destroying any object
contained in a virtual document.
If you set this property to FALSE, users can destroy components of unfrozen virtual
documents. However, users can never destroy components of frozen virtual
documents, regardless of the setting of compound_integrity.
• Conditional assembly:
Assembling a virtual document selects a set of the document components for
publication or some other operation, such as viewing or copying. Conditional
assembly lets you identify which components to include. You can include all the
components or only some of them. If a component version tree is bound to the
virtual document, you can choose not only whether to include the component in
the document but also which version of the component to include.
If a selected component is also a virtual document, the component descendants
can also be included. Whether descendants are included is controlled by two
properties in the containment objects.
• Snapshots:
Snapshots provide a way of persistently storing the results of virtual document
assembly. The snapshot records the exact components of the virtual document at
the time the snapshot was created, using version-specific object identities to
represent each node.
Snapshots are stored in the repository as a set of assembly objects (dm_assembly)
associated with a dm_sysobject. Each assembly object in a snapshot represents
one node of the virtual document, and connects a parent document with a
specific version of a child document.
The following figure illustrates assembly relationships:
The connection between the parent and the components is defined in two properties
of containment objects: a_contain_type and a_contain_desc. DFC uses the
a_contain_type property to indicate whether the reference is an entity or link. It uses
the a_contain_desc to record the actual identification string for the child.
These two properties are also defined for the dm_assembly type, so applications can
correctly create and handle virtual document snapshots using the DFC.
To reference other documents linked to the parent document, you can use
relationships of type xml_link.
Virtual documents with XML content are managed by XML applications, which
define rules for handling and chunking the XML content.
• “Snapshots” on page 161, has information about creating and working with
snapshots.
• “Virtual document assembly and binding” on page 156, describes how early and
late binding work.
• In early binding, the binding label is set on the containment object when the node
is created and stored persistently. The binding label is stored in the version_label
property of the dmr_containment object.
• In late binding, the version of the node is determined at the time the virtual
document is assembled, using a “preferred version” or late binding label passed
at runtime. If the version_label property of the dmr_containment object is empty
or null, then the node is late bound.
The logic that controls the assembly of the virtual document at the time it is
retrieved is determined by settings on the containment objects. The following table
describes the binding logic:
The following diagram shows the decision process when assembling a virtual
document node.
8.3.1 use_node_ver_label
The use_node_ver_label property determines how the server selects late-bound
descendants of an early-bound component.
Late bound components that have no early bound parent or that have an early
bound parent with use_node_ver_label set to FALSE are chosen by the binding
conditions specified in the SELECT statement.
Figure 8-4 illustrates how use_node_ver_label works. In the figure, each component
is labeled as early or late bound. For the early bound components, the version label
specified when the component was added to the virtual document is shown.
Assume that all the components in the virtual document have use_node_ver_label
set to TRUE.
Component B is early bound; the specified version is the one carrying the approved
version label. Because Component B is early bound and use_node_ver_label is set to
TRUE, when the server determines which versions of the Component B late bound
descendants to include, it will choose the versions that have the approved symbolic
version label. In our sample virtual document, Component E is a late-bound
descendant of Component B. The server will pick the approved version of
Component E for inclusion in the virtual document.
Descending the hierarchy, when the server resolves the Component E late
bound descendant, Component F, it again chooses the version that carries the
approved version label. All late-bound descendant components are resolved using
the version label associated with the early-bound parent node until another early
bound component is encountered with use_node_ver_label set to TRUE.
Component C, although late bound, has no early bound parent. For this component,
the server uses the binding condition specified in the IN DOCUMENT clause to
determine which version to include. If the IN DOCUMENT clause does not include
a binding condition, the server chooses the version carrying the CURRENT label.
8.3.2 follow_assembly
The follow_assembly property determines whether the server selects component
descendants using the containment objects or a snapshot associated with the
component.
If you set follow_assembly to TRUE, the server selects component descendants from
the snapshot associated with the component. If follow_assembly is TRUE and a
component has a snapshot, the server ignores any binding conditions specified in
the SELECT statement or mandated by the use_node_ver_label property.
The copy_child property of the containment object determines whether a
component is copied or referenced when the containing virtual document is copied.
The property has three valid values:
• 0, which means that the copy or reference choice is made by the user or
application when the copy operation is requested
• 1, which directs the server to create a pointer or reference to the component
• 2, which directs the server to copy the component
Whether the component is copied or referenced, a new containment object for the
component linking the component to the new copy of the virtual document is
created.
Regardless of which option is used, when users open the new copy in the Virtual
Document Manager, all document components are visible and available for editing
or viewing, subject to user access permissions.
8.5 Snapshots
A snapshot is a record of the virtual document as it existed at the time you created
the snapshot. Snapshots are a useful shortcut if you often assemble a particular
subset of virtual document components. Creating a snapshot of that subset of
components lets you assemble the set quickly and easily.
Only one snapshot can be assigned to each version of a virtual document. If you
want to define more than one snapshot for a virtual document, you must assign the
additional snapshots to other documents created specifically for the purpose.
Any modification that affects a snapshot requires at least Version permission on the
virtual document for which the snapshot was defined.
You can add components that are not actually part of the virtual document to the
document snapshot. However, doing so does not add the component to the virtual
document in the repository. That is, the virtual document r_link_cnt property is not
incremented and a containment object is not created for the component.
To delete a single assembly object or several assembly objects, use a destroy method.
Do not use destroy to delete each object individually in an attempt to delete the
snapshot.
Issuing the freeze method automatically freezes the target virtual document.
Freezing the associated snapshot is optional. If the document has multiple
snapshots, only the snapshot actually associated with the virtual document itself can
be frozen. (The other snapshots, associated with simple documents, are not frozen.)
If you want to freeze only the snapshot, you must freeze both the virtual document
and the snapshot and then explicitly unfreeze the virtual document.
Users are allowed to modify any components of the virtual document that are not
part of the frozen snapshot. Although users cannot remove those components from
the document, they can change the component content files or properties.
• r_immutable_flag
This property indicates that the document is unchangeable.
• r_frozen_flag
This property indicates that the r_immutable_flag was set by a freeze method
(instead of a checkin method).
Freezing a snapshot sets the following properties for each component in the
snapshot:
• r_immutable_flag
• r_frzn_assembly_cnt
The r_frzn_assembly_cnt property contains a count of the number of frozen
snapshots that contain this component. If this property is greater than zero, you
cannot delete or modify the object.
Unfreezing a document resets the following properties:
• r_immutable_flag
If the r_immutable_flag was set by versioning prior to freezing the document,
then unfreezing the document does not set this property to FALSE. The
document remains unchangeable even though it is unfrozen.
• r_frozen_flag
If you choose to unfreeze the document snapshot, the server also sets the
r_has_frzn_assembly property to FALSE.
Unfreezing a snapshot resets the following properties for each component in the
snapshot:
• r_immutable_flag
This is set to FALSE unless it was set to TRUE by versioning prior to freezing the
snapshot. In such cases, unfreezing the snapshot does not reset this property.
• r_frzn_assembly_cnt
This property, which contains a count of the number of frozen snapshots that
contain this component, is decremented by 1.
9.1 Overview
A workflow is a sequence of activities that represents a business process, such as an
insurance claims procedure or an engineering development process. Workflows can
describe simple or complex business processes. Workflow activities can occur one
after another, with only one activity in progress at a time. A workflow can consist of
multiple activities all happening concurrently. A workflow might combine serial
and concurrent activity sequences. You can also create a cyclical workflow, in which
the completion of an activity restarts a previously completed activity.
9.1.1 Implementation
Workflows are implemented as two separate parts: a workflow definition and a
runtime instantiation of the definition.
When a user starts a workflow, the server uses the definition in the dm_process
object to create a runtime instance of the workflow. Runtime instances of a workflow
are stored in dm_workflow objects for the duration of the workflow. When an
activity starts, it is instantiated by setting properties in the workflow object. Running
activities may also generate work items and packages. Work items represent work to
be performed on the objects in the associated packages. Packages generally contain
one or more documents.
The following figure illustrates how the components of a workflow definition and
runtime instance work together.
Because a workflow is based on a stored definition, users can perform the business
process repeatedly, and the essential process is the same each time. Separating a
workflow definition from its runtime instantiation allows multiple workflows based
on the same definition to run concurrently.
For example, a typical business process for new documents has four steps: authoring
the document, reviewing it, revising it, and publishing the document. However, the
actual authors and reviewers of various documents will be different people. Rather
than creating a new workflow for each document with the authors' and reviewers'
names hard-coded into the workflow, create activity definitions for the basic steps
that use aliases for the authors' and reviewers' names, and put those definitions in one
workflow definition. Depending on how you design the workflow, the actual values
represented by the aliases can be chosen by the workflow supervisor when the
workflow is started or later, by the server when the containing activity is started.
The additional features supported are called out and described in the appropriate
sections. However, complete descriptions of their use and implementation are found
in the Business Process Manager documentation.
The following sections provide some basic information about the components of a
definition.
Note: Structured data elements and correlation sets for a workflow may only
be defined using Business Process Manager. Refer to that documentation for
more information about these features.
Each activity in a workflow definition is one of the following kinds:
• Initiate
Initiate activities link to a Begin activity. These activities record how a workflow
may be started. For example, a workflow might have two Initiate activities, one
that allows the workflow to be started manually from Webtop, and one that
allows the workflow to be started by submitting a form. Initiate activities may
only be linked to Begin activities. Initiate activities may only be defined for a
workflow using Process Builder.
• Begin
Begin activities start the workflow. A process definition must have at least one
beginning activity.
• Step
Step activities are the intermediate activities between the beginning and the end.
A process definition can have any number of Step activities.
• End
An End activity is the last activity in the workflow. A process definition can have
only one ending activity.
• Exception
An exception activity is associated with an automatic activity, to provide fault-
handling functionality for the activity. Each automatic activity can have one
exception activity.
You can use activity definitions more than once in a workflow definition. For
example, suppose you want all documents to receive two reviews during the
development cycle. You might design a workflow with the following activities:
Write, Review1, Revise, Review2, and Publish. The Review1 and Review2 activities
can be the same activity definition.
An activity that can be used more than once in a particular workflow is called a
repeatable activity. Whether an activity is repeatable is recorded in the activity's
definition. By default, activities are defined as repeatable.
In a process definition, the activities included in the definition are referenced by the
object IDs of the activity definitions. In a running workflow, activities are referenced
by the activity names specified in the process definition.
When you add an activity to a workflow definition, you must provide a name for the
activity that is unique among all activities in the workflow definition. The name you
give the activity in the process definition is stored in the r_act_name property. If the
activity is used only once in the workflow structure, you can use the name assigned
to the activity when the activity was defined (recorded in the activity's object_name
property). However, if the activity is used more than once in the workflow, you
must provide a unique name for each use.
9.2.1.2 Links
A link connects two activities in a workflow through their ports. A link connects an
output port of one activity to an input port of another activity. Think of a link as a
one-way bridge between two activities in a workflow.
An input port on a Begin activity participates in a link, but it can only connect to an
output port of an Initiate activity. Similarly, an output port of an Initiate activity
may only connect to an input port of a Begin activity.
The definition also includes a set of properties that define the ports for the activities,
the packages that each port can handle, and the structured data that is accessible to
the activity.
If the method executed by the activity is a Java method, you can configure the
activity so that the method is executed by the dm_bpm servlet. This is a Java servlet
dedicated to executing workflow methods. To configure the method to execute in
this servlet, you must set the a_special_app property of the method object to a
character string beginning with workflow. Additionally, the classfile of the Java
method must be in a location that is included in the classpath of the
dm_bpm_servlet.
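As a sketch, the a_special_app property can be set through the DFC as shown below. The method object name wf_review_method is a hypothetical placeholder, and any string suffix after "workflow" depends on your configuration.

    import com.documentum.fc.client.IDfPersistentObject;
    import com.documentum.fc.client.IDfSession;

    public static void routeMethodToBpmServlet(IDfSession session) throws Exception {
        // Fetch the workflow method object (hypothetical name) and flag it
        // for execution in the dm_bpm servlet.
        IDfPersistentObject method = session.getObjectByQualification(
            "dm_method where object_name = 'wf_review_method'");
        method.setString("a_special_app", "workflow");  // must begin with "workflow"
        method.save();
    }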
A work queue can be chosen as an activity performer only if the workflow definition
was created in Process Builder.
When you create a workflow definition in either WFM or Process Builder, you can
set a priority for each activity in the workflow. The priority value is recorded in the
process definition and is only applied to automatic tasks. Documentum Server
ignores the value for manual tasks.
The workflow agent (the internal server facility that controls execution of automatic
activities) uses the priority values in r_act_priority to determine the order of
execution for automatic activities. When an automatic activity is instantiated,
Documentum Server sends a notification to the workflow agent. In response, the
agent queries the repository to obtain information about the activities ready for
execution. The query returns the activities in priority order, highest to lowest.
In Process Builder, you can set up work queues to automate the distribution of
manual tasks to appropriate performers. For more information about work queues,
refer to the Process Builder documentation or online Help. Every work item on a
work queue is governed by a work queue policy object. The work queue policy
defines how the item is handled on the queue. Among other things, the policy
defines the priority of the work items on the queue. Every work item on a work
queue is assigned a priority value at runtime, when the work item is generated.
The priority assigned by a work queue policy does not affect or interact with a
priority value assigned to an activity in the process definition. Work queue policies
are applied to manual activities, because only manual activities can be placed on a
work queue. The priority values in the process definition are used by Documentum
Server only for execution of automatic activities.
For more information about how the workqueue policy is handled at runtime, refer
to Process Builder documentation.
A definition in the draft state has not been validated since it was created or last
modified. A definition in the validated state has passed the server's validation
checks, which ensure that the definition is correctly defined. A definition in the
installed state is ready for use in an active workflow.
You cannot start a workflow from a process definition that is in the draft or
validated state. The process definition must be in the installed state. Similarly, you
cannot successfully install a process definition unless the activities it references are
in the installed state.
Delegation allows the server or the activity performer to delegate the work to
another performer. If delegation is allowed, it can occur automatically or be forced
manually.
Automatic delegation occurs when the server checks the availability of an activity
performer or performers and determines that the person or persons is not available.
When this happens, the server automatically delegates the work to the user
identified in the user_delegation property of the original performer's user object.
If the automatic delegation fails, the server checks the control_flag property in
the definition of the activity that generated the work item. If control_flag is set
to 0, the work item is assigned to the workflow supervisor. If control_flag is set to 1, the work
item is reassigned to the original performer. The server does not attempt to delegate
the task again. In either case, the workflow supervisor receives a
DM_EVENT_WI_DELEGATE_F event.
9.2.2.4.1 Extension
Extension allows the activity performer to identify a second performer for the
activity after he or she completes the activity the first time. If extension is allowed,
when the original performers complete activity work items, they can identify a
second round of performers for the activity. The server will generate new work
items for the second round of performers. Only after the second round of performers
completes the work does the server evaluate the activity transition condition and
move to the next activity.
A work item can be extended only once. Programmatically, a work item is extended
by executing an IDfWorkItem.repeat method.
If you choose to define the performer during the design phase, Process Builder
allows you to either name the performer directly for many categories or define a
series of conditions and associated performers. At runtime, the workflow engine
determines which condition is satisfied and selects the performer defined as the
choice for that condition.
There are multiple options when choosing a performer category. Some options are
supported for both manual and automatic activities. Others are only valid choices
for manual activities.
The text of a task subject message is recorded in the task_subject property of the
activity definition. The text can be up to 255 characters and can contain references to
the following object types and properties:
The format of the object type and property references must be:
{object_type_name.property_name}
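For example, a task subject might be defined as follows (a hypothetical subject, assuming the document type is among the permitted references):

    Please review {dm_document.object_name}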
The server uses the following rules when resolving the string:
• The server does not place quotes around resolved object type and property
references.
• If the referenced property is a repeating property, the server retrieves all values,
separating them with commas.
• If the constructed string is longer than 512 characters, the server truncates the
string.
• If an object type and property reference contains an error, for example, if the
object type or property does not exist, the server does not resolve the reference.
The unresolved reference appears in the message.
The resolved string is stored in the task_subject property of the associated task
queue item object. After the server has created the queue item, the value of the
task_subject property in the queue item will not change, even if the values in any
referenced properties change.
The trigger condition is the minimum number of input ports that must have
accepted packages. For example, if an activity has three input ports, you may decide
that the activity can start when two of the three have accepted packages.
A trigger event is an event queued to the workflow. The event can be a system-
defined event, such as dm_checkin, or you can make up an event name, such as
promoted or released. However, because you cannot register a workflow to receive
event notifications, the event must be explicitly queued to the workflow using an
IDfWorkflow.queue method.
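An application might queue a user-defined trigger event as sketched below. The single-argument form of queue shown here is an assumption; check the exact signature in the Javadocs.

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfWorkflow;
    import com.documentum.fc.common.IDfId;

    public static void signalWorkflow(IDfSession session, IDfId workflowId)
            throws Exception {
        IDfWorkflow workflow = (IDfWorkflow) session.getObject(workflowId);
        // The event name must match the trigger_event value of the waiting activity.
        workflow.queue("promoted");  // assumed single-argument form; see the Javadocs
    }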
An activity's ports are defined as one of the following types:
• Input
An input port accepts a package as input for an activity. The package definitions
associated with an input port define what packages the activity accepts. Each
input port is connected through a link to an output port of a previous activity.
• Output
An output port sends a package from an activity to the next activity. The package
definitions associated with an output port define what packages the activity can
pass to the next activity or activities. Each output port is connected by a link to
an input port of a subsequent activity.
• Revert
A revert port is a special input port that accepts packages sent back from a
subsequent performer. A revert port is connected by a link to an output port of a
subsequent activity.
• Exception
An exception port is an output port that links an automatic activity to the input
port of an Exception activity. Exception ports do not participate in transitions.
The port is triggered only when the automatic activity fails. You must create the
workflow definition using Process Builder to define exception ports and
Exception activities.
Each port must have at least one associated package definition, and may have
multiple package definitions. When an activity is completed and a transition to the
next activity occurs, Documentum Server forwards to the next activity the package
or packages defined for the activated output port.
If the package you define is an XML file, you can identify a schema to be associated
with that file. If you later reference the package in an XPath expression in route case
conditions of a manual activity for an automatic transition, the schema is used to
validate the path. The XML file and the schema are associated using a relationship.
In Process Builder, you can define a package with no contents. This lets you design
workflows that allow an activity performer to designate the contents of the outgoing
package at the time he or she completes the activity.
If you are using Process Builder to create the workflow, a package definition is
global. When you define a package in Process Builder, the definition is assigned to
all input and output ports in all activities in the workflow. It is not necessary to
define packages for each link individually.
Note: Process Builder allows you to choose, for each activity, whether to make
the package visible or invisible to that activity. So, even though packages are
globally assigned, if a package is not needed for a particular activity, you can
make it invisible to that activity. When the activity starts, the package is
ignored; none of the generated tasks will reference that package.
The package definitions associated with two ports connected by a link must be
compatible.
The two ports referenced by a link must meet the following criteria to be considered
compatible:
If the port definitions are satisfied, the input port accepts the arriving packages by
changing the r_act_seqno, port_name, and package_name properties of those
packages.
In the figure, the output port named OUT1 of the source activity is linked to the
input port named IN1 of the destination activity. OUT1 contains a package
definition: Package A of type dm_document.
IN1 takes a similar package definition but with a different package name: Package B.
When the package is delivered from the port OUT1 to the port IN1 during execution,
the content of the package changes to reflect the transition:
In addition, at the destination activity, the server performs some bookkeeping tasks,
including:
Packages that are not needed to satisfy the trigger threshold are dropped. For
example, in the following figure, Activity C has two input ports: CI1, which accepts
packages P1 and P2, and CI2, which accepts packages P1 and P3. Assume that the
trigger threshold for Activity C is 1; that is, only one of the two input ports must
accept packages to start the activity.
Suppose Activity A completes and sends its packages to Activity C before Activity B
and that the input port, CI1 accepts the packages. In that case, the packages arriving
from Activity B are ignored.
• Transition types
An activity's transition type determines how output ports are chosen when the
activity completes:
– Prescribed
If an activity transition type is prescribed, the server delivers packages to all
the output ports. This is the default transition type.
– Manual
If the activity transition type is manual, the activity performers must indicate
at runtime which output ports receive packages.
– Automatic
If the activity transition type is automatic, you must define one or more route
cases for the transition.
• Warning timers
The warning timers automate delivery of advisory messages to workflow
supervisors and performers when an activity is not started within a given period
or is not completed within a given period.
Warning timers are defined when the activity is defined.
There are two types of warning timer:
– Pre-timers
A pre-timer sends email messages if an activity is not started within a given
time after the workflow starts.
– Post-timers
A post-timer sends messages when an activity is not completed within a
specified interval, counting from the start of the activity.
• Suspend timers
A suspend timer automates the resumption of a halted activity.
Suspend timers are not part of an activity definition. They are defined by a
method argument, at runtime, when an activity is halted with a suspension
interval.
If the control is enabled at the repository level, the setting in the individual
workflow definitions is ignored. If the control is not enabled at the repository level,
then you must decide whether to enable it for an individual workflow.
If you want to reference package component names in the task subject for any
activities in the workflow, do not enable package control. Use package control only
if you do not want to expose the object names of package components.
The validation verifies that both ports handle the same number of packages and
that the package definitions in the two ports are compatible.
The method checks all possible pairs of output/input package definitions in the two
ports. If any pair of packages are incompatible, the connectivity test fails.
“Package compatibility” on page 177, describes the rules for package compatibility.
A process or activity definition must be in the validated state before you install it.
You can install activity definitions individually, before you install the process
definition, or concurrently with the process definition. You cannot install a process
definition that contains uninstalled activities unless you install the activities
concurrently. If you install only the process, the activities must be in the installed
state.
Refer to the associated Javadocs for information about the methods that install
process and activity definitions.
• dm_workflow
Workflow objects represent an instance of a workflow definition.
• dmi_workitem
When an activity starts, the server creates one or more work items for the
activity.
• dmi_package
Package objects represent the packages that carry objects from activity to
activity.
• dmi_queue_item
The server uses a queue item object to direct a work item to an inbox.
• dmi_wf_timer
Timer objects represent the warning and suspend timers defined for workflow
activities.
A workflow object contains properties that describe the activities in the workflow.
These properties are set automatically, based on the workflow definition, when the
workflow object is created. They are repeating properties, and the values at the same
index position across the properties represent one activity instance.
The properties that make up the activity instance identify the activity, its current
state, its warning timer deadlines (if any), and a variety of other information. As the
workflow executes, the values in the activity instance properties change to reflect the
status of the activities at any given time in the execution.
Work items are instances of the dmi_workitem object type. A work item object
contains properties that identify the activity that generated the work item and the
user or method to perform the work, record the state of the work item, and record
information for management.
The majority of the properties are set automatically, when the server creates the
work item. A few are set at runtime. For example, if the activity performer executes
a Repeat method to give the activity to a second round of performers, the work item's
r_ext_performer property is set.
Work item objects are not directly visible to users. To direct a work item to an inbox,
the server uses a queue item object (dmi_queue_item). All work items for manual
activities have peer queue item objects.Work items for automatic activities do not
have peer queue item objects.
Users typically acquire a work item by selecting and opening the associated Inbox
task. Internally, an acquire method is executed when a user acquires a work item.
Acquiring a work item sets the work item state to acquired.
Users who have acquired a work item are called performers. The performer can
perform the required work or delegate the work to another user if the activity
definition allows delegation. The performer may also add or remove notes for the
objects on which the work is performed. If the user performs the work, at its
completion, the user can designate additional performers for the task if the activity
definition allows extension.
When a work item is finished, the performer indicates the completion through a
client interface. Only a work item performer, the workflow supervisor, or a user
with Sysadmin or superuser privileges can complete a work item.
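The acquire/complete cycle looks roughly like this in the DFC (a sketch, assuming an open session and a known work item ID):

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfWorkItem;
    import com.documentum.fc.common.IDfId;

    public static void processTask(IDfSession session, IDfId workItemId)
            throws Exception {
        IDfWorkItem item = (IDfWorkItem) session.getObject(workItemId);
        item.acquire();    // dormant -> acquired; the caller becomes the performer
        // ... perform the work on the objects in the associated packages ...
        item.complete();   // acquired -> finished
    }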
Changing a work item priority generates an event that can be audited. Changing a
priority value also changes the priority value recorded in any queue item object
associated with the work item.
• The OpenText Documentum Server System Object Reference Guide lists the properties
in the dmi_workitem and dmi_queue_item object types.
• “Signature requirement support” on page 97, describes the options for signing
off work items.
If a particular skill level is required to perform the task associated with the package,
that information is stored in a dmc_wf_package_skill object. A wf package skill
object identifies a skill level and a package. The objects are subtypes of dm_relation
and are related to the workflow, with the workflow as the parent in the relationship.
In this way, the information stays with the package for the life of the workflow.
A single instance of a package does not move from activity to activity. Instead, the
server manufactures new copies of the package for each activity when the package is
accepted and new copies when the package is sent on.
• Pre-timer that alerts the workflow supervisor if an activity has not started within
a designated number of hours after the workflow starts
• Post-timer that alerts the workflow supervisor if an activity has not completed
within a designated number of hours after the activity starts
• Suspend timer that automatically resumes the activity after a designated interval
when the activity is halted
If the activity is not started by the specified date and time, the timer is considered to
be expired. Each execution of the dm_WfmsTimer job finds all expired timers and
invokes the dm_bpm_timer method on each. Both the dm_WfmsTimer job and the
dm_bpm_timer method are implemented in Java. The job passes the module config
object ID to the method. The method uses the information in that object to determine
the action. The dm_bpm_timer method executes in the Java method server.
“Warning and suspend timers” on page 180, describes each kind of timer.
– Activating a job
– Starting auditing
• The Webtop documentation contains information about accessing and using the
Webtop Workflow Reporting tool.
9.7 Attachments
Attachments are objects that users attach to a running workflow or an uncompleted
work item. Typically, the objects support the work required by the workflow
activities. For example, if a workflow is handling an engineering proposal under
development, a user might attach a research paper supporting that proposal.
Attachments can be added at any point in a workflow and can be removed when
they are no longer needed. After an attachment is added, it is available to the
performers of all subsequent activities.
Users with Sysadmin or Superuser user privileges can act as the workflow
supervisor. In addition, superusers are treated like the creator of a workflow
and can change object properties, if necessary. However, messages that warn about
execution problems are sent only to the workflow supervisor, not to superusers.
When Documentum Server creates an automatic activity, the server notifies the
workflow agent. The master session is quiescent until it receives a notification from
Documentum Server or until a specified sleep interval expires. When the master
session receives a notification or the sleep interval expires, the master session wakes
up. It executes a batch update query to claim a set of automatic activities for
execution and then dispatches those activities to the execution queue. After all
claimed activities are dispatched, the master session goes to sleep until either
another notification arrives or the sleep interval expires again.
You can change the configuration of the workflow agent by changing the number of
worker sessions and changing the default sleep interval. By default, there are three
worker sessions and the sleep interval is 5 seconds. You can configure the agent with
up to 1000 worker sessions. There is no maximum value on the sleep interval.
You can also trace the operations of the workflow agent or disable the agent.
Disabling the workflow agent stops the execution of automatic activities.
• workflow states
A workflow's current state is recorded in the r_runtime_state property of the
dm_workflow object.
• activity states
• work item states
The state transitions are driven by API methods or by the workflow termination
criterion that determines whether a workflow is finished.
When a workflow supervisor first creates and saves a workflow object, the workflow
is in the dormant state. When the Execute method is issued to start the workflow,
the workflow state is changed to running.
Typically, a workflow spends its life in the running state, until either the server
determines that the workflow is finished or the workflow supervisor manually
terminates the workflow with the IDfWorkflow.abort method. If the workflow
terminates normally, its state is set to finished. If the workflow is manually
terminated with the abort method, its state is set to terminated.
A supervisor can halt a running workflow, which changes the workflow state to
halted. From a halted state, the workflow supervisor can restart, resume, or abort the
workflow.
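In application code, these transitions correspond to methods on the IDfWorkflow interface. The following sketch assumes an existing workflow object ID and an open session, both placeholders:

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfWorkflow;
    import com.documentum.fc.common.IDfId;

    public static void runAndAbort(IDfSession session, IDfId workflowId)
            throws Exception {
        IDfWorkflow workflow = (IDfWorkflow) session.getObject(workflowId);
        workflow.execute();  // dormant -> running
        // ... later, the supervisor can terminate the workflow manually:
        workflow.abort();    // running -> terminated
    }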
The following figure illustrates the activity instance states and the operations or
conditions that move the instance from one state to another.
When an activity instance is created, the instance is in the dormant state. The server
changes the activity instance to the active state when the activity's starting
condition is fulfilled, and then begins to resolve the activity performers and
generate work items.
If the server encounters any errors, it changes the activity instance state to failed and
sends a warning message to the workflow supervisor.
The supervisor can fix the problem and restart a failed activity instance. An
automatic activity instance that fails to execute can also change to the failed state,
and the supervisor or the application owner can retry the activity instance.
The activity instance remains active while work items are being performed. The
activity instance enters the finished state only when all its generated work items are
completed.
A running activity can be halted. Halting an activity sets its state to halted. By
default, only the workflow supervisor or a user with Sysadmin or Superuser
privileges can halt or resume an activity instance.
When the server generates a work item for a manual activity, it sets the work item
state to dormant and places the peer queue item in the performer's inbox. The work
item remains in the dormant state until the activity performer acquires it. Typically,
acquisition happens when the performer opens the associated inbox item. At that
time, the work item state is changed to acquired.
When the server generates a work item for an automatic activity, it sets the work
item state to dormant and places the activity on the queue for execution. The
application must issue the Acquire method to change the work item state to
acquired.
After the activity work is finished, the performer or the application must execute the
Complete method to mark the work item as complete. This changes the work item's
state to finished.
A work item can be moved manually to the paused state by the activity performer,
the workflow supervisor, or a user with Sysadmin or superuser privileges. A paused
work item requires a manual state change to return to the dormant or acquired state.
“Activity timers” on page 186, describes how suspension intervals are implemented.
Saving the new workflow object requires Relate permission on the process object
(the workflow definition) used as the workflow template. The execute method must
be issued by the workflow creator or supervisor or a user with Sysadmin or
superuser privileges. If the user is starting the workflow through a Documentum
client interface, such as Webtop, the user must also be defined as a Contributor.
This section describes how a typical workflow executes. It describes what happens
when a workflow is started and how execution proceeds from activity to activity. It
also describes how packages are handled and how a warning timer behaves during
workflow execution.
The following figure illustrates the general execution flow described in detail in the
text of this section.
When the execute method is issued, the server:
• Sets the r_pre_timer property for those activity instances that have pre-timers
defined
• Examines the starting condition of each Begin activity and, if the starting
condition is met:
– Sets the r_post_timer property for the activity instance if a post timer is
defined for the activity
– Resolves performers for the activity
– Generates the activity's work items
– Sets the activity's state to active
• Records the workflow's start time
After the execute method returns successfully, the workflow's execution has begun,
starting with the Begin activities.
For Step and End activities, execution begins when a package arrives at one of the
activity input ports. If the package is accepted, it triggers the server to evaluate the
activity starting condition.
Note: For all activities, if the port receiving the package is a revert port and the
package is accepted, the activity stops accepting further packages, and the
server ignores the starting condition and immediately begins resolving the
activity performers.
When an activity input port accepts a package, the server increments the activity
instance's r_trigger_input property in the workflow object and then compares the
value in r_trigger_input to the value in r_trigger_threshold.
If the two values are equal and no trigger event is required, the server considers that
the activity has satisfied its starting condition. If a trigger event is required, the
server will query the dmi_queue_item objects to determine whether the event
identified in r_trigger_event is queued. If the event is in the queue, then the starting
condition is satisfied.
If the two values are not equal, the server considers that the starting condition is not
satisfied.
The server also evaluates the starting condition each time an event is queued to the
workflow.
After a starting condition that includes an event is satisfied, the server removes the
event from the queue. If multiple activities use the same event as part of their
starting conditions, the event must be queued for each activity.
When the starting condition is satisfied, the server consolidates the accepted
packages if necessary and then resolves the performers and generates the work
items. If it is a manual activity, the server places the work item in the performer's
inbox. If it is an automatic activity, the server passes the performer name to the
application invoked for the activity.
For example, suppose that Activity C accepts four packages: two Package_typeA,
one Package_typeB, and one Package_typeC. Before generating the work items, the
server will consolidate the two Package_typeA package objects into one package,
represented by one package object. It does this by merging the components and any
notes attached to the components.
The consolidation order is based on the acceptance time of each package instance, as
recorded in the i_acceptance_date property of the package objects.
For manual activities, the server uses the value in the performer_type property in
conjunction with the performer_name property, if needed, to determine the activity
performer. After the performer is determined, the server generates the necessary
work items and peer queue items.
If the server cannot assign the work item to the selected performer because the
performer has workflow_disabled set to TRUE in his or her user object, the server
attempts to delegate the work item to the user listed in the user_delegation property
of the performer user object.
If automatic delegation fails, the server reassigns the work item based on the setting
of the control_flag property in the definition of the activity that generated the work
item.
Note: When a work item is generated for all members of a group, users in the
group who are workflow disabled do not receive the work item, nor is the item
assigned to their delegated users.
If the server cannot determine a performer, a warning is sent to the performer who
completed the previous work item and the current work item is assigned to the
supervisor.
For automatic activities, the server uses the value in the performer_type property in
conjunction with the performer_name property, if needed, to determine the activity
performer. The server passes the name of the selected performer to the invoked
program.
The server generates work items, but not peer queue items, for automatic
activities.
When the performer_name property contains an alias, the server resolves the alias
using a resolution algorithm determined by the value found in the activity's
resolve_type property.
The master session of the workflow agent controls the execution of automatic
activities. The workflow agent is an internal server facility.
After the server determines the activity performer and creates the work item, the
server notifies the workflow agent master session that an automatic activity is ready
for execution. The master session handles activities in batches. If the master session
is not currently processing a batch when the notification arrives, the session wakes
up and does the following:
Note: If the Documentum Server associated with the workflow agent should
fail while there are work items claimed but not processed, when the server is
restarted, the workflow agent will pick up the processing where it left off. If the
server cannot be restarted, you can use an administration method to recover
those work items for processing by another workflow agent.
When a workflow agent worker session takes an activity from the execution queue,
it retrieves the activity object from the repository and locks it. It also fetches some
related objects, such as the workflow. If any of the objects cannot be fetched or if the
fetched workflow is not running, the worker session sets a_wq_name to a message
string that specifies the problem and drops the task without processing it. Setting
a_wq_name also ensures that the task will not be picked up again.
After all the fetches succeed and after verifying the ready state of the activity, the
worker thread executes the method associated with the activity. The method is
always executed as the server regardless of the run_as_server property setting in the
method object.
Note: If the activity is already locked, the worker session assumes that another
workflow agent is executing the activity. The worker session simply skips the
activity and no error message is logged. This situation can occur in repositories
with multiple servers, each having its own workflow agent.
If an activity fails for any reason, the selected performer receives a notification.
The server passes the following information as arguments to the invoked method
program:
• Repository name
• User name (this is the selected performer)
• Login ticket
• Work item object ID
• Mode value
The mode value is set automatically by the server. The following table lists the
values for the mode parameter.
Value Meaning
0 Normal
1 Restart (previous execution failed)
2 Termination situation (re-execute because the workflow terminated before the
automatic activity's user program completed)
The method program can use the login ticket to connect back to the repository as the
selected performer. The work item object ID allows the program to query the
repository for information about the package associated with the activity and other
information it may need to perform its work.
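A method program might use those arguments as sketched below; the argument handling is simplified and the variable names are placeholders.

    import com.documentum.com.DfClientX;
    import com.documentum.fc.client.IDfClient;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSessionManager;
    import com.documentum.fc.client.IDfWorkItem;
    import com.documentum.fc.common.DfId;
    import com.documentum.fc.common.DfLoginInfo;

    public static void runActivity(String repository, String user, String ticket,
                                   String workItemId) throws Exception {
        IDfClient client = new DfClientX().getLocalClient();
        IDfSessionManager manager = client.newSessionManager();
        // Connect back to the repository as the selected performer, using the ticket.
        manager.setIdentity(repository, new DfLoginInfo(user, ticket));
        IDfSession session = manager.getSession(repository);
        try {
            IDfWorkItem item = (IDfWorkItem) session.getObject(new DfId(workItemId));
            item.acquire();
            // ... query the packages and perform the automated work here ...
            item.complete();
        } finally {
            manager.release(session);
        }
    }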
If the number of completed work items (recorded in r_complete_witem) equals the
total number of work items (recorded in r_total_workitem) and extension is not
enabled for the activity, the server considers that the activity is completed. If
extension is enabled, the server:
The following figure illustrates the decision process when the properties are equal.
If the number of completed work items is lower than the total number of work
items, the server then uses the values in transition_eval_cnt and, for activities with a
manual transition, the transition_flag property to determine whether to trigger a
transition. The transition_eval_cnt property specifies how many work items must be
completed to finish the activity. The transition_flag property defines how ports are
chosen for the transition. The following figure illustrates the decision process when
r_complete_witem and r_total_workitem are not equal.
If an activity transition is triggered before all the activity work items are completed,
Documentum Server marks the unfinished work items as pseudo-complete and
removes them from the inboxes of the performers. The server also sends an email
message to the performers to notify them that the work items have been removed.
After an activity is completed, the server selects the output ports based on the
transition type defined for the activity.
If the transition type is prescribed, the server delivers packages to all the output
ports.
If the transition type is manual, the user or application must designate the output
ports. The choices are passed to Documentum Server using one of the Setoutput
methods. The number of choices may be limited by the activity's definition. For
example, the activity definition may only allow a performer to choose two output
ports. How the selected ports are used is also specified in the activity's definition.
For example, if multiple ports are selected, the definition may require the server to
send packages to the selected revert ports and ignore the forward selections.
If the transition type is automatic, the route cases are evaluated to determine which
ports will receive packages. If the activity's r_condition_id property is set, the server
evaluates the route cases. If the activity's r_predicate_id property is set, the server
invokes the dm_bpm_transition method to evaluate the route cases. The
dm_bpm_transition method is a Java method that executes in the Java method
server. The server selects the ports associated with the first route case that returns a
TRUE value.
After the ports are determined, the server creates the needed package objects. If the
package creation is successful, the server considers that the activity is finished. At
this point, the cycle begins again with the start of the next activity's execution.
All process and activity definitions and workflow runtime objects must reside in a
single repository. A process cannot refer to an activity definition that resides in a
different repository. A user cannot execute a process that resides in a repository
different from the repository where the user is currently connected.
The following sequence describes how a task is handled when the activity performer
is a remote user whose home repository differs from the repository in which the
workflow is running:
1. A work item is generated and assigned to user A (a remote user). A peer queue
item is also generated and placed in the queue. Meanwhile, a mail message is
sent to user A.
2. The notification agent replicates the queue item in user A's home repository.
3. User A connects to the home repository and acquires the queue item. The user's
home inbox makes a connection to the source repository and fetches the peer
work item. The home inbox executes the Acquire method for the work item.
4. User A opens the work item to find out about arriving packages. The user's home
inbox executes a query that returns a list of package IDs. The inbox then fetches
all package objects and displays the package information.
5. When user A opens a package and wants to see the attached instructions, the
user's home inbox fetches the attached notes and contents from the source
repository and displays the instructions.
6. User A starts working on the document bound to the package. The user's home
inbox retrieves and checks out the document and contents from the source
repository. The inbox decides whether to create a reference that refers to the
bound document.
7. When user A is done with the package and wants to attach an instruction for
subsequent activity performers, the user's home inbox creates a note object in the
source repository and executes the addNote method to attach notes to the
package. The inbox then executes the Complete method for the work item and
cleans up objects that are no longer needed.
Tasks are items sent to a user that require the user to perform some action. Tasks are
usually assigned to a user as a result of a workflow. When a workflow activity starts,
Documentum Server determines who is performing the activity and assigns that
user the task. It is also possible to send tasks to users manually.
Events are specific actions on specific documents, folders, cabinets, or other objects.
For example, a checkin on a particular document is an event. Promoting or demoting
a document in a lifecycle is an event. Documentum Server supports a large number
of system-defined events, representing operations such as checkins, promotions, and
demotions.
Tasks and event notifications are stored in the repository as dmi_queue_item objects.
Tasks generated by workflows also have a dmi_workitem object in the repository.
Tasks are sent to the inbox automatically when they are generated. Event
notifications, in contrast, require registration: users can register to receive
notifications of system-defined events. When a system-defined event occurs,
Documentum Server sends an event notification automatically to any user who is
registered to receive that event.
• “Work item and queue item objects” on page 183, describes work items.
9.14 Inboxes
In the Documentum system, you have an electronic inbox that holds various items
that require your attention.
An inbox is a virtual container that holds tasks, event notifications, and other items
sent to users manually (using a queue method). For example, one of your employees
might place a vacation request in your inbox, or a coworker might ask you to review
a presentation. Each user in a repository has an inbox.
If you do not define home repositories for users, Documentum Server maintains an
inbox for each repository. Users must log in to each repository to view the inbox for
that repository. The inbox contains only those items generated within the repository.
All items that appear in an inbox are managed by the server as objects of type
dmi_queue_item. The properties of a queue item object contain information about
the queued item. For example, the sent_by property contains the name of the user
who sent the item and the date_sent property tells when it was sent.
The dmi_queue_item objects are persistent. They remain in the repository even after
the items they represent have been removed from an inbox, providing a persistent
record of completed tasks. Two properties, which are set when an item is removed
from an inbox, record the history of that removal. These properties are:
• dequeued_by contains the name of the user that removed the item from the
inbox.
• dequeued_date contains the date and time that the item was removed.
The OpenText Documentum Server System Object Reference Guide contains the reference
information for the dmi_queue_item object type.
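For illustration, an application can list the current user's inbox items with a DQL query run through the DFC. This is a sketch assuming an open session; the selected properties follow the descriptions above.

    import com.documentum.fc.client.DfQuery;
    import com.documentum.fc.client.IDfCollection;
    import com.documentum.fc.client.IDfQuery;
    import com.documentum.fc.client.IDfSession;

    public static void listInbox(IDfSession session) throws Exception {
        IDfQuery query = new DfQuery();
        // "name" on dmi_queue_item holds the recipient; USER is the current user.
        query.setDQL("SELECT r_object_id, task_subject, sent_by, date_sent "
            + "FROM dmi_queue_item WHERE name = USER ORDER BY date_sent DESC");
        IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (rows.next()) {
                System.out.println(rows.getString("task_subject"));
            }
        } finally {
            rows.close();
        }
    }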
When you queue an object, including an event name is optional. You may want to
include one, however, so that the application can use it. Documentum Server itself
ignores the event name.
When you queue a workflow-related event, the event value is not optional. The
value you assign to the parameter should match the value in the trigger_event
property for one of the workflow's activities.
Although you must assign a priority value to queued items and events, your
application can ignore the value or use it. For example, the application might read
the priorities and present the items to the user in priority order. The priority is
ignored by Documentum Server.
You can also include a message to the user receiving the item.
• dequeued_by
This property contains the name of the user who dequeued the item.
• dequeued_date
This property contains the date and time that the item was dequeued.
The event can be a specific action on a particular object or a specific action on objects
of a particular type. You can also register to receive notification for all actions on a
particular object.
For instance, you might want to know whenever a particular document is checked
out. Or you might want to know when any document is checked out. You might
want to know when any action (checkin, checkout, promotion, and so forth)
happens to a particular document. Each of these actions is an event, and you can
register to receive notification when the event occurs. After you have registered for
an event, the server continues to notify you when the event occurs until you remove
the registration.
Although you must assign a priority value to an event when you use the
registerEvent method, your application can ignore the value or use it. This argument
is provided as an easy way for your application to manipulate the event when the
event appears in your inbox. For example, the application might sort out events that
have a higher priority and present them first. The priority is ignored by
Documentum Server.
You cannot register another user for an event. Executing a registerEvent method
registers the current user for the specified event.
Only a user with Sysadmin or superuser privileges can remove another user's
registration for an event notification.
If you have more than one event defined for an object, the unRegister method only
removes the registration that corresponds to the combination of the object and the
event. Other event registrations for that object remain in place.
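A registration might look like the following sketch; the four-argument registerEvent form (message, event, priority, send-mail flag) and the unRegisterEvent name are assumptions to verify against the Javadocs, and the document ID is a placeholder.

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSysObject;
    import com.documentum.fc.common.IDfId;

    public static void watchCheckouts(IDfSession session, IDfId docId)
            throws Exception {
        IDfSysObject doc = (IDfSysObject) session.getObject(docId);
        // Register the current user for checkout notifications on this document.
        doc.registerEvent("Document was checked out", "dm_checkout", 0, false);
        // ... later, remove only this registration; others on the object remain:
        doc.unRegisterEvent("dm_checkout");
    }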
10.1 Overview
A lifecycle is one of the process management services provided with Documentum
Server. Lifecycles automate management of documents throughout their “lives” in
the repository.
A lifecycle is a set of states that define the stages in the life of an object. The states are
connected linearly. An object attached to a lifecycle progresses through the states as
it moves through its lifetime. A change from one state to another is governed by
business rules. The rules are implemented as requirements that the object must meet
to enter a state and actions to be performed on entering a state. Each state can also
have actions to be performed after entering a state.
Lifecycles contain:
• States
A lifecycle can be in one of a normal progression of states or in an exception
state.
• Attached objects
Any system object or subtype (except a lifecycle object itself) can have an
attached lifecycle.
• Entry and post entry actions
A lifecycle can trigger custom behavior in the repository when an object enters or
leaves a lifecycle state.
You use the Lifecycle Editor, accessed through Documentum Composer, to create a
lifecycle and design its states. You can then attach an object (for example, a
document) to the lifecycle. Entry criteria apply to each state defined in the lifecycle.
For example, a lifecycle for a Standard Operating Procedure (SOP) might have states
representing the draft, review, rewrite, approved, and obsolete states of an SOP life.
Before an SOP can move from the rewrite state to the approved state, business rules
might require the SOP to be signed off by a company vice president, and converted
to HTML format for publishing on a company web site. After the SOP enters the
approved state, an action can send an email message to employees informing them
the SOP is available.
If an exception state is defined for a normal state, when an object is in that normal
state, you can suspend the object's progress through the lifecycle by moving the object
to the exception state. Later, you can resume the lifecycle for the object by moving
the object out of the exception state back to the normal state or returning it to the
base state.
For example, if a document describes a legal process, you can create an exception
state to temporarily halt the lifecycle if the laws change. The document lifecycle
cannot resume until the document is updated to reflect the changes in the law.
Figure 10-1 shows an example of a lifecycle with exception states. Similar to normal
states, exception states have their own requirements and actions.
Which normal and exception states you include in a lifecycle depends on which
object types will be attached to the lifecycle. The states reflect the stages of life for
those particular objects. When you are designing a lifecycle, after you have
determined which objects you want the lifecycle to handle, decide what the life
states are for those objects. Then, decide whether any or all of those states require an
exception state.
When you attach an object to a lifecycle, Documentum Server:
• Stores the object ID of the lifecycle definition in the object's r_policy_id property
• Sets the r_alias_set_id to the object ID of the alias set associated with the lifecycle,
if any
• Executes any actions defined for the state
• Sets the r_current_state property to the number of the state
From this point, the object continues through the lifecycle. If the object was attached
to a normal state, it can move to the next normal state, to the previous normal state,
or to the exception state defined for the normal state. If the object was attached to an
exception state, it can move to the normal state associated with the exception state or
to the base state.
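For illustration, attaching a document to a lifecycle through the DFC might look like this sketch; the policy ID, state name, and alias-set scope are placeholders.

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSysObject;
    import com.documentum.fc.common.IDfId;

    public static void attach(IDfSession session, IDfId docId, IDfId policyId)
            throws Exception {
        IDfSysObject doc = (IDfSysObject) session.getObject(docId);
        // attachPolicy(policy, state, scope): attach at the named state;
        // an empty scope string accepts the lifecycle's default alias set, if any.
        doc.attachPolicy(policyId, "preliminary", "");
    }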
Each time the object is moved forward to a normal state or to an exception state,
Documentum Server evaluates the entry criteria for the target state. If the object
satisfies the criteria, the server performs the entry actions, and resets the
r_current_state property to the number of the target state. If the target state is an
exception state, Documentum Server also sets r_resume_state to identify the normal
state to which the object can be returned. After changing the state, the server
performs any post-entry actions defined for the target state. The actions can make
fundamental changes (such as changes in ownership, access control, location, or
properties) to an object as that object progresses through the lifecycle.
If an object is demoted back to the previous normal state, Documentum Server only
performs the actions associated with the state and resets the properties. It does not
evaluate the entry criteria.
The following figure shows an example of a simple lifecycle with three states:
preliminary, reviewed, and published. Each state has its own requirements and actions.
The preliminary state is the base state.
When an object is attached to a state, Documentum Server tests the entry criteria and
performs the actions on entry. If the entry criteria are not satisfied or the actions fail,
the object is not attached to the state.
10.1.2.2.1 Promotions
Promotion moves an object from one normal state to the next normal state. Users
who own an object or are superusers need only Write permission to promote the
object. Other users must have Write permission and Change State permission to
promote an object. If the user has only Change State permission, Documentum
Server will attempt to promote the object as the user defined in the
a_bpaction_run_as property in the docbase config object. In those instances, that
user must be either the owner or a superuser with Write permission or have Write
and Change State permission on the object.
A promotion only succeeds if the object satisfies any entry criteria and actions on
entry defined for the target state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not
enforce the entry criteria, but simply performs the actions associated with the
destination state and, on their completion, moves the object to the destination state.
You must own the lifecycle policy object or be a superuser to bypass entry criteria.
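A promotion through the DFC might look like the following sketch. The override argument corresponds to bypassing the entry criteria, and the test-only argument checks the transition without performing it; verify the exact parameter order in the Javadocs.

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSysObject;
    import com.documentum.fc.common.IDfId;

    public static void promoteToReviewed(IDfSession session, IDfId docId)
            throws Exception {
        IDfSysObject doc = (IDfSysObject) session.getObject(docId);
        // promote(state, override, fTestOnly): override=true bypasses entry
        // criteria and is honored only for the policy owner or a superuser.
        doc.promote("reviewed", false, false);
    }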
10.1.2.2.2 Demotions
Demotion moves an object from a normal state back to the previous normal state or
back to the base state. Demotions are only supported by states that are defined as
allowing demotions. The value of the allow_demote property for the state must be
TRUE. Additionally, to demote an object back to the base state, the return_to_base
property value must be TRUE for the current state.
Users who own an object or are superusers need only Write permission to demote
the object. Other users must have Write permission and Change State permission to
demote an object. If the user has only Change State permission, Documentum Server
will attempt to demote the object as the user defined in the a_bpaction_run_as
property in the docbase config object. In those instances, that user must be either the
owner or a superuser with Write permission or have Write and Change State
permission on the object.
If the object's current state is a normal state, the object can be demoted to either the
previous normal state or the base state. If the object's current state is an exception
state, the object can be demoted only to the base state. Demotions are accomplished
programmatically using one of the demote methods in the IDfSysObject interface.
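A demotion sketch follows. The two-argument demote form shown here (target state plus a return-to-base flag) is an assumption; check the IDfSysObject Javadocs for the available overloads.

    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSysObject;
    import com.documentum.fc.common.IDfId;

    public static void demoteToPrevious(IDfSession session, IDfId docId)
            throws Exception {
        IDfSysObject doc = (IDfSysObject) session.getObject(docId);
        // demote(state, toBase): assumed overload -- demote to the named previous
        // state; passing toBase=true would return the object to the base state.
        doc.demote("preliminary", false);
    }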
10.1.2.2.3 Suspensions
Suspension moves an object from the current normal state to that state's exception
state. Users who own an object or are superusers need only Write permission to
suspend the object. Other users must have Write permission and Change State
permission to suspend an object. If the user has only Change State permission,
Documentum Server will attempt to suspend the object as the user defined in the
a_bpaction_run_as property in the docbase config object. In those instances, that
user must be either the owner or a superuser with Write permission or have Write
and Change State permission on the object.
When an object is moved to an exception state, the server checks the state entry
criteria and executes the actions on entry. The criteria must be satisfied and the
actions completed to successfully move the object to the exception state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not
enforce the entry criteria, but simply performs the actions associated with the
destination state and, on their completion, moves the object to the destination state.
You must own the lifecycle policy object or be a superuser to bypass entry criteria.
10.1.2.2.4 Resumptions
Resumption moves an object from an exception state back to the normal state from
which it was suspended or back to the base state. Users who own an object or are
superusers need only Write permission to resume the object. Other users must have
Write permission and Change State permission to resume an object. If the user has
only Change State permission, Documentum Server will attempt to resume the
object as the user defined in the a_bpaction_run_as property in the docbase config
object. In those instances, that user must be either the owner or a superuser with
Write permission or have Write and Change State permission on the object.
Additionally, to resume an object back to the base state, the exception state must
have the return_to_base property set to TRUE.
When an object is resumed to either the normal state or the base state, the object
must satisfy the target state's entry criteria and actions on entry. The criteria must be
satisfied and the actions completed to successfully resume the object to the
destination state.
It is possible to bypass the entry criteria. If you choose to do that, the server does not
enforce the entry criteria, but simply performs the actions associated with the
destination state and, on their completion, moves the object to the destination state.
You must own the lifecycle policy object or be a superuser to bypass entry criteria.
The job scheduling properties are set to the specified date and time. The job runs as
the user who issued the initial method that created the job, unless the
a_bpaction_run_as property is set in the repository configuration object. If that is set,
the job runs as the user defined in that property.
The destination state for a scheduled change can be an exception state or any normal
state except the base state. You cannot schedule the same object for multiple state
transitions at the same time.
7. If any one of the preceding steps fails, abort the transaction and return.
By default, the transition methods run as the user who issued the state-change
method. To change the default, you must set the a_bpaction_run_as property in the
docbase config object. If the a_bpaction_run_as property is set in the docbase config
object, the actions associated with state changes are run as the user indicated in the
property. Setting a_bpaction_run_as ensures that users with the extended
permission Change State but without adequate access permissions to an object are
able to change an object state. If the property is not set, the actions are run as the
user who changed the state.
Note: If you set the timeout_default value for the bp_transition method to a
value greater than five minutes, it is recommended that you also set the
client_session_timeout key in the server.ini file to a value greater than that
of timeout_default. The default value for client_session_timeout is five
minutes. If a procedure run by bp_transition runs for more than five minutes
without making a call to Documentum Server, the client session times out.
Setting client_session_timeout to a value greater than the value specified in
timeout_default prevents that from happening.
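For example, a server.ini fragment along these lines raises the timeout; the value is assumed to be in minutes, consistent with the five-minute default described above, and should be verified against the server.ini reference for your release.

[SERVER_STARTUP]
# Allow procedures run by bp_transition up to 15 minutes between
# calls to Documentum Server before the client session times out.
client_session_timeout = 15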
• “Scheduled transitions” on page 216, contains more information about the jobs
that process transitions.
When you define a lifecycle, you specify which types of object it handles. Lifecycles
are a reflection of the states of life of particular objects. Consequently, when you
design a lifecycle, you are designing it with a particular object type or set of object
types in mind. The scope of object types attachable to a particular lifecycle can be as
broad or as narrow as needed. You can design a lifecycle to which any SysObject or
SysObject subtype can be attached. You can also create a lifecycle to which only a
specific subtype of dm_document can be attached.
If the lifecycle handles multiple types, the chosen object types must have the same
supertype or one of the chosen types must be the supertype for the other included
types.
The chosen object types are recorded internally in two properties: included_type and
include_subtypes. These are repeating properties. The included_type property
records, by name, the object types that can be attached to a lifecycle. The
include_subtypes property is a Boolean property that records whether subtypes of
the object types specified in included_type may be attached to the lifecycle. The
value at a given index position in include_subtypes is applied to the object type
identified at the corresponding position in included_type.
For example, suppose a lifecycle definition has the following values in those
properties:
included_type[0]=dm_sysobject
included_type[1]=dm_document
include_subtypes[0]=F
include_subtypes[1]=T
For this lifecycle, users can attach any object whose type is exactly dm_sysobject.
Because include_subtypes[0] is F, other SysObject subtypes are excluded; the only
subtype that can be attached is dm_document, together with its subtypes
(include_subtypes[1] is T).
The object type defined in the first index position (included_type[0]) is called the
primary object type for the lifecycle. Object types identified in the other index
positions in included_type must be subtypes of the primary object type.
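To see how these properties are populated for the lifecycles in a repository, you can run a DQL query such as the following sketch against the dm_policy type, which stores lifecycle definitions:

SELECT object_name, included_type, include_subtypes
FROM dm_policy
ORDER BY object_name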
You can define a default lifecycle for an object type. If an object type has a default
lifecycle, when users create an object of that type, they can attach the lifecycle to the
object without identifying the lifecycle specifically. Default lifecycles for object types
are defined in the data dictionary.
The actions associated with a state can be used to reset permissions as needed.
Programs written for the entry criteria, actions on entry, and post-entry actions for a
particular lifecycle must be either all Java programs or all Docbasic programs. You
cannot mix programs in the two languages in one lifecycle.
Note: In entry criteria, you can use Docbasic Boolean expressions instead of,
or in addition to, a program, regardless of the language used for the lifecycle's
actions and entry criteria programs.
It is possible to bypass entry criteria for a state. If you choose to do that, the server
does not enforce the entry criteria, but simply performs the actions associated with
the destination state and, on their completion, moves the object to the destination
state. Only the owner of the policy object that stores the lifecycle definition or a
superuser can bypass entry criteria.
State definitions include such information as the name of the state, a state type,
whether the state is a normal or exception state, entry criteria, and actions to
perform on objects in that state.
Validation of a lifecycle definition ensures that the lifecycle is correctly defined and
ready for use after it is installed. There are two system-defined validation programs:
dm_bp_validate_java and dm_bp_validate. The Java method is invoked by
Documentum Server for Java-based lifecycles. The other method is invoked for
Docbasic-based lifecycles. Each method performs a series of checks to verify that
the lifecycle definition is complete and correct.
Lifecycles that have passed validation can be installed. Only after installation can
users begin to attach objects to the lifecycle. A user must have Write permission on
the policy object to install a lifecycle.
When you design a lifecycle, you must make the following decisions:
• “Types of objects that can be attached to lifecycles” on page 219, describes how
the object types whose instances may be attached to a lifecycle are specified.
• “Lifecycle state definitions” on page 223, contains guidelines for defining
lifecycle states.
• “Lifecycles, alias sets, and aliases” on page 227, describes how alias sets are used
with lifecycles.
• “State types” on page 228, describes the purpose and use of state types.
• “State extensions” on page 228, describes the purpose and use of state extensions.
• State name
Each state must have a name that is unique within the policy. State names must
start with a letter, and cannot contain colons, periods, or commas. The state_name
property of the dm_policy object holds the names of the states.
• Attachability
Attachability is the state characteristic that determines whether users can attach
an object to the state. A lifecycle must have at least one normal state to which
users can attach objects. It is possible for all normal states in a lifecycle to allow
attachments. The number of states in a lifecycle that allow attachments depends
on the lifecycle.
Whether a state allows attachments is defined in the allow_attach property. This
is a Boolean property.
• Base state
The starting point in the lifecycle to which an object might be returned after a
particular action.
• Demotion
Demotion moves an object from one state in a lifecycle to a previous state. If an
object in a normal state is demoted, it moves to the previous normal state. If an
object in an exception state is demoted, it moves to the base state.
The ability to demote an object from a particular state is part of the state
definition. By default, states do not allow users to demote objects. Choosing to
allow users to demote objects from a particular state sets the allow_demote
property to TRUE for that state.
When an object is demoted, Documentum Server does not check the entry
criteria of the target state. However, Documentum Server does perform the
system and user-defined actions on entry and post-entry actions.
• Scheduled transitions
A scheduled transition moves an object from one state to another at a scheduled
date and time. Normal states can allow scheduled promotions to the next normal
state or a demotion to the base state. Exception states can allow a scheduled
resumption to a normal state or a demotion to the base state.
Whether a state can be scheduled to transition to another state is recorded in the
allow_schedule property. This property is set to TRUE if you decide that
transitions out of the state may be scheduled. It is set to FALSE if you do not
allow scheduled transitions for the state.
The setting of this property only affects whether objects can be moved out of a
particular state at scheduled times. It has no effect on whether objects can be
moved into a state at a scheduled time. For example, suppose StateA allows
scheduled transitions and StateB does not. Those settings mean that you can
promote an object from StateA to StateB on a scheduled date, but you cannot
demote an object from StateB to StateA on a scheduled date.
• Entry criteria
Entry criteria are the conditions an object must meet before the object can enter a
normal or exception state when promoted, suspended, or resumed. The entry
criteria are not evaluated if the action is a demotion. Each state may have its own
entry criteria.
If the lifecycle is Java-based, the entry criteria can be:
– A Java program
Access the lifecycle through the IDfLifecycleUserEntryCriteria interface (a
sketch appears after the Docbasic list below).
– One or more Boolean expressions
– Both Boolean expressions and a Java program
Java-based programs are stored in the repository as SBO modules and a jar file.
The OpenText Documentum Foundation Classes Development Guide contains
information about SBO modules.
If the lifecycle is Docbasic-based, the entry criteria can be:
– A Docbasic program
– One or more Boolean expressions
– Both Boolean expressions and a Docbasic program
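The following is a minimal sketch of a Java entry criteria class. The package of the interface and the userEntryCriteria method name and parameter list are assumptions to verify against the DFC Javadoc for your release; in practice the class is packaged and deployed as an SBO module, as described above.

import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;
import com.documentum.fc.lifecycle.IDfLifecycleUserEntryCriteria; // package assumed

// Hypothetical criterion: the document must have a non-empty title
// before it can enter the target state.
public class TitleEntryCriteria implements IDfLifecycleUserEntryCriteria {
    public boolean userEntryCriteria(IDfSysObject obj, String userName,
            String targetState) throws DfException {
        // Return true to let the object enter the state; false blocks
        // the transition.
        String title = obj.getTitle();
        return title != null && title.length() > 0;
    }
}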
A set of pre-defined actions on entry are available through the Lifecycle Editor. You
can choose one or more of those actions, define your own actions on entry, or both.
If you define your own actions on entry, the program must be a Java program if the
lifecycle is Java-based. Java-based actions on entry are stored in the repository as
SBO modules and a JAR file. If the lifecycle is Docbasic-based, the actions on entry
program must be a Docbasic program.
If both system-defined and user-defined actions on entry are specified for a state, the
server performs the system-defined actions first and then the user-defined actions.
An object can only enter the state when all actions on entry complete successfully.
• System-defined actions
A set of pre-defined actions on entry are available for use. When you create or
modify a lifecycle using Lifecycle Editor, you can choose one or more of these
actions.
• Java programs
A Java program used as an action on entry program must implement the
interface IDfLifecycleUserAction.
• Docbasic programs
Docbasic actions on entry programs are stored in the repository as
dm_procedure objects. The object IDs of the procedure objects are recorded in the
user_action_id property. This property is set internally when you identify the
programs while creating or modifying a lifecycle using Lifecycle Editor.
• Java-based
If the lifecycle is Java-based, the post-entry action programs must be Java
programs. The programs are stored in the repository as SBO modules and a JAR
file. A Java program used as a post-entry action program must implement the
IDfLifecycleUserPostProcessing interface.
• Docbasic-based
Docbasic post-entry actions are functions named PostProc and follow a specific
format.
Using aliases in actions makes it possible to design one lifecycle that can be
attached to many kinds of documents. You can substitute an alias for a user or
group name in an ACL and in certain properties of a SysObject. You can use an alias
in place of a path name in the Link and Unlink methods.
In template ACLs, aliases can take the place of the accessor name in one or more
access control entries. When the ACL is applied to an object, the server copies the
template, resolves the aliases in the copy to real names, and assigns the copy to the
object.
In the Link and Unlink methods, aliases can replace the folder path argument. When
the method is executed, the alias is resolved to a folder path and the object is linked
to or unlinked from the proper folder.
When the actions you define for a state assign a new ACL to an object or use the
Link or Unlink methods, using template ACLs and aliases in the folder path
arguments ensures that the ACL for an object or its linked locations are always
appropriate.
If you want to use a custom validation program, the program must be written in the
same language as that used for any entry criteria, actions on entry, or post-entry
actions written for the lifecycle. This means that if those programs are written in
Java, the custom validation program must be in Java also. If the programs are
written in Docbasic, the validation program must be in Docbasic also.
After you write the program, use Documentum Composer to add the custom
validation program to the lifecycle definition. You must own the lifecycle definition
(the policy object) or have at least Version permission on it to add a custom
validation program to the lifecycle.
Additionally, you can use template ACLs, which contain aliases, and aliases in
folder paths in actions defined for states to make the actions usable in a variety of
contexts.
If you define one or more alias sets for a lifecycle definition, those choices are
recorded in the policy object's alias_set_ids property.
Note: Documentum Server does not use information stored in state extensions.
Extensions are solely for use by client applications.
You can add a state extension to any state in a lifecycle. State extensions are stored in
the repository as objects. The objects are subtypes of the dm_state_extension type.
The dm_state_extension type is a subtype of the dm_relation type. Adding state
extension objects to a lifecycle creates a relationship between the extension objects
and the lifecycle.
If you want to use state extensions with a lifecycle, determine what information is
needed by the application for each state requiring an extension. When you create the
state extensions, you will define a dm_state_extension subtype that includes the
properties that store the information required by the application for the states. For
example, suppose you have an application called EngrApp that will handle
documents attached to LifecycleA. This lifecycle has two states, Review and
Approval, that require a list of users and a deadline date. The state extension
subtype for this lifecycle will have two defined properties: user_list and
deadline_date. Or perhaps the application needs a list of users for one state and a list
of possible formats for another. In that case, the properties defined for the state
extension subtypes will be user_list and format_list.
State extension objects are associated with particular states through the state_no
property, inherited from the dm_state_extension supertype.
State extensions must be created manually. The Lifecycle Editor does not support
creating state extensions.
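A hedged DQL sketch of creating a state extension subtype and one instance follows, using the EngrApp example above. The property names and sizes, and the relation bookkeeping values (relation_name, and parent_id, which is assumed to be the policy object's ID), are illustrative assumptions rather than verified syntax for your release.

CREATE TYPE "engr_state_extension"
  ("user_list" string(64) REPEATING, "deadline_date" date)
  WITH SUPERTYPE "dm_state_extension" PUBLISH

CREATE "engr_state_extension" OBJECT
  SET relation_name = 'dm_state_extension',
  SET parent_id = '46xxxxxxxxxxxxxx',
  SET state_no = 1,
  APPEND user_list = 'reviewer1',
  SET deadline_date = DATE('12/31/2025')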
If you create a lifecycle for use with objects that will be handled using DCM or
WCM, the lifecycle states must have state types that correspond to the state types
expected by the client. (Refer to the DCM and WCM documentation for the state
type names recognized by each.)
Custom applications can also use state types. Applications that handle and process
documents can examine the state_type property to determine the type of the object's
current state and then use the type name to determine the application behavior.
In addition to the repeating property that defines the state types in the policy object,
state types may also be recorded in the repository using dm_state_type objects. State
type objects have two properties: state_type_name and application_code. The
state_type_name identifies the state type and application_code identifies the
application that recognizes and uses that state type. You can create these objects for
use by custom applications. For example, installing DCM creates state type objects
for the state types recognized by DCM. DCM uses the objects to populate pick lists
displayed to users when users are creating lifecycles.
Use the Lifecycle Editor to assign state types to states and to create state type objects.
If you have subtyped the state type object type, you must use the API or DQL to
create instances of the subtype.
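For example, a DQL sketch that creates a state type object for a custom application (the values are illustrative):

CREATE dm_state_type OBJECT
  SET state_type_name = 'engr_review',
  SET application_code = 'EngrApp'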
This chapter describes how aliases are implemented and used. Aliases support
Documentum Server's process management services.
11.1 Overview
Aliases are placeholders for user names, group names, or folder paths. You can use
an alias in the following places:
Note: Aliases are not allowed as the r_accessor_name for ACL entries of
type RequiredGroup or RequiredGroupSet.
• In workflow activity definitions (dm_activity objects), in the performer_name
property
• In a Link or Unlink method, in the folder path argument
You can write applications or procedures that can be used and reused in many
situations because important information such as the owner of a document, a
workflow activity performer, or the user permissions in a document ACL is no
longer hard coded into the application. Instead, aliases are placeholders for these
values. The aliases are resolved to real user names, group names, or folder paths
when the application executes.
For example, suppose you write an application that creates a document, links it to a
folder, and then saves the document. If you use an alias for the document
owner_name and an alias for the folder path argument in the link method, you can
reuse this application in any context. The resulting document will have an owner
that is appropriate for the application context and be linked into the appropriate
folder also.
The application becomes even more flexible if you assign a template ACL to the
document. Template ACLs typically contain one or more aliases in place of accessor
names. When the template is assigned to an object, the server creates a copy of the
ACL, resolves the aliases in the copy to real user or group names, and assigns the
copy to the document.
Aliases are implemented as objects of type dm_alias_set. An alias set object defines
paired values of aliases and their corresponding real values. The values are stored in
the repeating properties alias_name and alias_value. The values at each index
position represent one alias and the corresponding real user name, group name, or
folder path.
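To inspect the alias/value pairs defined in a repository, you can run a DQL query such as this sketch:

SELECT object_name, alias_name, alias_value
FROM dm_alias_set
ORDER BY object_name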
An alias specification has the following format:
%[alias_set_name.]alias_name
alias_set_name identifies the alias set object that contains the specified alias name.
This value is the object_name of the alias set object. Including alias_set_name is
optional.
alias_name specifies one of the values in the alias_name property of the alias set
object.
To put an alias in a SysObject or activity definition, use a set method. To put an alias
in a template ACL, use a grant method. To include an alias in a link or unlink
method, substitute the alias specification for the folder path argument.
For example, suppose you have an alias set named engr_aliases that contains an
alias_name called engr_vp, which is mapped to the user name henryp. If you set the
owner_name property to %engr_aliases.engr_vp, then when the document is saved
to the repository, the server finds the alias set object named engr_aliases and
resolves the alias to the user name henryp.
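A minimal DFC sketch of this pattern follows; the engr_folder alias is hypothetical, and only the engr_aliases and engr_vp values come from the example above.

import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class AliasExample {
    public static IDfSysObject createOwnedDoc(IDfSession session)
            throws DfException {
        IDfSysObject doc =
            (IDfSysObject) session.newObject("dm_document");
        doc.setObjectName("status_report");
        // Alias specification %alias_set_name.alias_name; resolved to
        // the real user name (henryp) when the object is saved.
        doc.setOwnerName("%engr_aliases.engr_vp");
        // A folder alias (hypothetical) resolved to a folder path when
        // the link is made.
        doc.link("%engr_aliases.engr_folder");
        doc.save();
        return doc;
    }
}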
It is also valid to specify an alias name without including the alias set name. In such
cases, the server uses a predefined algorithm to search one or more alias scopes to
resolve the alias name.
If the alias specification includes an alias set name, the alias scope is the alias set
named in the alias specification. The server searches that alias set object for the
specified alias and its corresponding value.
If the alias specification does not include an alias set name, the server resolves the
alias by searching a predetermined, ordered series of scopes for an alias name
matching the alias name in the specification.
The scopes that are searched depend on where the alias is found.
• Workflow
• Session
• User performer of the previous work item
• The default group of the previous work item performer
• Server configuration
Within the workflow scope, the server searches in the alias set defined in the
workflow object r_alias_set_id property. This property is set when the workflow is
instantiated. The server copies the alias set specified in the perf_alias_set_id
property of the workflow definition (process object) and sets the r_alias_set_id
property in the workflow object to the object ID of the copy.
Within the session scope, the server searches the alias set object defined in the
session configuration alias_set property.
In the user performer scope, the server searches the alias set defined for the user
who performed the work item that started the activity containing the alias. A user
alias set is defined in the alias_set_id property of the user object.
In the group scope, the server searches the alias set defined for the default group of
the user who performed the work item that started the activity containing the alias.
The group alias set is identified in the alias_set_id property.
Within the server configuration scope, the search is conducted in the alias set
defined in the alias_set_id property of the server config object.
• Lifecycle
• Session
• User
• Group
• Server configuration
When the server searches within the lifecycle scope, it searches in the alias set
defined in the SysObject r_alias_set_id property. This property is set when the object
is attached to a lifecycle.
Within the session scope, the server searches the alias set object defined in the
session configuration alias_set property.
Within the user scope, the search is in the alias set object defined in the alias_set_id
property of the user object. The user is the user who initiated the action that caused
the alias resolution to occur. For example, suppose a document is promoted and
the actions of the target state assign a template ACL to the document. The user in
this case is either the user who promoted the document or, if the promotion was part
of an application, the user account under which the application runs.
In the group scope, the search is in the alias set object associated with the user
default group.
Within the system scope, the search is in the alias set object defined in the
alias_set_id property of the server config object.
The server uses the following algorithm to choose a default lifecycle scope:
• The server uses the alias set defined for the session scope if that alias set is listed
in the policy object alias_set_ids property.
• If the session scope alias set is not found, the server uses the alias set defined for
the user scope if it is in the alias_set_ids list.
• If the user scope alias set is not found, the server uses the alias set defined for the
user default group if that alias set is in the alias_set_ids list.
• If the default group scope alias set is not found, the server uses the alias set
defined for the system scope if that alias set is in the alias_set_ids list.
• If the system scope alias set is not found, the server uses the first alias set listed in
the alias_set_ids property.
If the policy object has no defined alias set objects in the alias_set_ids property, the
SysObject r_alias_set_id property is not set, and an error is generated.
If there is no alias_set_name defined in the alias specification, the server uses the
following algorithm to resolve the alias_name:
• The server first searches the alias set defined in the object r_alias_set_id property.
This is the lifecycle scope.
• If the alias is not found in the lifecycle scope or if r_alias_set_id is undefined, the
server looks next at the alias set object defined for the session scope.
• If the alias is not found in the session scope, the server looks at the alias set
defined for the user scope.
• If the alias is not found in the user scope, the server looks at the alias set defined
for the user default group scope.
• If the alias is not found in the user default group scope, the server looks at the
alias set defined for the system scope.
If the server does not find a match in any of the scopes, it returns an error.
If an alias set name is not defined in the alias specification, the server resolves the
alias name in the following manner:
• If the object to which the template is applied has an associated lifecycle, the
server resolves the alias using the alias set defined in the r_alias_set_id property
of the object. This alias set is the object lifecycle scope. If no match is found, the
server returns an error.
• If the object to which the template is applied does not have an attached lifecycle,
the server resolves the alias using the alias set defined for the session scope. This
is the alias set identified in the alias_set property of the session config object. If a
session scope alias set is defined, but no match is found, the server returns an
error.
• If the object has no attached lifecycle and there is no alias defined for the session
scope, the server resolves the alias using the alias set defined for the user scope.
This is the alias set identified in the alias_set_id property of the dm_user object
for the current user. If a user scope alias set is defined but no match is found, the
server returns an error.
• If the object has no attached lifecycle and there is no alias defined for the session
or user scope, the server resolves the alias using the alias set defined for the user
default group. If a group alias set is defined but no match is found, the system
returns an error.
• If the object has no attached lifecycle and there is no alias defined for the session,
user, or group scope, the server resolves the alias using the alias set defined for
the system scope. If a system scope alias set is defined but no match is found, the
system returns an error.
If no alias set is defined for any scope, Documentum Server returns an error stating
that an alias set was not found for the current user.
Resolving aliases when the workflow is started requires user interaction. The person
starting the workflow provides alias values for any unpaired alias names in the
workflow definition alias set.
When the workflow is instantiated, the server copies the alias set and attaches the
copy to the workflow object by setting the workflow r_alias_set_id property to the
object ID of the copy.
If the workflow scope is used at runtime to resolve aliases in the workflow activity
definitions, the scope will have alias values that are appropriate for the current
instance of the workflow.
• Default
• Package
• User
• Workflow
• Session
• User performer of the previous work item
• The default group of the previous work item performer
• Server configuration
The server examines the alias set defined in each scope until a match for the alias
name is found.
If the resolve_pkg_name property is not set, the search begins with the package
defined in r_package_name[0]. The components of that package are searched. If a
match is not found, the search continues with the components in the package
identified in r_package_name[1]. The search continues through the listed packages
until a match is found.
• The alias set defined for the user performer of the previous work item
• The alias set defined for the default group of the user performer of the previous
work item
The server first searches the alias set defined for the user. If a match is not found, the
server searches the alias set defined for the user default group.
When a matching alias is found for an activity performer, the server checks the
alias_category value of the alias. The alias_category must be one of the following:
• 1 (user)
• 2 (group)
• 3 (user or group)
If the alias_category is appropriate, the server next determines whether the alias
value is a user or group, depending on the setting in the activity performer_type
property. For example, if performer_type indicates that the designated performer is
a user, the server will validate that the alias value represents a user, not a group. If
the alias value matches the specified performer_type, the work item is created for
the activity.
If the server cannot resolve the alias to a valid performer, it:
• Generates a warning
• Posts a notification to the inbox of the workflow supervisor
• Assigns the work item to the supervisor
12.1 Overview
Internationalization refers to the ability of the Documentum Server to handle
communications and data transfer between itself and client applications in a variety
of code pages. This ability means that the Documentum Server does not make
assumptions based on a single language or locale. (A locale represents a specific
geographic region or language group.)
Documentum Server runs internally with the UTF-8 encoding of Unicode. The
Unicode Standard provides a unique number to identify every letter, number,
symbol, and character in every language. UTF-8 is a variable-width encoding of
Unicode, with each character represented by one to four bytes.
Documentum Server handles transcoding of data from national character sets (NCS)
to and from Unicode. A national character set is a character set used in a specific
region for a specific language. For example, the Shift-JIS and EUC-JP character sets
are used for representing Japanese characters. ISO-8859-1 (sometimes called Latin-1)
is used for representing English and European languages. Data can be transcoded
from a national character set to Unicode and back without data loss. Only common
data can be transcoded from one NCS to another. Characters that are present in one
NCS cannot be transcoded to an NCS in which they are not available.
Note: It is recommended that all XML content use one code page.
12.3 Metadata
The metadata values you can store depend on the code page of the underlying
database. The code page may be a national character set or it may be Unicode.
If the database was configured using a national character set as the code page, you
can store only characters allowed by that code page. For example, if the database
uses EUC-KR, you can store only characters that are in the EUC-KR code page as
metadata.
All code pages supported by the Documentum System include ASCII as a subset of
the code page. You can store ASCII metadata in databases using any supported code
page.
If you configured the database using Unicode, you can store metadata using
characters from any language. However, your client applications must be able to
read and write the metadata without corrupting it. For example, a client using the
ISO-8859-1 (Latin-1) code page internally cannot read and write Japanese metadata
correctly. Client applications that are Unicode-compliant can read and write data in
multiple languages without corrupting the metadata.
12.5 Constraints
A UTF-8 Unicode repository can store metadata from any language. However, if
your client applications are using incompatible code pages in national character sets,
they may not be able to handle metadata values set in a different code page. For
example, if an application using Shift-JIS or EUC-JP (the Japanese code pages) stores
objects in the repository and another application using ISO-8859-1 (Latin-1 code
page) retrieves that metadata, the values returned to the ISO-8859-1 application will
be corrupted because there are characters in the Japanese code page that are not
found in the Latin-1 code page.
During the server installation process, a number of configuration parameters are set
in the server.ini file and server config object that define the expected code page
for clients and the host machine operating system. These parameters are used by the
server in managing data, user authentication, and other functions.
The Documentum system has recommended locales for the server host and
recommended code pages for the server host and database.
• locale_name
The locale of the server host, as defined by the host operating system. The value
is determined programmatically and set during server installation. The
locale_name determines which data dictionary locale labels are served to clients
that do not specify their locale.
• default_client_codepage
The default code page used by clients connecting to the server. The value is
determined programmatically and set during server installation. It is strongly
recommended that you do not reset the value.
• server_os_codepage
The code page used by the server host. Documentum Server uses this code page
when it transcodes user credentials for authentication and the command-line
arguments of server methods. The value is determined programmatically and set
during server installation. It is strongly recommended that you do not reset the
value.
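You can confirm the values recorded for a server with a DQL query against the server config object, as in this sketch:

SELECT object_name, locale_name, default_client_codepage, server_os_codepage
FROM dm_server_config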
The following properties for internationalization are present in a client config object:
• dfc.codepage
The dfc.codepage property controls conversion of characters between the native
code page and UTF-8. The value is taken from the dfc.codepage key in the dfc.
properties file on the client host. This code page is the preferred code page for
repository sessions started using the DFC instance. The value of dfc.codepage
overrides the value of the default_client_codepage property in the server config
object.
The default value for this key is UTF-8.
• dfc.locale
This is the client preferred locale for repository sessions started by the DFC
instance.
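For example, the keys appear in dfc.properties on the client host in a form like the following (the values are illustrative):

dfc.codepage=UTF-8
dfc.locale=en_US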
The following properties for internationalization are set in the session config object:
• session_codepage
This property is obtained from the client config object dfc.codepage property. It
is the code page used by a client application connecting to the server from the
client host.
If needed, set the session_codepage property in the session config object early in
the session and do not reset it.
• session_locale
The locale of the repository session. The value is obtained from the dfc.locale
property of the client config object. If dfc.locale is not set, the default value is
determined programmatically from the locale of the client host machine.
DFC determines the code page and locale values for a session as follows:
1. Use the values supplied programmatically by an explicit set on the client config
object or session config object.
2. If the values are not explicitly set, examine the settings of the dfc.codepage and
dfc.locale keys in the dfc.properties file.
If they are not explicitly set in dfc.properties, the dfc.codepage and dfc.locale keys
are assigned default values. DFC derives the default values from the Java Virtual
Machine (JVM), which gets the defaults from the operating system.
Code page requirements apply to the values of the following properties:
• dm_user.user_name
• dm_user.user_os_name
• dm_user.user_db_name
• dm_user.user_address
• dm_group.group_name
The requirements for these properties differ depending on the site configuration. If
the repository is a standalone repository, the values in the properties must be
compatible with the code page defined in the server_os_codepage property of the
server config object. (A standalone repository does not participate in object
replication or a federation, and its users never access objects from remote
repositories.)
12.8.2 Lifecycles
The scripts that you use as actions in lifecycle states must contain only ASCII
characters.
12.8.3 Docbasic
Docbasic does not support Unicode. For all Docbasic server methods, the code page
in which the method is written and the code page of the session the method opens
must be the same and must both be the code page of the Documentum Server host
(the server_os_codepage).
Docbasic scripts that run on client machines must be in the code page of the client
operating system.
12.8.4 Federations
Federations are created to keep global users, groups, and external ACLs
synchronized among member repositories.
A federation can include repositories using different server operating system code
pages (server_os_codepage). In a mixed-code page federation, the following user
and group property values must use only ASCII characters:
• user_name
• user_os_name
• user_address
• group_address
In mixed code page environments, the source and target folder names must contain
only ASCII characters. The folders contained by the source folder are not required to
be named with only ASCII characters.