IyCnet CX Supervisor Database
ETC - UK
SYSMAC SCS V2.2 – DATABASE
CX-Supervisor
Distribution:
Internal
External
Purpose of Document
Describe the new Database features in SYSMAC-SCS Version 2.2 and provide guidelines
on how to use them.
Important Note: This document is at the DRAFT stage and some of the information
contained in it may be incorrect or subject to change.
SYSMAC-SCS Database
1. Database Overview
SYSMAC-SCS V2.2 Database facilities provide fast, transparent access to many different data sources via a database technology called ADO (ActiveX Data Objects).
A new Database editor has been created in the Development Workspace, enabling users to create Connections, Recordsets and Association objects in a familiar Tree View (hierarchical) format. This editor is unique in SCS, in that actual database connections can be tested online in the Development environment. The ability to connect online also has the added benefit of providing assistance in creating objects lower down in the hierarchy. The editor has been designed to enable a large proportion of the database functionality to be performed automatically (i.e. without the need for Script functions), although a comprehensive set of Database Script functions is also available.
1.1 Connection
A Connection contains the details used to access a data source. This can be via a Data Source Name (DSN), a filename or a directory.
1.2 Recordset
A table within a database; this can either be an actual Table or a table that has been generated as a consequence of running a Query.
1.5 OLE DB Provider
A Provider is something that, unsurprisingly, provides data. This isn't the physical source of the data, but the OLE DB mechanism that connects us to the physical source of the data. The provider may get the data directly from the data store, or may go through a third-party product, such as ODBC.
1.7 Schema
A schema returns information about a data source, such as the tables it contains and the columns in those tables (see Appendix C for the full list of schema types).
2.1 Configuring Database connections
Connections are added to the Workspace by using the right mouse-button context-sensitive menu option 'Add Connection…', which invokes the following dialog:
For convenience, a unique Connection name is created automatically; this can be changed to give a more meaningful description of the connection, if required. In the above dialog, an Access database file has been selected as a Data Source via the File Browse button. The checkbox 'Connect on Application Start-up' provides the option of automatically connecting to the Database when the Runtime application is started.
4. FoxPro Files (*.dbf)
If your data source is not in the above list, or you have your own drivers for a particular database, the 'Connection String' can be modified using this dialog (consult your database documentation for the required connection string).
If you make a mistake while editing the 'Connection String', the original string can be restored by selecting the 'Build Connection String' button. (A new connection string is built automatically each time the Data Source is changed.)
Connecting to CSV/Text files is slightly different from an actual Database connection, in that only the 'Directory' that contains the required files should be supplied as a Data Source (if a file is selected, the connection will fail). The actual file to be used is specified when configuring the Recordset, as explained in the section on configuring Recordsets.
e.g. If a collection of text or csv files is contained in the directory C:\Text, then a valid connection 'Data Source' is defined below:
Data Source=C:\Text\
A detailed description of the type of error that occurred (supplied by the underlying Data Provider) can be viewed by ensuring that the right-menu option 'Show Error' is checked. Whenever an error is generated by a Data Provider, a description of the error and its source will be displayed in a dialog. The 'Show Error' option is specific to each Connection.
Example: The following error was generated by the 'Jet Database Engine' (due to a typo in the Database name):
Schemas enable information about a Database to be obtained from a provider. There is a large number of schemas available, which are listed in detail in Appendix C. The most useful feature of schemas is the ability to obtain Table and Query names from the Database; in fact, schemas are used to populate the Combo boxes when working with 'live' connections.
A schema is configured by selecting the desired Connection and choosing the right menu option 'Add Schema…' to invoke the following dialog:
Name The default name has been modified from the automatically supplied name
‘Schema1’ to a more meaningful name.
Point The name of an array point which will hold the results of the schema request.
Type A list of available Schema Types. In this instance the 'Tables' schema has been chosen; this is probably the most useful schema, in that it lists all the Tables and Queries available in the Database.
Criteria This Combo box is automatically populated, dependent on the Schema Type chosen; in this case the criteria 'TABLE_NAME' has been selected.
Filter The Filter entry is only enabled when appropriate. In this case 'TABLE' has been chosen, which will ensure that only a list of available Tables is returned; to return a list of Queries instead, select the 'VIEW' option.
If the Connection is live, then a ‘Preview’ button will be enabled on the dialog, which allows
you to view the configured schema results.
A checkbox is provided, which gives the option of automatically loading the schema results into the associated point when the Connection is opened; if unchecked, the schema results can be obtained via script command when required.
The Schema ‘Type’, ‘Criteria’ and ‘Filter’ values can be modified at Runtime via the
DBSchema() function.
Transactions can be applied to a connection. A Transaction provides atomicity to a series of data changes to a recordset (or recordsets) within a connection, allowing all of the changes to take place at once, or not at all. Once a transaction has been started, any changes to a recordset attached to the connection are cached until the transaction is committed or cancelled. (Note: not all Providers support transactions.)
Transactions can be nested i.e. you can have transactions within transactions allowing you
to segment your work in a more controlled manner. Several DBExecute commands are
available for managing transactions. The following pseudo-code examples demonstrate the use of nested transactions:
Example 1
BeginTrans                ' start the outer transaction
  Do some work A
  BeginTrans              ' start a nested transaction
    Do some work B
  CommitTrans             ' commit the nested transaction (still pending the outer one)
RollbackTrans             ' rollback the outer transaction; work A and B are both discarded
Example 2
BeginTrans
  Do some work A
  BeginTrans
    Do some work B
    BeginTrans            ' transactions can be nested to several levels
      Do some work C
    RollbackTrans         ' discard work C only
  CommitTrans             ' keep work B (pending the outer transaction)
CommitTrans               ' commit the outer transaction; work A and B are saved, work C is not
Care should be taken to ensure that each 'BeginTrans' is matched with a 'CommitTrans' or 'RollbackTrans', to ensure that your work is saved or discarded as required (a DBExecute command 'TransCount' is available, which returns the number of pending transactions). If there are any pending transactions when a connection is closed, the user will be prompted to either 'Commit' or 'Rollback' these outstanding transactions.
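The transaction commands are issued at the Connection level via DBExecute(). A minimal sketch, assuming a connection named 'Northwind' and the command names described above:

DBExecute("Northwind", "BeginTrans")            ' start a transaction
' ... make changes to recordsets attached to the connection ...
If DBExecute("Northwind", "TransCount") > 0 then
    DBExecute("Northwind", "CommitTrans")       ' save the cached changes
endif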
2.2 Recordsets
The Recordset is the heart of the Database facility; it contains all of the columns and rows returned from a specific action. The Recordset is used to navigate a collection of records, and to update, add, delete or modify records.
Once a Connection has been added to the Workspace, the right menu option ‘Add
Recordset’ will be enabled. Selecting this option will invoke the following dialog:
As with the Connection dialog, a unique Recordset name will be automatically provided; this can be modified to provide a more meaningful name if required. There is a checkbox available to determine if the Recordset is to be automatically opened when the parent Connection is opened. If this is unchecked, the Recordset must be opened via script command.
Recordset Options
The Recordset option determines how a particular Recordset is created, there are three
choices as follows:
1. Table Name
This is the name of a Table in the Database. When the Recordset is opened, it will contain all the records from the selected Table.
2. Server Query
This is the name of a Query stored in the Database. This Query will be run when the Recordset is opened to produce the desired records. If the Query requires parameters, then values for each parameter can be supplied by adding the correct number of Parameter Associations to the Recordset. Parameter Associations are explained in more detail later in the document.
3. SQL Text
An edit box is displayed when this selection is made. This edit box can be used to type in a
free format SQL Text string, which will be executed when the Recordset is opened to
produce the desired records.
Running Queries
It is more efficient to run Server-side Queries, i.e. queries that are stored in the actual Database, because these queries are stored in a compiled/tested form, whereas SQL Text has to be compiled on the fly every time it is executed. However, Server-side queries are 'fixed' for the duration of a project, whereas SQL Text can be modified at runtime, enabling different Queries to be run for varying situations.
CSV/Text Connections
For Database connections all three of the above options are available, but for text/csv
connections only one option is available, namely ‘SQL Text’. For convenience, a facility is
provided for automatically building the required SQL Text for this type of connection. This
facility is invoked from the ‘Build SQL…’ button shown below:
This will bring up a dialog with a list of all valid files in the 'Directory' specified in the Parent Connection (ref: 2.1). After choosing a file and exiting from the 'Build SQL' dialog, the required SQL Text is built. Note: In the above example, the file 'Tables.txt' was chosen, but this will be written as Tables#txt in the SQL Text, as most Providers will not accept the '.' character, because it is used as a delimiter.
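A minimal sketch of the kind of SQL Text generated by this facility (assuming the file Tables.txt in the connection's directory):

select * from Tables#txt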
The above type of csv/text connection only supports 'read only' operations. CSV/Text files can only be updated by converting the data into an Excel spreadsheet and accessing the file via the ODBC DSN driver. This is achieved by carrying out the following steps:
1. Create a File DSN for the required csv/text file with the following options (see Appendix
B for details of how to create DSNs)
a) Select the Microsoft Excel Driver (*.xls). If this option does not exist, you will need to
install the Microsoft ODBC driver for Excel from the Excel setup.
2. Load the csv/text data into an Excel spreadsheet and create a table to access the data
by creating a Named Range as follows:
a) Highlight the row(s) and column(s) area where your data resides (including the
header row).
b) On the ‘Insert’ menu, point to ‘Name’, click ‘Define’ and enter a name for your range.
Note: The CSV/Text files must conform to the following rules in order to achieve a
successful connection.
• The first row of the range is assumed to contain the Column Headings. The Excel driver seems to be a bit finicky when it comes to column headings (Note: this is only the case when updating files; reading does not have the same restrictions), i.e. column headings cannot contain numbers or spaces, e.g. "Column1" or "Invoice Total" are invalid. I also found that simply using the word "Number" caused an error.
• Make sure that all the cells in a column are of the same data type. The Excel ODBC
driver cannot correctly interpret which data type the column should be if a column is not
of the same type, or you have types mixed between “text” and “general”.
• This type of querying and updating information in an Excel Spreadsheet does not
support multi-user concurrent access.
3. To make a connection to the newly created table, create a connection in the Workspace specifying the File DSN as its source. Add a Recordset to the connection and select the Named Range (which will appear in the list of available tables, if the connection is live) as the Table name. Records in this table can now be added or modified as with any other database table. (Note: If records are added to this type of table, the Named Range will increase in size accordingly.)
Recordset Locks
The lock option enables the Recordset to be opened in either read only or read/write mode; there are two types of read/write locks, as defined below:
Read Only The default lock is read only i.e. data cannot be changed.
Pessimistic Records are locked when you start editing, and the lock is released when Update() (or Cancel()) is called. There is no need to worry about a conflict with other users, but records can be locked for long periods of time, preventing other users from accessing the same records.
Optimistic Records are locked only when the Update() method is called, therefore changes can be made to records without creating a lock. Conflicts have to be catered for, because someone else might have changed the record between the time you started editing and the time you called Update().
Note: If the parent connection is open when a Recordset is added, then the Combo boxes for 'Table Name' and 'Server Query' will be automatically populated with valid entries for the selected Database. When the 'Add Recordset' dialog is closed, an attempt will be made to open the newly configured Recordset.
Field associations provide a means of connecting SCS Points with fields (i.e. columns of data) in a Recordset, thus enabling data transfers to be made between Points and Records.
Once a Recordset has been added to a Connection in the Workspace, the right menu option
‘Add Field…’ will be enabled. Selecting this option will invoke the following dialog:
As with the Connection and Recordset dialogs, a unique Field name will be automatically
provided. This can be modified to provide a more meaningful name if required.
Field The name of the Recordset field to be associated with the above point. If the parent Recordset is open, this Combo box will be automatically populated with all available fields.
Field Property The type of information from the field to be transferred. The following options are available: Value, Name, Type, Size and Add.
Note: The Name, Type and Size properties are fixed for all entries of the column, whereas the field Value depends on the current position of the Recordset.
A checkbox is available (default = unchecked), which provides the option of using a numeric index to identify a particular field instead of its name. This is useful if you want to configure generic field associations.
A checkbox is available (default = checked), which provides the option of transferring data
from the Recordset field to the associated point, when the Recordset is opened.
The 'Add' property is specifically designed to enable fields to be added together to create new records. Unlike the other field property types, 'Add' fields are not involved in any read operations (for this reason, the 'Automatically read on open' checkbox is disabled when this type is applied). When creating configurations to add new records, you will need to create an 'Add' association for every field required to create a valid record, i.e. primary keys, non-null values etc. need to be catered for. Ref: see DBAddNew() for more details.
An important concept to bear in mind when adding Field Associations is paging. Because the number of records in a Recordset can be quite large (many thousands in extreme cases), it is obvious that these records will not fit into an SCS Array point (max 1024 elements). Hence the need for a mechanism whereby records can be manipulated in 'bite-sized chunks'. Paging is supported by the Database script functions, thus enabling you to manipulate/navigate a page at a time. You can of course work with single-length records by simply associating single-length points with the required fields.
SCS adopts a mechanism of automatically determining a page size by using the number of elements in the Array Points used in Field Associations, i.e. if an array point with 10 elements is used, then a page size of 10 will be used.
Note1: In order for paging to work sensibly, you should ensure that all array points used in multiple field associations for a particular Recordset (paging is local to individual Recordsets) are of the same size. If arrays of differing length are used, the smallest array size will be adopted as the page size.
Note2: Paging only operates on Field Associations that have the Property Type 'Value' selected. This enables you to have Field Associations with a Property Type of 'Name' or 'Add' associated with single points in the same Recordset, without affecting the page size determined by the array points.
Note3: Paging is designed to operate at the Recordset level (the concept of levels is explained in the section on DB Script functions). If you perform a Read operation on a recordset that has paging in force, then a 'page' of records will be read into all the Field Associations connected to the Recordset. In contrast, performing a read operation at the Field level will override the page size and use the individual field's length.
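As a minimal sketch of paging (connection, recordset and array sizes assumed): with 10-element array points in the Field Associations, each Recordset-level read transfers 10 records, and the page navigation directions of DBMove() step through the records a page at a time:

DBOpen("Northwind")
DBRead("Northwind.Products")            ' reads page 1 (10 records) into the array points
DBMove("Northwind.Products", "NextPage")
DBRead("Northwind.Products")            ' reads the next 10 records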
The following example shows how parameter associations are configured in conjunction with
a Recordset to supply a parameter query with its required values.
A Recordset is configured with the option 'Server Query', and the Parameter Query 'Employees Sales By Country' has been selected. This Query takes two parameters, as follows:
1. Beginning Date
2. End Date
Both parameters are of type 'Date/Time'; this query will select all records that fall between the two dates supplied.
The first parameter association is configured, by selecting the right menu option ‘Add
Parameter…’ to invoke the following dialog:
1 Name The default name 'Param1' has been replaced with a more meaningful name that reflects the nature of the first parameter.
2 Index The default value of 1 has been used. The index is used to determine which parameter in the Query to associate the value with; it is automatically incremented for each parameter that is added to the Recordset.
3 Data Type The Data Type combo box will be populated with a selection of available data types. The correct data type for the parameter being configured must be selected, otherwise the Recordset will fail to open.
4 Value There are two ways to enter values: either a fixed value (as above) or by selecting a point to hold the value; the check box determines which option is in use.
In this case a point is used to hold the parameter value, thus enabling it to be modified during Runtime to produce different records.
Note: the index has been automatically incremented to 2. (Care must be taken if Parameter Associations are deleted, to ensure that the indexes are updated accordingly to match the correct parameter.) If this Query is being run in the Development environment, then the default value for 'txtEndDate' should be set to a suitable value, i.e. "11/93".
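A minimal sketch of how this works at Runtime (the point names are from the example above; the recordset level string is assumed): write new values to the points that hold the parameters, then re-open the Recordset so the Query runs again with the new values:

txtStartDate = "01/01/93"
txtEndDate = "30/11/93"
DBClose("Northwind.Sales")      ' close and re-open the Recordset so the
DBOpen("Northwind.Sales")       ' parameter query runs with the new values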
A comprehensive set of Database script functions is available. These functions provide a means of performing operations on configured connections/recordsets, such as Open, Close, Add, Delete, Modify, Navigation etc. All Database functions take a 'Connection Level String' as their first parameter; this string determines what level in the Database Tree Hierarchy is to be operated on. The string takes the form "Connection[.Recordset[.Field]]", as the examples below show:
Examples:
"Northwind" operates at the Connection level
"Northwind.Order Details" operates at the Recordset level
"Northwind.Order Details.OrderID" operates at the Field level
2.5.1.4 Variant = DBExecute(Level, Command, [Parameters…])
Support is provided to help you build the above Database functions into an SCS Script. This support is provided by a 'Database Function dialog', which is invoked from the Script Editor's Special menu option 'Database'. Selecting this option will display the above list of Script functions; choosing the required function will invoke the Database Function dialog configured to help you build the selected function. It also provides guidance in choosing the correct type of point to use, and automatically populates Combo boxes in a context-sensitive manner to help when multiple-choice parameter selections are required.
The following ‘Database Function dialog’ was invoked by selecting the DBMove function:
The DBMove function operates on Recordsets, therefore the 'Connection Level String' group consists of two Combo boxes, one for the relevant Connection and one for the Recordset level. These two Combo boxes will be automatically populated with the Connection and Recordset names already configured in the Database Workspace View. The DBMove function takes a second parameter, 'Direction'; a Combo box for this parameter is also populated with the available choices. On selecting OK, the resulting function string will be added to your script.
The next example shows the function 'DBRead', which can operate on both Recordset and Field levels. Note that the 'Field' Combo box has a checkbox beside it; this is used to indicate whether or not the Field level is in use.
Selecting OK for this configuration will result in the following string being added to your
script:
DBRead( "CSV.Results.Real1" )
Note: While the 'Database Function' dialog goes a long way to help you build the most popular options for each DB Function, it does not (at present) support every combination of every script function's parameters and return values. You will need to consult the detailed function description for a full list of all parameters available.
Example:
DBOpen(“Connection1”) ‘ Open a connection
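Levels below the Connection can also be opened and closed individually, and a connection would normally be closed when no longer required. A minimal sketch, assuming a Recordset named 'Recordset1' that is configured not to open automatically:

DBOpen("Connection1.Recordset1")        ' open an individual Recordset
' ... work with the records ...
DBClose("Connection1")                  ' close the connection and its Recordsets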
The DBMove function enables you to navigate around a Recordset by moving the position of the 'current record' in the Recordset. This function only operates at the Recordset level. When a Recordset is first opened, the first record is the current record; the position of the current record can be moved by supplying one of the following 'Direction' options:
1. "First"
2. "Last"
3. "Next"
4. "Previous"
5. "Position"
6. "FirstPage"
7. "LastPage"
8. "NextPage"
9. "PreviousPage"
10. "Page"
11. "Bookmark"
The 'Direction' options "Position", "Page" and "Bookmark" require the use of the third parameter, 'Position', to indicate the absolute position to move to. This parameter is of type 'Variant', because both "Position" and "Page" are Integer values, whereas "Bookmark" is a Real value. Note: bookmarks are returned from the function 'DBProperty'; they enable you to return to a 'marked' record, even after records have been added or deleted.
Notes: Some Providers do not support Move("Previous") operations, i.e. cursors are 'Forward-Only'. Some 'Forward-Only' providers do allow Move("First"), while some are strictly Forward-Only, i.e. the Recordset has to be re-queried (effectively a combined Close then Open operation) to reset the cursor back to the start of the Recordset. Some Providers that do support Move("Previous") do not support Move("Position"). However, in order to be consistent, SYSMAC-SCS ensures that operations 1 to 10 above will work for any connection to any provider (but you need to bear in mind, when designing applications that use 'Forward-Only' cursors, that there may be some 'long-winded' acrobatics being performed behind the scenes). See DBSupports() for details of how to check the type of cursor in force.
Example:
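A minimal sketch of typical navigation calls (the recordset name and the 'Bookmark' property name are assumptions):

DBMove("Northwind.Order Details", "First")              ' move to the first record
DBMove("Northwind.Order Details", "Next")               ' move to the next record
DBMove("Northwind.Order Details", "Position", 50)       ' move to record 50
bookmark = DBProperty("Northwind.Order Details", "Bookmark")    ' mark this record
DBMove("Northwind.Order Details", "Bookmark", bookmark)         ' return to it later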
Reads a set of records from a Recordset to the associated point(s). This function operates on both Recordset and Field levels. At the Field level, the associated column values from the Recordset's current position will be copied into the Point (number of elements copied = number of elements in the Point; no paging applies at the Field level). At the Recordset level, all the associated columns from the Recordset will be copied into the relevant Points (one page of values will be copied).
The second parameter of this function, 'ResetCursor', is optional; the default value is TRUE, i.e. the 'current record' is reset to the start of the records just read. This option is useful if the read operation is being combined with a subsequent Write operation, i.e. you can read in a set of records, make modifications to some of the fields, and then Write the changes back to the Recordset. A value of FALSE will leave the current position at the start of the next set of records; this option can be of benefit if the Provider only supports forward-moving cursors, or if you simply want to step through the records a page at a time.
Example:
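A minimal sketch of a read-modify-write cycle (the recordset and point names are assumptions):

DBRead("Northwind.Order Details")       ' read a page; cursor resets to the page start
Quantity[0] = Quantity[0] + 1           ' modify a value in an associated array point
DBWrite("Northwind.Order Details")      ' overwrite the same records with the changes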
2.5.3.5 Bool = DBWrite([String]Level, [[Bool]ResetCursor=TRUE])
Writes (or, more specifically, overwrites) a set of records into a Recordset from the associated point(s). This function operates on both Recordset and Field levels. At the Field level, the associated values from the point are written into the Recordset, starting at the current position (number of elements written = number of elements in the Point). At the Recordset level, all the associated values from the Points will be written into the Recordset, starting at the current record (one page of values will be written for each Point).
Note: This function will fail if the Recordset is opened with a Lock of 'Read Only'. Use Pessimistic or Optimistic locks as appropriate.
Example:
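A minimal sketch of a Field-level write (names assumed), leaving the cursor after the written records so the next set can be processed:

DBMove("Northwind.Order Details", "Position", 10)       ' move to record 10
DBWrite("Northwind.Order Details.Quantity", FALSE)      ' overwrite Quantity values from here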
Issues commands to read schema results or properties, or to set up new schema criteria. This function operates only at the Schema level. The following commands are available:
Note: If no ‘Page Number’ is supplied, this function will return page 1 when first called and
automatically return the next page of schemas for each subsequent call, cycling back to the
beginning when all pages have been returned.
"Set" takes three parameters, for the Schema 'Type', 'Criteria' and 'Filter'.
Examples:
DBSchema(“Invoice.Data Types”, “Set”, “Columns”, “COLUMN_NAME”, “”)
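A minimal sketch of reading the configured schema results a page at a time (the "Read" command name is an assumption; the schema level string is taken from the example above):

DBSchema("Invoice.Data Types", "Read")  ' returns page 1 on the first call
DBSchema("Invoice.Data Types", "Read")  ' returns the next page, cycling back at the end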
Returns TRUE if the specified level is in the requested state. This function operates on the Connection and Recordset levels. There are two states that can be requested, namely "Open" and "Closed".
Example:
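A minimal sketch (the connection name is assumed, as is the function name DBState(), chosen to be consistent with the naming of the other functions):

If NOT DBState("Northwind", "Open") then
    DBOpen("Northwind")         ' only open the connection if it is not already open
endif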
Returns TRUE if the specified Recordset supports the requested operation. This function operates on the Recordset level only. The following support operations can be queried:
"AddNew"
"Bookmark"
"Delete"
"Find"
"MovePrevious" (if FALSE, only 'Forward-Only' cursor movements are supported)
"Update" (Write is an update operation)
Example:
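A minimal sketch (recordset name assumed): check for backward movement before attempting it:

If DBSupports("Northwind.Order Details", "MovePrevious") then
    DBMove("Northwind.Order Details", "Previous")
endif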
Returns the requested property. This function operates on the Recordset and Field levels. The type of the value returned depends on the property requested; the 'Database Function' dialog provides assistance when adding a DBProperty function to a script, by filtering the correct type of point in the point browse dialog for the selected property. A description of the properties available and their respective types is shown below:
Note: The Recordset will only return valid properties when it is Open.
Recordset Properties
Field Properties
Example:
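A minimal sketch (the recordset name is assumed, and "RecordCount" is an assumed property name, based on the underlying ADO property):

count = DBProperty("Northwind.Order Details", "RecordCount")    ' number of records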
Adds a new field value to a record in a Recordset. Because records consist of multiple fields, the operation of adding a new record is multi-stage; this is achieved by combining the DBAddNew() function with the DBUpdate() function. The first stage in adding a new record to a Recordset is to add all the required fields in the record by calling DBAddNew() for each field, and then to call DBUpdate() to complete the operation. The DBAddNew function works on the Recordset and Field levels.
At the Recordset level the whole operation is automatic, i.e. all fields (with property type 'Add') associated with the Recordset are added via AddNew() and the DBUpdate() function is called for you. The Recordset must be configured to perform this type of operation, i.e. it will need to contain fields for any primary keys and 'non-null' values required to create a new record. Points associated with the 'Add' property can be array points, thus enabling you to add multiple records in one operation.
Result = DBAddNew(“Northwind.Order Details”)
At the Field level, you must call the DBAddNew() function for each field and then call the DBUpdate() function to complete the operation, as shown below:
DBAddNew(“Northwind.Order Details.OrderID”)
DBAddNew(“Northwind.Order Details.ProductID”)
DBAddNew(“Northwind.Order Details.Quantity”)
DBAddNew(“Northwind.Order Details.UnitPrice”)
DBUpdate(“Northwind.Order Details”)
At any stage before the DBUpdate() function is called, this operation may be cancelled by
calling the DBExecute() command “CancelUpdate”.
Note: Only Fields with a property type of ‘Value’ can be added to a Recordset. The value(s)
of the associated points at the time DBUpdate() is called will be used to create the record.
This function will fail if the Recordset is opened with a lock of ‘Read Only’.
Completes an AddNew() sequence of operations (see 2.5.3.10 above). This function works only at the Recordset level.
Deletes the specified number of records, starting from the current record position. This function works only at the Recordset level, and will fail if the Recordset is opened with a lock of 'Read Only'.
DBMove("Northwind.Order Details", "First")      ' move to the first record
DBDelete("Northwind.Order Details", 10)         ' delete the first 10 records
Returns the last error string generated by the Database provider. The second parameter is an optional flag: if set to TRUE, this function call will display the error message in a Message Box as well as returning the string; if FALSE, no message box will be displayed. This function works only at the Connection level.
Returns: [String] the provider's error message.
Example:
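A minimal sketch (connection name assumed): fetch the last provider error and show it in a message box:

errorText = DBGetLastError("Northwind", TRUE)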
Connection Level
Recordset Level
Examples:
' Find the next record satisfying the specified criteria, starting from the current position
DBExecute("Northwind.Order Details", "Find", "ProductName LIKE 'G*'")
Notes on Find: If a record is found, the Execute function returns the record number of the record found, or -1 if not found; also, if not found, the current record is set to EOF. Valid search criteria include: "ProductName LIKE 'G*'" (a wildcard search that finds all records where ProductName starts with 'G'), "Quantity = 5" and "Price >= 6.99". Only single search values are allowed; using multiple values with 'AND' or 'OR' will fail.
' Apply a filter to display only records with a company name 'United Package':
DBExecute("Northwind.Shippers", "Filter", "CompanyName = 'United Package'")
3 Database Logging
It is possible to log data directly to an existing Database table in a similar manner to ‘DLV’
logging. To achieve this, a new object called “DbLink” has been added to the “Logging”
view of the Workspace Editor. DbLinks are used in conjunction with a Database connection
to provide Database logging. DbLinks use the existing DLV functionality to provide a means
of specifying the required expression handling and timings, but instead of logging to a DLV
file, the data is routed to a Database connection, where it is added in the form of new
records to an existing Database table.
Note: The ADO interface used to access Data Sources does not provide any mechanism for creating Databases or Tables; therefore, unlike DLV logging, it is not possible to automatically create a data source. Unpopulated data sources for use in Database Logging must first be created using the specific software for your choice of data source, e.g. "Access".
1. Create an 'unpopulated' data source or 'template' for use in Database logging (or any existing table can be used).
2. Create a Connection to this data source in the Database view of the Workspace, with a Recordset for the template table and a Field association (with the 'Add' property) for each column to be logged.
3. A DbLink is created in the Workspace Logging view. This link is associated with the connection created in step 2, and configured with the required timings and expressions used to create data for the fields that make up a record in the template table.
A more detailed description of the three stages is given below, using an Access Database as a working example:
The following Access database, "DbLogging.mdb", has been created for use as a template in this example, with a single Table called "Results". This table contains a column representing each of the SCS data types, and a column to record the time each record is logged. The ID column is the Table's primary key, created automatically by Access.
3.2 Configuring a Connection
• Add a Connection to the "DbLogging.mdb" database in the Database view of the Workspace.
• Add a Recordset for the Results table, ensuring that a Read/Write lock is selected.
• Add a Field for each column of data that makes up a record, ensuring that the 'Field Property' is set to 'Add', as shown below:
The Connection is now complete and available for use in the logging process.
Move to the Workspace Editor's Logging View and select the right mouse menu option 'Add Db Link…'; this will invoke the following dialog:
A name for the link is automatically selected for you, and the Connection and Recordset combo boxes are populated with any Connections already configured in the Database View; for this example the "DbLogging" connection and "Results" recordset are selected. The 'Sample Rate' group enables you to determine what type of logging is required, either On Change or On Interval; the above example shows the default On Interval rate with a period of 30 seconds. A check box is available to determine whether or not the logging is automatically started when the Application starts. After selecting the 'OK' button, a new 'DBLink1' node will be created in the Logging View.
Click on the ‘DBLink’ node and select the right mouse menu option ‘Add Db Field’ to invoke
the following dialog:
A name for the 'DbField' is selected automatically for convenience (this can be modified to a more meaningful name), and the Field Link Combo box is populated with the Field names already present in the "DbLogging.Results" connection. The Expression edit box defines the point name or expression that will be logged for the selected field.
Note: The 'Dead Band' and 'Trigger on change of value' fields are disabled, as they are only relevant to 'On Change' logging, which is explained later in the document.
Adding a Db Field for each of the fields used in this example, where the expression for Time is "$Time" and the other values are simply simulated value changes, will produce a Logging View as shown below:
The working example is now complete. Running the application will cause the Connection
to the “DbLogging” database to be automatically opened, and every 30 seconds the
expressions for Bool, Integer, Real, Text and Time will be evaluated and a new record
created using these values and then added to the Results table as shown below:
Configuring for 'On Change' logging is simply a matter of selecting the Change radio button in the DbLink dialog. When adding a DbField to an On Change DbLink, the 'Dead Band' (if relevant for the Data Type) and 'Trigger on change of value' fields mentioned earlier are now enabled, as shown below:
The 'Trigger on change of value' option needs a bit of explaining, because it is important in determining when a new record is created.
For example, the "Results" table in the working example is made up of several fields, i.e. Time, Bool, Integer, Real and Text. The following sample script:
Integer = Integer + 1
Real = Real + 10.5
If Real > 100 then
Text = “High”
else
Text = “Low”
endif
makes changes to the values of some of the fields that make up a "Results" record. If a new record is created every time a single field changes, then executing the above script would cause 3 new records to be added to the Table, which is probably not the desired effect. It would be better to wait until all three values have changed, and then create one new record which includes all the above changes.
This is the function of the 'Trigger' checkbox. By selecting the 'Trigger' "On" for the fields 'Integer', 'Real' and 'Text', and 'Trigger' "Off" for the remaining fields, a new record will only be created when a change of value is received for each of the Fields with the Trigger "On"; therefore running the above script will cause one new record to be created.
However, some care needs to be taken when determining which fields should be used as Triggers. For example, in the above script, if Real > 100 and the value of Text is already set to "High", then Text will not actually change its value and consequently no Trigger will occur. In this event no loss of data occurs, because the changes in values for Integer and Real are stored and the fields are marked as pending; the stored data will be logged when a subsequent change occurs to either of these values. Care also needs to be taken if Dead Bands are applied to fields with the Trigger "On".
1. A Field with the Trigger "On" will be marked as pending when it receives a change.
2. A new record will not be created until either:
a) all fields with Trigger "On" have received a change event (i.e. are pending), or
b) a field marked as pending receives a subsequent change event.
3. All pending flags are cleared after a new record is created.
4. No action is taken if a Field with the Trigger "Off" receives a change in value.
5. Expressions for Fields with the Trigger "Off" will be evaluated at the time a new record is created.
Records can be "Time Stamped" or "Date Stamped" by adding a field with an expression of "$Time" or "$Date", as in the working example field 'Time'. Note: this field has its Trigger "Off", otherwise a new record would be created every second (see rule 2b); however, because of rule 5, the expression $Time is evaluated at the time the record is created.
The following Logging Script functions are available for use with DbLinks:
• OpenLogFile() Opens the associated Connection and Recordset ready for use.
• CloseLogFile() Closes the associated Connection and Recordset.
• StartLogging() Performs an OpenLogFile() operation if the Connection is closed and
enables logging to the Database.
• StopLogging() Disables logging to the Database.
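A minimal sketch of controlling a DbLink from a script (assuming these functions take the DbLink name as their parameter, as with DLV log files):

OpenLogFile("DBLink1")          ' open the associated Connection and Recordset
StartLogging("DBLink1")         ' begin adding records to the Results table
' ...
StopLogging("DBLink1")          ' stop logging
CloseLogFile("DBLink1")         ' close the Connection and Recordset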
4 Datashaping
This section describes how to use the ADO SHAPE command syntax to produce
hierarchical recordsets.
Hierarchical recordsets present an alternative to using JOIN syntax when accessing parent-child data. Hierarchical recordsets differ from a JOIN in that, with a JOIN, both the parent table fields and child table fields are represented in the same recordset. With a hierarchical recordset, the recordset contains only fields from the parent table. In addition, the recordset contains an extra field that represents the related child data, which you can assign to a second recordset variable and traverse.
Hierarchical recordsets are made available via the MSDataShape provider, which is implemented by the client cursor engine.
NOTE:
• By default, the child recordsets in the parent recordset will be called Chapter1,
Chapter2, etc., unless you use the optional [[AS] name] clause to name the child
recordset.
• You can nest the SHAPE command. The {parent-command} and/or {child-command}
can contain another SHAPE statement.
4.1 Examples
Some example shape commands using the Northwind Database are listed below.
SHAPE {select * from customers}
APPEND ({select * from orders} AS rsOrders
RELATE customerid TO customerid)
Which yields:
Customers.*
rsOrders
|
+----Orders.*
In the previous diagram, the parent recordset contains all fields from the Customers table
and a field called rsOrders. rsOrders provides a reference to the child recordset, and
contains all the fields from the Orders table. The other examples use a similar notation.
This sample illustrates a three-level hierarchy of customers, orders, and order details:
SHAPE {select * from customers}
APPEND ((SHAPE {select * from orders}
APPEND ({select * from [order details]} AS rsDetails
RELATE orderid TO orderid)) AS rsOrders
RELATE customerid TO customerid)
Which yields:
Customers.*
rsOrders
|
+----Orders.*
rsDetails
|
+----[Order Details].*
Hierarchy with Aggregate:
SHAPE {select * from orders}
APPEND ({select od.orderid,
od.unitprice * od.quantity as ExtendedPrice
from [order details] as od} AS rsDetails
RELATE orderid TO orderid),
SUM(rsDetails.ExtendedPrice) AS OrderTotal
Which yields:
Orders.*
rsDetails
|
+----orderid
ExtendedPrice
OrderTotal
Group Hierarchy:
Which yields:
rsOrders
|
+----cust_id
Orders.*
cust_id
NOTE: The inner SHAPE clause in this example is identical to the statement used in the
Hierarchy with Aggregate example.
SHAPE
(SHAPE {select customers.*, orders.orderid, orders.orderdate
from customers inner join orders
on customers.customerid = orders.customerid}
APPEND ({select od.orderid,
od.unitprice * od.quantity as ExtendedPrice
from [order details] as od} AS rsDetails
RELATE orderid TO orderid),
SUM(rsDetails.ExtendedPrice) AS OrderTotal) AS rsOrders
COMPUTE rsOrders,
SUM(rsOrders.OrderTotal) AS CustTotal,
ANY(rsOrders.contactname) AS Contact
BY customerid
Which yields:
rsOrders
|
+----Customers.*
orderid
orderdate
rsDetails
|
+----orderid
ExtendedPrice
OrderTotal
CustTotal
Contact
customerid
Multiple Groupings:
SHAPE
(SHAPE {select customers.*,
od.unitprice * od.quantity as ExtendedPrice
from (customers inner join orders
on customers.customerid = orders.customerid) inner join
[order details] as od on orders.orderid = od.orderid}
AS rsDetail
COMPUTE ANY(rsDetail.contactname) AS Contact,
ANY(rsDetail.region) AS Region,
SUM(rsDetail.ExtendedPrice) AS CustTotal,
rsDetail
BY customerid) AS rsCustSummary
COMPUTE rsCustSummary
BY Region
Which yields:
rsCustSummary
|
+-----Contact
Region
CustTotal
rsDetail
|
+----Customers.*
ExtendedPrice
customerid
Region
Grand Total:
SHAPE
(SHAPE {select customers.*,
od.unitprice * od.quantity as ExtendedPrice
from (customers inner join orders
on customers.customerid = orders.customerid) inner join
[order details] as od on orders.orderid = od.orderid}
AS rsDetail
COMPUTE ANY(rsDetail.contactname) AS Contact,
SUM(rsDetail.ExtendedPrice) AS CustTotal,
rsDetail
BY customerid) AS rsCustSummary
COMPUTE SUM(rsCustSummary.CustTotal) As GrandTotal,
rsCustSummary
Note the missing BY clause in the outer summary. This defines the Grand Total because the
parent rowset contains a single record with the grand total and a pointer to the child
recordset.
Which yields:
GrandTotal
rsCustSummary
|
+-----Contact
CustTotal
rsDetail
|
+----Customers.*
ExtendedPrice
customerid
SHAPE
(SHAPE {select * from customers}
APPEND ((SHAPE {select orders.*, year(orderdate) as OrderYear,
month(orderdate) as OrderMonth
from orders} AS rsOrders
COMPUTE rsOrders
BY customerid, OrderYear, OrderMonth)
RELATE customerid TO customerid) AS rsOrdByMonth )
AS rsCustomers
COMPUTE rsCustomers
BY region
Which yields:
rsCustomers
|
+-----customers.*
rsOrdByMonth
|
+-----rsOrders
|
+---- Orders.*
customerid
OrderYear
OrderMonth
region
The following working example demonstrates how to create a three-level hierarchy of 'customers', 'orders' and 'order details', using the three-level shape command shown in section 4.1 above.
First create a file DSN (called DataShape.dsn) specifying the Northwind database as the data source. Add a connection to the Database workspace and enter a connection string that uses the MSDataShape provider with this file DSN as its data source, for example:
Provider=MSDataShape;Data Provider=MSDASQL;FILEDSN=DataShape.dsn
Add a Recordset named 'Customers' to the DataShape connection and enter the above shape command in the SQL Text field as follows (note: the last line of the command is not visible in this screen shot):
After successfully adding a Datashape recordset, it is now possible to add a Child Recordset to the Recordset 'Customers' by selecting the right menu option 'Add Recordset', which will now be enabled; this will invoke the following dialog:
The name field is automatically filled in; this can be modified to a more suitable name, i.e. 'Orders', if required. If the connection is 'Live', a list of valid child recordset names will be entered in the Source ComboBox.
Note: Field associations can be added to Child recordsets in a similar manner to normal recordsets, and child recordsets can also be added to child recordsets, as shown in the Workspace View below, where 'Orders' is a child recordset of 'Customers' and 'Details' is a child recordset of 'Orders':
4.2.4 Working with Child Recordsets
Child recordsets can be accessed via Script command in a similar manner to normal recordsets.
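A minimal sketch, assuming that a child recordset is addressed by appending its name to the parent's level string (names taken from the working example):

DBRead("DataShape.Customers")                   ' read the current page of parent records
DBMove("DataShape.Customers.Orders", "Next")    ' navigate the child recordset
DBRead("DataShape.Customers.Orders")            ' read the related child records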
Note: child recordsets are not supported in the Database function dialog.
Appendix A
Data Providers and Drivers
The initial set of Providers and driver names supplied with ADO are:
Directory Services For resource data stores, such as Active Directory; this will become more important when NT5.0 is available. (Driver = ADSDSOObject)
ODBC Drivers For existing ODBC Drivers, this ensures that legacy data is not
omitted. (Driver = MSDASQL)
Oracle Native Oracle driver simplifies access to existing Oracle data stores.
(Driver = MSDAORA)
Data Shape For hierarchical recordsets, this allows the creation of master/detail
type recordsets, which allow drilling down into detailed data. (Driver =
MSDataShape).
Simple Provider For creating your own providers for simple text data. (Driver =
MSDAOSP)
The above is just the list of standard providers supplied by Microsoft; other vendors are actively creating their own.
Connection Strings
Listed below are some example connection strings for the above providers:
DSN “DSN=data_source_name”
FILEDSN “FILEDSN=filename.dsn”
Appendix B
Data Source Name (DSN)
A file data source name (DSN) stores information about a database connection in a file. The file has the extension .dsn and by default is stored in the "\Program Files\Common Files\ODBC\Data Sources" directory on the system drive. This type of file can be viewed with a suitable text editor, e.g. "Notepad".
From your Windows 'Control Panel', select the 'ODBC Data Sources' icon; this will bring up the ODBC Data Source Administrator dialog box. Any data sources already defined will be listed.
The following example creates a File DSN for the Microsoft Access Database
‘Northwind.mdb’:
Click on 'Add' to create a new data source; this will invoke the Create New Data Source dialog box, with a list of available drivers (only drivers that are installed on your machine will be shown).
Choose the driver for which you are adding a new data source (in this case the Access .mdb driver) and select 'Next'. You will then be prompted to name your Data Source; choose a suitable name (in this case 'My Example DSN') and select 'Next'. This will bring up the following dialog:
Selecting Finish will complete this part of the operation. You will then be prompted for details of the database you wish to connect to, via the following dialog:
Press 'Select…' to choose your required .mdb database file.
After selecting a database file, press OK on the 'Select Database' and 'ODBC Microsoft Access Setup' dialogs to complete the operation. A file named 'My Example DSN.dsn' will now exist in the Data Sources directory, which can be used to connect to the Northwind database. One advantage of using this method over specifying the full path of the database is that the DSN file name remains unchanged, while its contents can be re-configured to reflect any changes in directory or database file name etc.
Appendix C
Schemas
Schemas return information about the data source, such as information about the tables on
the server and the columns in the tables. A Schema uses a Schema Type and a Criteria to
determine the information to be returned. The Criteria argument is an array of values that
can be used to limit the results of a schema query. Each schema query has a different set of
parameters that it supports. The actual schemas are defined by the OLE DB specification.
The ones supported in ADO are listed in the Tables below.
Note: Providers are not required to support all of the OLE DB standard schema queries. Specifically, only 'Schema Tables', 'Schema Columns' and 'Schema Provider Types' are required by the OLE DB specification. However, the provider is not required to support the Criteria constraints listed below for those schema queries.
Schema Type                       Criteria
Schema Asserts                    CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
Schema Character Sets             CHARACTER_SET_CATALOG, CHARACTER_SET_SCHEMA, CHARACTER_SET_NAME
Schema Check Constraints          CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
Schema Collations                 COLLATION_CATALOG, COLLATION_SCHEMA, COLLATION_NAME
Schema Column Domain Usage        DOMAIN_CATALOG, DOMAIN_SCHEMA, DOMAIN_NAME, COLUMN_NAME
Schema Column Privileges          TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, GRANTOR, GRANTEE
Schema Columns                    TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
Schema Constraint Column Usage    TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
Schema Constraint Table Usage     TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
Schema Foreign Keys               PK_TABLE_CATALOG, PK_TABLE_SCHEMA, PK_TABLE_NAME, FK_TABLE_CATALOG, FK_TABLE_SCHEMA, FK_TABLE_NAME
Schema Indexes                    TABLE_CATALOG, TABLE_SCHEMA, INDEX_NAME, TYPE, TABLE_NAME
Schema Key Column Usage           CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME, TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
Schema Primary Keys               PK_TABLE_CATALOG, PK_TABLE_SCHEMA, PK_TABLE_NAME
Schema Procedure Columns          PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, COLUMN_NAME
Schema Procedure Parameters       PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, PARAMETER_NAME
Schema Procedures                 PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, PROCEDURE_TYPE
Schema Provider Types             DATA_TYPE, BEST_MATCH
Schema Referential Constraints    CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
Schema Schemata                   CATALOG_NAME, SCHEMA_NAME, SCHEMA_OWNER
Schema Statistics                 TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
Schema Table Constraints          CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME, TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_TYPE
Schema Table Privileges           TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, GRANTOR, GRANTEE
Schema Tables                     TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
Schema Translations               TRANSLATION_CATALOG, TRANSLATION_SCHEMA, TRANSLATION_NAME
Schema Usage Privileges           OBJECT_CATALOG, OBJECT_SCHEMA, OBJECT_NAME, OBJECT_TYPE, GRANTOR, GRANTEE
Schema View Column Usage          VIEW_CATALOG, VIEW_SCHEMA, VIEW_NAME
Schema View Table Usage           VIEW_CATALOG, VIEW_SCHEMA, VIEW_NAME
Schema Views                      TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
Appendix D
Appendix E
Extensible Markup Language (XML) is a text-based format that lets developers describe, deliver and exchange structured data between a range of applications. XML allows the identification, exchange and processing of data in a manner that is mutually understood, using custom formats for particular applications if needed.
XML resembles and complements HTML. XML describes data, such as city name,
temperature and barometric pressure, and HTML defines tags that describe how the data
should be displayed, such as with a bulleted list or a table. XML, however, allows
developers to define an unlimited set of tags, bringing great flexibility to authors, who can
decide which data to use and determine its appropriate standard or custom tags.
<EmployeeList>
<Entry>
<Employee>John Jones</Employee>
<Phone>555-1213</Phone>
<Type>Mobile</Type>
</Entry>
<Entry>
<Employee>Sally Mae</Employee>
<Phone>555-1217</Phone>
<Type>Business Fax</Type>
</Entry>
</EmployeeList>
You can use an application with a built-in XML parser, such as Microsoft® Internet Explorer 5, to view XML documents in the browser just as you would view HTML pages.