Niagara AX - Drivers Guide
Confidentiality Notice
The information contained in this document is confidential information of Tridium, Inc., a Delaware corporation (“Tridium”). Such
information, and the software described herein, is furnished under a license agreement and may be used only in accordance with
that agreement.
The information contained in this document is provided solely for use by Tridium employees, licensees, and system owners; and,
except as permitted under the below copyright notice, is not to be released to, or reproduced for, anyone else.
While every effort has been made to assure the accuracy of this document, Tridium is not responsible for damages of any kind,
including without limitation consequential damages, arising from the application of the information contained herein. Information
and specifications published here are current as of the date of this publication and are subject to change without notice. The latest
product specifications can be found by contacting our corporate headquarters, Richmond, Virginia.
Trademark Notice
BACnet and ASHRAE are registered trademarks of American Society of Heating, Refrigerating and Air-Conditioning Engineers.
Microsoft, Excel, Internet Explorer, Windows, Windows Vista, Windows Server, and SQL Server are registered trademarks of
Microsoft Corporation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Mozilla and Firefox are trademarks
of the Mozilla Foundation. Echelon, LON, LonMark, LonTalk, and LonWorks are registered trademarks of Echelon Corporation.
Tridium, JACE, Niagara Framework, NiagaraAX Framework, and Sedona Framework are registered trademarks, and Workbench,
WorkPlaceAX, and AXSupervisor, are trademarks of Tridium Inc. All other product names and services mentioned in this publication that are known to be trademarks, registered trademarks, or service marks are the property of their respective owners.
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Document Change Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
NiagaraAX-3.7u1
i
Drivers Guide
June 5, 2013
niagaraDriver-NiagaraScheduleDeviceExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraScheduleImportExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraStation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraStationFolder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraSysDefDeviceExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraSystemHistoryExport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–8
niagaraDriver-NiagaraSystemHistoryImport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-NiagaraTuningPolicy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-NiagaraTuningPolicyMap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-NiagaraUserDeviceExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-NiagaraVirtualDeviceExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-ProviderStation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-RoleManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraDriver-SyncTask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
Components in niagaraVirtual module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
niagaraVirtual-NiagaraVirtualBooleanPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualBooleanWritable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualCachePolicy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualComponent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualDeviceExt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualEnumPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualEnumWritable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualNumericPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–10
niagaraVirtual-NiagaraVirtualNumericWritable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
niagaraVirtual-NiagaraVirtualStringPoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
niagaraVirtual-NiagaraVirtualStringWritable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
Components in serial module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
serial-SerialHelper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
Components in tunnel module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
tunnel-TunnelService . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–11
tunnel-SerialTunnel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–12
tunnel-TunnelConnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–12
Preface
Document Change Log
CHAPTER 1
Driver architecture
In any NiagaraAX station, one or more driver networks are used to fetch and model real-time data values.
Real-time values are modeled with proxy points, lower-tier components in that driver’s architecture. In
addition (depending on the driver and devices), other data items “native” to devices may also be integrated into
the station, such as schedules, alarms, and data logs (histories).
To support proxy points and other modeled data, the station must have that driver’s network architecture.
The following main topics apply to common driver architecture:
• What are networks?
• About Network architecture
• About the Driver Manager
• Common network components
• About the Device Manager
• Common device components
• Virtual gateway and components
• Types of device extensions
• About the Point Manager
• About Histories extension views
• About Schedules extension views
About Network architecture
Network component hierarchy
Note: By default, most stations include a NiagaraNetwork. It may be used to model data in other NiagaraAX
stations. Also, it contains the NiagaraFoxService required for Workbench-to-station communications. For
more details, see “Niagara Network” on page 2-1.
Note: By convention, you should keep all driver networks located under the station’s Drivers container—it is a
special component with a view that provides some utility. See “About the Driver Manager” on page 1-4 for
more details.
To simplify driver modeling, the New Station wizard automatically creates the necessary “Drivers”
container, complete with a NiagaraNetwork (and its required component slots).
• If engineering a JACE station, you invariably add additional driver networks, opening the driver’s
palette and copying the network-level component into Drivers. Or, you can use the New button in
the Driver Manager. Examples are a LonNetwork or BacnetNetwork.
• If engineering a Supervisor (PC) station, the NiagaraNetwork may be the only one needed. Optionally,
you may add a “database” network (providing the PC host is licensed for it). And, if a PC host is
licensed as a direct “integration Supervisor” (e.g. BACnet Supervisor), you may add a driver network
(e.g. BacnetNetwork) directly into its Drivers container.
Note: Regardless of host platform, in most cases the network component is the only item you need to manually
copy from that driver’s palette. After that, you use “manager views” to add child components like devices
and proxy points, even if programming offline.
This enforces the correct component hierarchy under that driver as you engineer, and helps coordinate
mechanisms used by the driver for communications. Exceptions to this rule are noted (as applicable) in
other documentation about NiagaraAX drivers.
Note: Depending on the specific driver, there may be additional slots in the DriverNetwork. For example, if a field
bus network, there may be “PollScheduler” and/or “CommConfig” slots. Some of these may require proper
configuration before driver communications occur. See “Additional network components” on page 1-13.
The DriverNetwork is also the parent container for all device-level components. Devices are listed in tabular
format in the default view for that network, the Device Manager. See “About the Device Manager” on page
1-14. You use the Device Manager to create and manage device components, and (if the driver provides
this) to use discover features.
The DriverDevice is also the parent container for all device extensions. As shown in Figure 1-5, device
extensions (e.g. “Points”) are visible under the device when you expand it in the nav tree.
About the Driver Manager
By default, each driver network is listed showing its name, a type description, a real-time status field, whether
it is enabled, and a fault cause field (empty, unless the network is in fault).
Within the Driver Manager view, network-level operations are available:
• Double-click a network to go to its Device Manager view. See “About the Device Manager” on page
1-14.
• Right-click a network to see its popup menu, including available actions. Available actions will vary
by driver, but may include “Ping,” “Upload,” and “Download” as examples.
Buttons at the bottom of the Driver Manager provide other functions. See the section “Driver Manager
New and Edit” on page 1-4.
Driver Manager New and Edit
The Driver Manager provides two buttons for working with networks:
• New
• Edit
Note: New and Edit are also on the Driver Manager toolbar and the Manager menu.
New
The New button in the Driver Manager allows you to add a new driver network to the station. This is
equivalent to copying a driver network component from that driver’s palette. The New dialog provides a
selection list for network type, and also the number of networks to add (Figure 1-7).
Note: Currently, you are allowed to add multiples of any driver network, as well as any network type (regardless
of the station’s host platform). However, please understand that many networks should be “one-only” per
station, e.g. a NiagaraNetwork and BacnetNetwork. Also, consider that networks shown in the selection list
reflect only the modules available on your Workstation, and may not be present (or supported) on the
target NiagaraAX host.
Edit
The Edit button in the Driver Manager allows you to rename a selected network, or to set it to disabled
(Figure 1-8).
Caution: Whenever you set a network to disabled (Enabled = false), the network component and all child driver
components (all devices, proxy points, etc.) change to disabled status. If a field bus driver, communications
routines like polling and device status pings also suspend. By default, disabled status color is gray text on
a light gray background.
This network-wide action may be useful for maintenance purposes. Note that Enabled is also individually
available at the device-level and proxy point-level, using the Edit button/dialog in the Device Manager or
Point Manager, respectively.
A disable at the device level disables all child components (proxy points), as well as polling of points under
that device. A disable at the proxy-point level disables only that single point.
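This cascade can be pictured as a status computation over the component tree. The sketch below is a hypothetical model in plain Python (not the Niagara API); it only illustrates how a disabled flag at any level masks everything beneath it:

```python
class DriverComponent:
    """Minimal sketch of a driver component whose effective status
    reflects its own Enabled flag and every ancestor's flag."""
    def __init__(self, name, enabled=True, parent=None):
        self.name = name
        self.enabled = enabled
        self.parent = parent

    def effective_status(self):
        # A component reports "disabled" if it, or any ancestor
        # (proxy point -> device -> network), has Enabled = false.
        node = self
        while node is not None:
            if not node.enabled:
                return "disabled"
            node = node.parent
        return "ok"

network = DriverComponent("BacnetNetwork")
device = DriverComponent("Vav1", parent=network)
point = DriverComponent("RoomTemp", parent=device)

network.enabled = False          # disable at the network level...
print(point.effective_status())  # ...and every descendant reads disabled
```

Re-enabling the network restores "ok" status to descendants, unless a device or point in between is itself disabled.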
Network status properties
Enabled
By default, network Enabled is true—you can toggle this in the property sheet, or in the Driver Manager
(by selecting the network and using the Edit button). See related Caution on page 5.
Health
Network Health contains historical properties about the relative health of the network in the station,
including historical timestamps.
Alarm Source Info
Alarm Source Info properties work the same as those in an alarm extension for a control point. For
property descriptions, see the User Guide section “About alarm extension properties”.
Note: Each child device object also has its own Alarm Source Info slot, with identical (but independently
maintained) properties. See “Device Alarm Source Info” on page 1-22.
About Monitor
A network’s Monitor slot holds configuration for the “ping mechanism” used by the driver network. In
the network’s property sheet, expand the Monitor slot to see configuration (Figure 1-11).
Monitor provides verification of the general health of the network, plus the network’s “pingables”
(typically, devices) by ensuring that each device is minimally “pinged” at some repeating interval.
• If a device responds to the monitor ping, device status is typically “ok,” and normal communications
routines to it (proxy-point polling, plus reads of device schedules, trends, etc. if supported by the
driver) proceed normally. Typically, this applies even if the device returns an error response to the
ping, because this indicates that the device is “alive.”
• If a device does not respond to the monitor ping, it is marked with a down status—this causes normal
communications routines to that device to be suspended. Upon the next successful monitor ping to
that device, device status typically returns to “ok” and normal communications routines resume.
Note: Whenever successful communications occur to a device, that device component’s Health property is
updated with the current time. The network ping Monitor will only “ping” the device if the time of last
health verification is older than the ping frequency. Therefore, in normal operation with most drivers, the
proxy point polling mechanism actually alleviates the need for the monitor ping, providing that the ping
frequency is long enough. Also, in most drivers if a point poll request receives no response (not even a “null”)
from a device, a “ping fail” condition is immediately noted, without waiting for the monitor ping interval.
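The ping decision described in this note reduces to a simple time comparison. The helper below is a hypothetical sketch (the name and signature are illustrative, not part of the Niagara API):

```python
def needs_monitor_ping(last_health_time, now, ping_frequency):
    """Sketch of the Monitor rule described above: ping a device only
    if its last successful communication (any kind, including normal
    proxy-point polls) is older than the ping frequency. Times are in
    seconds on a common clock."""
    return (now - last_health_time) > ping_frequency

# A device whose proxy points were polled 60 s ago needs no monitor ping
# when the ping frequency is the typical 5 minutes (300 s):
print(needs_monitor_ping(last_health_time=940, now=1000, ping_frequency=300))  # False
print(needs_monitor_ping(last_health_time=600, now=1000, ping_frequency=300))  # True
```

This is why, with a sufficiently long ping frequency, routine proxy-point polling usually keeps the monitor ping from ever firing.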
The following sections provide more Monitor details:
• Monitor properties
• Monitor considerations by driver
Monitor properties
The monitor ping properties are as follows:
• Ping Enabled
• If true (default), a ping occurs for each device under the network, as needed.
• While set to false, device status pings do not occur. Moreover, device statuses cannot change
from what existed when this property was last true.
Note: It is recommended you leave Ping Enabled as true in almost all cases.
• Ping Frequency
Specifies the interval between periodic pings of all devices. The typical default value is every 5 minutes
(05m 00s); you can adjust this if needed.
• Alarm On Failure
• If true (default), an alarm is recorded in the station’s AlarmHistory upon each ping-detected de-
vice event (“down” or subsequent “up”).
• If false, device “down” and “up” events are not recorded in the station’s AlarmHistory.
• Startup Alarm Delay
Specifies the period a station must wait after restarting before device “down” or “up” alarms are gen-
erated. Applies only if the Monitor’s property Alarm On Failure is true.
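Taken together, Alarm On Failure and Startup Alarm Delay act as a gate in front of the station’s AlarmHistory. A hypothetical sketch of that gating (illustrative names, seconds for times):

```python
def should_record_ping_alarm(alarm_on_failure, station_uptime, startup_alarm_delay):
    """Sketch of the alarm-gating properties above: a ping-detected
    device "down" or "up" event reaches the AlarmHistory only if
    Alarm On Failure is true AND the station has been running at
    least Startup Alarm Delay seconds."""
    if not alarm_on_failure:
        return False                 # down/up events are never recorded
    return station_uptime >= startup_alarm_delay

# 30 s after a restart, with a 120 s startup delay, no alarm yet:
print(should_record_ping_alarm(True, station_uptime=30, startup_alarm_delay=120))   # False
print(should_record_ping_alarm(True, station_uptime=300, startup_alarm_delay=120))  # True
```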
About Tuning Policies
Note: Some driver networks do not have Tuning Policies, for example an RdbmsNetwork for a database driver.
Also, the NiagaraNetwork has greatly simplified Tuning Policies.
By default, a driver’s TuningPoliciesMap contains just a single TuningPolicy (“Default Policy”). However,
you typically create multiple tuning policies, changing those items needed differently in each one. Then,
when you create proxy points under a device in that network, you can assign each point (as needed) to
the proper set of “rules” by associating it with a specific tuning policy.
Caution: Using only a single (default) tuning policy, particularly with all property values at defaults, can lead to
possible issues in many driver (network) scenarios. In general, it is recommended that you create multiple
tuning policies, and configure and use them differently, according to the needs of the network’s proxy points
and the capabilities of the driver. In particular, tuning policy properties that specify writes from Niagara
should be understood and applied appropriately. See Tuning Policy properties for more details.
As a simple example (under a BacnetNetwork), you could change the default tuning policy’s “Write On
Start” property from the default (true) to false. Then, duplicate the default tuning policy three times,
naming the first copy “Slow Policy”, the second copy “Normal with Write Startup”, and the third copy
“Fast Policy”. In two of those copies, change the “Poll Frequency” property from “Normal” to “Slow” or
“Fast”, corresponding to its name. In the “Normal with Write Startup” tuning policy copy, you could
change its “Write On Start” property back to true.
Then, only the “Normal with Write Startup” tuning policy has “Write On Start” set as true. At this point
you would then have 4 available (and different) tuning policies to pick from when you create and edit
proxy points, where you could selectively apply the policy needed.
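The four resulting policies can be summarized as data. The dictionary below only illustrates the example’s end state; the key and property names mirror the text, not an actual Niagara schema:

```python
# Hypothetical data sketch of the four tuning policies described above.
tuning_policies = {
    "Default Policy":            {"poll_frequency": "Normal", "write_on_start": False},
    "Slow Policy":               {"poll_frequency": "Slow",   "write_on_start": False},
    "Normal with Write Startup": {"poll_frequency": "Normal", "write_on_start": True},
    "Fast Policy":               {"poll_frequency": "Fast",   "write_on_start": False},
}

# Only one of the four policies writes on station start:
writers = [name for name, p in tuning_policies.items() if p["write_on_start"]]
print(writers)  # ['Normal with Write Startup']
```

Each proxy point is then associated with whichever named policy fits its needs.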
The following sections provide more Tuning Policy details:
• Tuning Policy properties
Note: Stale time is recommended to be specified at least three times the expected poll cycle time.
Most peer-to-peer networks do experience collisions and missed messages. Setting the stale time short
will likely produce nuisance stale statuses. If a message is missed for some reason, then another poll
cycle time or two is allowed for the message to be received before setting the stale flag.
• Poll Frequency
(May not exist in some drivers’ tuning policies, but is instead a separate property of each ProxyExt.)
Applies to all proxy points. Provides a method to associate the tuning policy with one of the 3 Poll Rates
available in the network’s Poll Service: Fast Rate, Normal Rate, or Slow Rate. The default poll fre-
quency is “Normal.”
Note: Depending on the driver, there may be a single “Poll Service” (or “Poll Scheduler”) slot under
the network, or as in the case of a BacnetNetwork, a separate “Poll Service” for each configured port
(IP, Ethernet, Mstp) under its BacnetComm > Network container. The NiagaraNetwork uses subscrip-
tions instead of polling.
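The stale-time guidance above (a stale time of at least three poll cycles) can be sketched as a simple threshold check; the helper name and the multiplier default are illustrative, not Niagara API:

```python
def is_stale(seconds_since_update, poll_cycle_time, multiplier=3):
    """Sketch of the stale-time guidance above: with stale time set to
    `multiplier` poll cycles, a point is not flagged stale until roughly
    two poll cycles have been missed, tolerating the occasional
    collision or dropped message on a peer-to-peer network."""
    stale_time = multiplier * poll_cycle_time
    return seconds_since_update > stale_time

# With a 10-second poll cycle and the recommended 3x stale time (30 s),
# one missed poll (about 20 s since the last update) is tolerated:
print(is_stale(seconds_since_update=20, poll_cycle_time=10))  # False
print(is_stale(seconds_since_update=35, poll_cycle_time=10))  # True
```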
Tuning Policy considerations by driver
Tuning policies used by a specific driver may have unique characteristics. For example, under a Niagar-
aNetwork, its TuningPolicy has only three properties: Stale Time, Min Update Time, and Max Update Time. For
more details, see “NiagaraStation component notes” on page 2-13.
Other drivers may have specific considerations for tuning policies. For more information, refer to the
“Tuning Policy Notes” section within any NiagaraAX driver document.
About poll components
Note that the three rate names are only labels; the slow rate can actually be configured for a faster interval than the normal rate, without issues.
• “Pollables” (mainly proxy points) are associated with one of three “rate” groups (Slow, Normal, Fast)
via either:
• assigned Tuning Policy in each point’s proxy extension (BacnetNetwork)
• assigned Poll Frequency in each point’s proxy extension (most drivers other than Bacnet)
In the case of device objects, a Poll Frequency property is used to select the rate directly.
• The poll scheduler maintains a group of four rate “buckets” to service “pollables”, three of which cor-
respond to these configured poll rates (slow, normal, fast).
A fourth “dibs stack” bucket is allocated for pollables that transition to a subscribed state. This may
be a “temporary subscription,” such as results when viewing unlinked proxy points (without alarm
or history extensions) in Workbench or a browser. Or, it may occur when a permanently subscribed
point is first polled (thereafter, it is no longer “dibs stack-polled”).
• Every 10 seconds, the poll scheduler rebuilds the list of objects that are assigned to each of the poll
buckets. An algorithm is used to break up the list of objects for each poll bucket into optimally-sized
groups, which allows the poll thread to switch back and forth between the rate buckets. This algo-
rithm is designed to spread the message traffic out evenly over the configured intervals, and to allow
points assigned to quicker buckets to update multiple times before points assigned to a slower buck-
et update once.
• Poll statistics are updated every 10 seconds. Fast, slow, and normal cycle times display the average
time in milliseconds (ms) to complete a single poll cycle. The poll scheduler algorithm automatically
calculates an inter-message delay time to evenly spread poll messages out over the configured rate.
For example, if there are 5 points assigned to a normal rate/Tuning Policy, it may poll a point
in the list every 2 seconds. In this case, the normal poll cycle time would be around 10000 ms, but
that does not mean it actually took 10 seconds to poll those 5 points.
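The spacing arithmetic in this example can be checked in a few lines. This is a sketch; the function is hypothetical, not part of any Niagara API:

```python
def inter_message_delay(rate_seconds, point_count):
    """Sketch of the spacing arithmetic above: the scheduler spreads
    poll messages evenly across the configured rate interval."""
    return rate_seconds / point_count

# 5 points on a 10-second rate -> one poll message every 2 seconds,
# so the reported cycle time is about 10000 ms even though each
# individual poll message completes quickly.
delay = inter_message_delay(rate_seconds=10, point_count=5)
print(delay)                  # 2.0 (seconds between messages)
print(int(delay * 5 * 1000))  # 10000 (ms for the full cycle)
```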
Priority of polling by the scheduler occurs in this fashion:
1. Dibs stack. When first subscribed, a pollable moves to the top of the dibs stack (first dibs). The poll
scheduler always polls the dibs bucket before doing anything else. The dibs stack is polled last-in,
first-out (LIFO). As long as entries are in the dibs stack, they are polled as fast as possible with no
delays.
2. When the dibs stack is empty, the scheduler attempts to poll the components in each “rate” bucket
using an algorithm designed to create uniform network traffic.
For example, if the fast rate is configured to 10 seconds and there are 5 components currently sub-
scribed in the fast bucket, then the scheduler will attempt to poll one component every 2 seconds.
Note: Every ten seconds the poll scheduler rechecks the buckets for configuration changes. So if a pollable’s config-
uration is changed from slow to fast, it takes at most ten seconds for the change to take effect.
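The priority scheme above can be sketched as a toy scheduler. This simplification ignores inter-message delays and bucket interleaving, and the class is purely illustrative, not Niagara code:

```python
class PollScheduler:
    """Toy sketch of the polling priority just described: the dibs
    stack is serviced LIFO before any rate bucket is touched."""
    def __init__(self):
        self.dibs = []                           # newly subscribed pollables
        self.buckets = {"fast": [], "normal": [], "slow": []}

    def subscribe(self, pollable):
        self.dibs.append(pollable)               # "first dibs": top of stack

    def next_to_poll(self):
        if self.dibs:
            return self.dibs.pop()               # LIFO, as fast as possible
        for rate in ("fast", "normal", "slow"):  # then the rate buckets
            if self.buckets[rate]:
                return self.buckets[rate].pop(0)
        return None

sched = PollScheduler()
sched.buckets["normal"] = ["np1"]
sched.subscribe("p1")
sched.subscribe("p2")
print(sched.next_to_poll())  # p2  (last subscribed is polled first)
print(sched.next_to_poll())  # p1
print(sched.next_to_poll())  # np1 (rate buckets only after dibs is empty)
```

In the real scheduler, bucket polling is additionally paced so that traffic is spread uniformly across each configured rate interval.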
You can manually reset poll statistics using the Reset Statistics action on the Poll Service (Figure 1-14).
• Fast Rate
Target poll interval for pollables assigned to this rate group.
• Normal Rate
Target poll interval for pollables assigned to this rate group (default often is 5 seconds).
• Slow Rate
Target poll interval for pollables assigned to this rate group (default often 30 seconds).
Note: All remaining properties are read-only statistics properties.
• Statistics Start
Timestamp reflecting either the last manual reset of poll statistics, or if statistics have not been reset,
the first “steady state” time immediately following the last station restart.
• Average Poll
Average time spent during each poll event. This does not relate to the total time required to complete
a poll cycle for any of the three rate buckets. It is the time spent polling a given group before pausing
and switching to another group of objects, either in the same or a different poll rate bucket.
• Busy Time
Displays a percentage of time spent by the poll thread actually polling points, across all poll buckets.
Includes (in parentheses) a ratio of “(time spent polling/total time since statistics were restarted)”.
Given a small amount of time is spent transitioning between poll buckets, and with the thread sleep-
ing to evenly space out polling messages, it is unlikely to ever see Busy Time reach exactly 100%.
However, any percentage above 95% indicates that the poll thread is basically spending all of its time
actually polling. Also see “Using poll statistics in tuning poll rates” on page 1-13.
Note: In the case of the Poll Service for a Bacnet network port, because two threads are used for
polling, it is possible to see a Busy Time approaching 200%. In this case, divide the Busy Time in half
to get an average busy time for each thread.
• Total Polls
Total number of polls performed, along with the time spent actually polling. Note that this time is
the same time indicated in the ratio of the Busy Time property. Typically, the total poll count indi-
cates the number of times any object has been polled. It is not a running total of the actual poll cycles.
Total Polls example: You have 2 points assigned to a fast policy of 1 second, 2 points assigned to a
normal policy of 5 seconds, and 2 points assigned to a slow policy of 10 seconds. When the statistics
update every 10 seconds, you would expect the total polls to increment by 26, where:
Total polls = (2 fast/1 sec * 10 sec) + (2 normal/5sec * 10 sec) + (2 slow/10 sec * 10 sec)
Total polls = 20 fast + 4 normal + 2 slow
Total polls = 26
Note: In some cases, such as a BacnetNetwork, Total Polls may indicate the number of poll messages
sent to devices. Potentially, multiple points could be processed in a single message, such as when
performing “read property multiple” for BACnet devices, or possibly when performing COV subscriptions.
• Dibs Polls
Percentage and ratio of the number of dibs polls versus the total polls.
• Fast Polls
Percentage and ratio of the number of fast polls versus the total polls.
• Normal Polls
Percentage and ratio of the number of normal polls versus the total polls.
• Slow Polls
Percentage and ratio of the number of slow polls versus the total polls.
• Dibs Count
Current and average number of components in dibs stack.
• Fast Count
Current and average number of components in fast queue.
• Normal Count
Current and average number of components in normal queue.
• Slow Count
Current and average number of components in slow queue.
• Fast Cycle Time
Average cycle time of the fast queue.
• Normal Cycle Time
Average cycle time of the normal queue.
• Slow Cycle Time
Average cycle time of the slow queue.
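The Total Polls example above can be verified with a short calculation. This is a sketch that assumes every point is polled exactly on schedule; the helper is hypothetical:

```python
def expected_poll_increment(points_per_rate, window_seconds=10):
    """Sketch of the Total Polls arithmetic above: points_per_rate maps
    each rate's interval in seconds to the number of points assigned to
    that rate. Returns how many polls accumulate per statistics window."""
    return sum(count * (window_seconds // interval)
               for interval, count in points_per_rate.items())

# 2 points at 1 s (fast), 2 at 5 s (normal), 2 at 10 s (slow):
# 20 fast + 4 normal + 2 slow polls per 10-second window.
print(expected_poll_increment({1: 2, 5: 2, 10: 2}))  # 26
```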
Note: Depending on driver, additional statistics properties may be present in a Poll component. For example, a
BacnetPoll has additional properties Device Count, Point Count, and Object Count. See the specific driver
document for unique Poll Service features or notes.
Additional network components
See the driver document for specific details on configuring a network’s communications components.
About driver-specific properties
Depending on the driver, other specific properties (or even component containers) may be found in the
network component’s property sheet.
For example, the NiagaraNetwork has container components Fox Service and History Policies, a
ModbusAsyncNetwork contains a number of properties for timeouts and byte-order conventions, and a
LonNetwork contains a LonNetmgt component with various child properties (Figure 1-16).
For more details see “Niagara Network” on page 2-1, or the corresponding driver document for driver-
specific properties and components of a network.
About the Device Manager
Following station configuration, this view provides a status and configuration summary for all devices
networked using that driver. The “Exts” column provides quick double-click access to device extensions,
such as Points for proxy points, Schedules for imported schedules, and so forth (see “Types of device
extensions” on page 1-26 for details). You can also use it to issue actions to selected devices, such as “ping,”
upload, and so forth.
At the bottom of the view, buttons “New Folder” and “New” let you create new device folders and devices
in the station database. The “Edit” button lets you edit one or more selected devices in the station
database. If the driver supports online “device learning,” buttons “Discover” and “Match” are also typically
available. Finally, additional buttons may be available, depending on the driver.
For more details, see:
• Device New Folder and New
• All Descendants
• Device Edit
• About Device Discover, Add and Match (Learn Process)
• Manager table features
Device New Folder and New
New Folder
When you click OK in the New Folder dialog, a new DeviceFolder component is added under the network.
Note: A driver’s DeviceFolder component is different than a “normal” Folder component, as it provides that
driver’s Device Manager view by default (just like the parent network component). Also, an “All
Descendants” command is available from the root Device Manager, which allows you to see all devices
in the network. For details, see “All Descendants” on page 1-16.
New
The New button in the Device Manager allows you to add a new device component to the station. This
is equivalent to copying a device component from that driver’s palette. The New dialog provides a
selection list for device type, and also the number of devices to add (Figure 1-19).
When you click OK, a new device-level component is added under the network. You usually need to edit
specific details for each new device, including addressing parameters for communications.
All Descendants
When in the Device Manager of a device folder, that view is “unaware” of devices in other device folders (or in the
root of the network component). However, from the root Device Manager view (of the network), you can
“flatten” the folder structure to view all devices, by selecting All Descendants from the Manager menu (Figure 1-20) or
by clicking the “All Descendants” tool on the toolbar (Figure 1-21).
Note: Be aware that in a large network, with many device folders and devices, using the all descendants feature
may decrease performance. This might result from so many devices/points being subscribed.
Note: If you are using device folders and you click a table column to re-sort devices, be aware that the
device-to-device-folder organization is not shown in that sorted view. However, you can always see the
contents of device folders in the Nav tree, and again when you return to the Device Manager view from
another view.
Device Edit
In the Device Manager view, you can edit any device component in the station database by simply double-
clicking it. (Editing does not apply to device folders.)
Edit
The Edit dialog appears with the device listed (Figure 1-22).
Note: The Edit dialog for a device component only shows some of the device component’s properties. Typically,
these are key properties required for communications, and will vary among drivers (to access all properties
of the device component, go to its property sheet).
When a single device is selected in the Edit dialog, you can edit any property except Type (it was fixed when
you added the device). Included is the ability to edit the Name of the device in the station. This is equivalent
to the right-click Rename command on the component.
The following related topics also apply:
• Device “gang” edits
• Manager table features
Device “gang” edits
The Edit button in the Device Manager allows you to edit one or more device components in the station
in a single dialog. Before clicking Edit, use standard Windows keyboard controls to highlight (select)
multiple devices (e.g. hold down Ctrl key and click desired devices).
Note: Edit is also available on the Device Manager toolbar and the Manager menu.
The “gang” edit feature is useful for making the same change in multiple (selected) devices. For example,
as shown in Figure 1-23, you can change “Poll Frequency” in multiple devices at the same time (versus
editing this individually).
When you have the Edit dialog open with multiple devices, you can also click individual devices to make
individual edits (e.g. Name, or any other editable property) during the same dialog session in which you
made “gang” property edits. When you click OK, all the changes made during your Edit session are
applied to the device components. For more details, see “About gang edits.”
Note: When you have multiple devices selected in the Edit dialog, properties that must be unique (such as Name)
are automatically unavailable (dimmed). However, note that some properties that typically should be
unique (often address properties) may still be available for “gang edit,” as this rule is not automatically
enforced. Typically, when editing these properties, you should verify only a single device component (row)
is highlighted in the table.
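The gang-edit behavior described above can be modeled roughly as follows. This is a conceptual sketch only, not the Niagara API: devices are represented as plain dictionaries, and the set of unique properties is illustrative.

```python
# Sketch of a "gang edit": apply the same property changes to every
# selected device, while refusing properties that must stay unique
# (such as Name) whenever more than one device is selected.

UNIQUE_PROPERTIES = {"Name"}  # dimmed in the dialog during multi-device edits

def gang_edit(devices, changes):
    """Apply the same {property: value} changes to every selected device."""
    if len(devices) > 1:
        blocked = UNIQUE_PROPERTIES & set(changes)
        if blocked:
            raise ValueError("cannot gang-edit unique properties: %s" % sorted(blocked))
    for device in devices:
        device.update(changes)
    return devices
```

For example, changing Poll Frequency on two selected devices succeeds, while attempting to gang-edit Name on the same selection raises an error, mirroring the dimmed (unavailable) field in the dialog.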
Note: Whenever in Learn Mode of any Manager view (DeviceManager, PointManager, and so forth) you can drag
the border between the two panes to resize, as necessary.
About Device Discover, Add and Match (Learn Process)
Discover
The Discover button is available in the Device Manager only if the driver supports online discovery.
When you click Discover, the Device Manager view splits into two panes (Learn Mode), and at the same
time typically launches a discovery “job” (Figure 1-25).
Note: In some drivers, an intermediate dialog may appear before the discovery job begins. For example, in a
discover for a BacnetNetwork, a “Configure Device Discovery” dialog allows you to limit possible ranges of
devices, before discovery begins. For more details, see the Bacnet Users Guide.
The two panes in the Device Manager operate as follows:
• Discovered (top pane)
Lists devices discovered on the driver’s network, as candidates. Any device found that already exists
in the station database appears “ghosted” (faintly listed). A job “progress bar” is also included on top
of the Discovered pane.
Note: A Cancel button is available during a discovery job. If needed, use it to stop discovery.
• Database (bottom pane)
Lists devices and device folders that are currently in the station database.
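The Discovered-pane behavior above can be sketched as follows. This is an illustrative model only; matching discovered candidates to database devices by an "address" key is an assumption, since the actual matching criteria vary by driver.

```python
# Sketch of the Discovered pane: devices found on the network are listed
# as candidates, and any candidate already in the station database is
# flagged "ghosted" (shown faintly).

def discovered_rows(found, database):
    existing = {d["address"] for d in database}
    return [{"device": f, "ghosted": f["address"] in existing} for f in found]
```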
Add
The Add button is available in the Device Manager in Learn Mode when you have one or more devices
selected (highlighted) in the top Discovered pane. When you click Add, an Add dialog appears that allows
you to edit items before the device component(s) are created in the station database (Figure 1-26).
The Add dialog is nearly identical to the device Edit dialog, but allows you to edit Type as well as other
device properties. Often, device address properties in the Add dialog already have acceptable values for
operation (otherwise, communications to that device would not have occurred). Often, you change only
Name, unless you know other settings you wish to change. You can always Edit the device component(s)
after you click OK and add them to your station.
Note: When you have one or more devices selected in the top Discovered pane, an Add tool is also available on
the toolbar (“plus” symbol), as well as a command in the Manager menu. Also, you can simply double-click
a discovered device to bring it up in the Add dialog.
Match (Device)
Device Match is a feature that may be useful when you have an application replicated many times at the
device level, or if you have programmed offline using the New device feature.
• In the first case (replicated device application), you could discover and add one typical device, and
complete further engineering under it (learning and adding proxy points, point extensions, creating
other control logic, adding Px views, including all self-contained links and bindings).
Then, you could duplicate that typical device component (choosing Duplicate in its right-click
menu) for as many identical devices as exist. The Match feature now allows you to match each
duplicated device component to a unique discovered device, saving engineering time. This repopulates
the necessary properties of the duplicated device object with the correct values from the discovered
device.
• In the second case (offline programming), where a connection to the actual device network is
unavailable, you can manually add New devices and begin station engineering of a driver network.
Typically, most component creations under a driver network are possible (including all levels) using
the New feature in the various “manager” views (Device Manager, Point Manager, other device
extension managers). Or, you can add saved applications (from the device level on down) and edit as
necessary. Then, when online with the driver network later, you could use Match to “sync” to existing
components (device-level, proxy points, and so forth).
The Match button in the Device Manager becomes available when in Learn Mode, and you have:
1. Selected one device candidate in the top (Discovered) pane.
2. Selected one existing device component in the bottom (Database) pane.
Note: In this case, the toolbar also has an available Match tool, and the Manager menu has a Match command.
When you click Match, the Match dialog appears, as shown in Figure 1-27.
Note: Match is strictly a “one-to-one” function for “discovered-to-database”—note that it is unavailable any time
you have multiple items selected in either the top or bottom pane.
The Match dialog is nearly identical to the single-device Edit dialog, and typically provides the
“discovered” device address values. Often, most properties in the Match dialog already have acceptable
device address values required for operation (otherwise, communications to the discovered device would
not have occurred). You can always Edit the device component after you click OK and add it to your
station.
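The Match operation above can be sketched as a one-to-one copy of address values. This is a conceptual model, not the Niagara API; the property names in ADDRESS_PROPERTIES are hypothetical, since the actual address properties vary by driver.

```python
# Sketch of Match: copy the discovered device's address values onto the
# existing database component, leaving its engineering (points, logic,
# Px views) intact.

ADDRESS_PROPERTIES = ("Address", "Device Id")  # hypothetical, driver-specific names

def match(discovered, database_device):
    for prop in ADDRESS_PROPERTIES:
        if prop in discovered:
            database_device[prop] = discovered[prop]
    return database_device
```

Note that only address-type properties are copied; the database component keeps its Name and everything engineered beneath it, which is what makes Match useful for the duplicated-device workflow described above.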
Common device components
Device status properties
The status of devices is verified by successful polling, or by the device status “ping” as configured in the
network component’s Monitor configuration. See “About Monitor” on page 1-7.
From the Device Manager view, you can also right-click a device, and from the popup menu select
Actions > Ping to manually verify communications.
Depending on conditions, a device may have one of various status flags set, including fault or disabled, or
others in combination, such as down, alarm, stale, and/or unackedAlarm. In the Device Manager, non-
ok status is indicated for any device by a row color other than white.
Enabled
By default, device Enabled is true—you can toggle this in the property sheet, or in the Device Manager
(by selecting the device and using the Edit button). See Caution on page 5.
Health
Device Health contains historical properties about the last successful message received from the device,
including timestamps.
Alarm Source Info
These properties work the same as those in an alarm extension for a control point. For property descrip-
tions, see the User Guide section “About alarm extension properties”.
Note: Each parent network component also has its own Alarm Source Info slot, with identical (but independently
maintained) properties. See “About network Alarm Source Info” on page 1-6.
Device address properties
In the case of a Lon device (DynamicDevice), various address parameters are stored under a DeviceData
component, which is visible when you expand the device in the Nav tree. For more details on device-level
address properties, see the specific driver document.
Poll Frequency
Common to most devices is a Poll Frequency property, typically located below device address properties.
It provides a drop-down menu to select among 3 poll frequencies, as configured in the network’s Poll
Service. For more details, see “About poll components” on page 1-10.
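The Poll Frequency selection above can be sketched as a lookup into the network's Poll Service configuration. This is a conceptual model; the rate values shown are illustrative, not defaults from any particular driver.

```python
# Sketch of Poll Frequency: each device names one of three rates that are
# configured once on the network's Poll Service.

poll_service = {"Fast": 1.0, "Normal": 5.0, "Slow": 30.0}  # seconds per poll (illustrative)

def poll_period(device):
    """Look up the device's selected poll rate (Normal if unset)."""
    return poll_service[device.get("Poll Frequency", "Normal")]
```

Because each device stores only a named selection, retuning the Poll Service changes the effective rate of every device that references that name.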
This is a logical division of gateways (separating data by its source device). However, note that future
virtual component applications may not always use this “one-to-one-device” hierarchy for virtual
gateways. At some future point, a driver may have a “network level” virtual gateway, or possibly provide
multiple virtual gateways under the same device level or network level.
Gateway activation
Unlike with most device extensions, there is no special view for a virtual gateway—in Workbench you can
simply double-click it to access the gateway’s property sheet, or expand it in the Nav tree. When you do
this for a device’s virtual gateway, a “driver-specific” call is made to the device to gather data, where the
results appear as child virtual components.
In the case of the BACnet driver, expanding a virtual gateway fetches the device’s object list, where each
BACnet object in the device appears as a virtual component (slot) under the gateway. You can further
expand each property of an object to see its “virtual properties,” including a value and poll status. See
Figure 1-34.
In this case, each “virtual property” component is a virtual point. For further details about the BACnet
implementation, see the “About Bacnet virtual points” section in the NiagaraAX BACnet Guide. For
details about virtual components in the NiagaraNetwork of a Supervisor, see “About Niagara virtual
components” on page 2-37.
Virtual ord syntax
The only persisted (permanent) record of such a virtual component is its ord in the Px binding to it, which
uses a “Virtual ord syntax” that includes the virtual gateway within it (and activates it at runtime). This
ord is automatically resolved when you drag a virtual component onto the Px page, and make your
selection in the popup Px “Make Widget” dialog, as shown in the Bacnet example in Figure 1-35.
Figure 1-35 Dragging Bacnet virtual component into Px editor view, with resulting Make Widget popup
Depending on driver, Px usage details may differ for virtual components. For example, usage of Niagara
virtual components in Px views includes special considerations. For details, see “Niagara virtuals in Px
views” on page 2-46.
For complete details on Px views and widget editing, see the “About Px Editor” section in the User Guide.
Types of device extensions
Common types of device extensions include:
• Points extension—see “About the Points extension” on page 1-27.
• Histories extension—see “About the Histories extension” on page 1-34.
• Alarms extension—see “About the Alarms extension” on page 1-35.
• Schedules extension—see “About the Schedules extension” on page 1-36.
Note: The NiagaraStation component (device in a NiagaraNetwork) has additional device extensions,
including a “Users” extension, and starting in AX-3.5, a “Files” extension and “Sys Def” extension.
For more details, see “NiagaraStation component notes” on page 2-13.
About the Points extension
Perhaps the most important of all device extensions is the Points extension—the container for all proxy
points (representing real-time data originating from that device).
These values are “proxied” using NiagaraAX control points, or proxy points. Values can be both read from
data values in that device, and written to value stores in the device.
Note: You create all proxy points using the Point Manager view of Points—simply double-click Points under any
device component, or right-click Points and select Views > Point Manager. See “About the Point
Manager” on page 1-37 for more details.
For general information on control points, refer to “About control points” in the User Guide. See the next
section “About proxy points” for further details on proxy points.
As needed, you can create folders under a device’s Points container to further organize the proxy points.
In addition to proxy points, you can also add simple control points (null proxy ext), schedule objects, and
other kitControl objects under a device’s Points container.
Note: See also “Location for kitControl components” in the kitControl Guide.
Location of proxy points
Depending on station type, proxy point locations will vary as follows:
• JACE station
Most proxy points are under non-Niagara driver networks, such as Lonworks, Bacnet, and Modbus,
to name a few. These points represent real-time data in various devices attached (or somehow
networked) to the JACE controller.
Like all AX stations, a JACE typically has a Niagara Network too. Any proxy points under it will
represent data received from other JACE stations and/or perhaps the Supervisor station.
• Supervisor station
Typically, most proxy points are under its Niagara Network, where each JACE appears as a station
device. Proxy points under each station device represent real-time data from that JACE. Typically,
most data in a JACE station also originates as a proxy point, and so these points are often a “proxy
of a proxy.” For details, see “About the Niagara Network” on page 2-1.
Note: If the Supervisor is specially licensed for direct field device communications, such as a BACnet
Supervisor or OPC Supervisor, its station will have many proxy points under other driver network
types, and perhaps only a few under its NiagaraNetwork. In Drivers architecture, it resembles a JACE.
For more details, see “About Network architecture” on page 1-2.
How proxy points are made
Note: You create proxy points in a station as one of the later stages in building a driver network. This section only
provides an overview of the proxy points portion.
When you add a device under a network, one of its default extensions is a container named “Points.” The
default view for Points is a “Point Manager,” which you use to add proxy points to the station database.
An example Bacnet device Point Manager is shown in Figure 1-39.
Typically, you have performed your previous station configuration of this network with the host (e.g.
JACE) networked and communicating to the devices of interest. This allows you to use the “Learn Mode”
feature (provided by most drivers). This feature is especially useful for adding proxy points to the station.
In the Point Manager, the Learn Mode tool toggles the view between showing only proxy points that are
currently in the station (“Database”) and a split view showing both a “Discovered” pane and the
“Database” pane. See “About Learn toggle” on page 1-18 for more details.
Clicking the Discover button launches a “point discovery job.” The driver queries the selected device
to retrieve available data. Depending on a number of factors (driver type, communications rate, amount
of data), this can take from only a few seconds to over a minute. See Figure 1-40.
When the discover completes, the “Discovered” view portion provides a table of available data items.
Each row represents at least one item (a candidate for one proxy point). If there are multiple, closely-
related data items, that row appears with a leading plus (“+”). You can expand it to see other available data
items. Again, each row is a candidate for one proxy point.
Depending on the driver type, table column headings vary—for example a Bacnet points discover shows
“Object Name,” “Object ID,” etc., in the Discovered table (see Figure 1-41).
As needed, you click on column headers to resort and/or use the scroll bar to view available discovered
data items. After selecting one or more items by clicking on rows (to highlight), you can click the Add
button to start the proxy point creation process. The Add dialog appears (Figure 1-42).
The Add dialog allows you to select each data item and change things about its default point creation
before it is added to the station database. Most important is “Type,” meaning the control point type—as
unlike other things (such as Name) you cannot change that after the creation. Apart from Name and
Type, most other properties are proxy extension properties.
When you select the Type drop-down, alternate point types are available for selection (Figure 1-43). If
the driver recognizes the data item as writable, these include writable point types.
Typically, you do not change the data category type (Boolean, Numeric, etc.), but you may wish to select
either a read-only point or a writable point.
You click any discovered data item to select it for changing Type, or any other property. If a common
change is needed among data items (for example, Poll Frequency), you can select multiple items and edit
that property one time.
When you are satisfied with the point types shown for each item, you click the OK button at the bottom
of the Add dialog. Those proxy points are then created in the station database in the Points container, and
now appear as table rows in the “Database” (lower) table of the Point Manager view. The source data
items now appear “ghosted” in the “Discovered” (upper) table, to indicate that they now exist in the
station database. See Figure 1-44.
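The Add step above can be sketched as follows. This is an illustrative model only; the "Object Name" column follows the Bacnet example in the text, and the record shapes are assumptions, not Niagara data structures.

```python
# Sketch of the point Add step: each selected discovered item becomes one
# proxy point in the station database, and the source item is marked
# "ghosted" so it shows faintly in the Discovered table afterward.

def add_points(selected_items, database, point_type="NumericPoint"):
    for item in selected_items:
        database.append({"Name": item["Object Name"], "Type": point_type})
        item["ghosted"] = True  # already exists in the station database
    return database
```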
You may repeat this process as needed to create additional proxy points, where you can toggle the display
of the Discovered portion off and on by clicking the Learn Mode tool.
Once in the Database table of the Point Manager, you can click to select (highlight) a proxy point, then
modify it using the Edit button, or simply double-click it. This produces an Edit dialog nearly identical
to the Add dialog, where you can change the same proxy extension properties and Name (but not Type).
You can also select multiple proxy points and use the Edit dialog to “gang edit” one or more of the same
properties that are common to each selected point.
In the Database portion, you can right-click a proxy point to access its normal views, including property
sheet. There, you can expand its Proxy Ext to see many of the same properties you saw in the Add and
Edit dialogs. Also, you can right-click a proxy point in the Database table to access any available point
actions (if a writable point).
Each Points container has other views besides the default Point Manager. For example, you can select its
Wire Sheet view to see and organize the proxy point glyphs (Figure 1-38 on page 28), add additional
objects from the control and/or kitControl palettes, and so forth.
Note: The following notes apply when working with proxy points.
• In a Niagara Network, the Discover and Add process is different than in other driver networks.
There, you use a “BQL Query Builder” to select data items in a remote station. For more details, see
“About the Niagara Network” on page 2-1 and “About the Bql Query Builder” on page 2-20.
• Any Point Manager view only shows proxy points. If you added other objects (for example, control
points with null proxy extension, or kitControl objects), they do not appear in the Database table.
However, they are visible in the Nav tree and the Points wire sheet.
• If you want folders under Points to organize your proxy points and other objects, use the “New
Folder” button in the Point Manager to create each folder. This provides a Point Manager view for
each folder. For more details, see “About the Point Manager” on page 1-37.
Proxy points versus simple control points
Functionally, there is little difference between a simple control point (NullProxyExt) and the equivalent
proxy point. For example, you can add the same extensions (e.g. control, alarm, and history) to a proxy
point as to a simple control point—there is no need to “duplicate” the point first.
However, apart from the location differences (see “Location of proxy points” on page 1-28) and manner
of creation (see “How proxy points are made” on page 1-28), proxy points have the following differences
from simple control points:
• Status flag processing
Status flags of proxy points are affected by real-time changes that may occur in the remote device,
plus changes in communications between that device and the station. This is in addition to
“AX-added” status flags set by an alarm extension (for example, “alarm” or “unackedAlarm”). See
“Proxy point status” on page 1-33.
Related are “global” status rules common among NiagaraAX drivers, set at both the network-level as
well as adjustable at the proxy-point level. For details, see “About Tuning Policies” on page 1-8.
• Point duplication
When you duplicate a proxy point, you are duplicating information in its Proxy Ext that might be
better left unique. This may result in redundant messages between the device and station for the
same data item, adding overhead. If duplicating a proxy point, you should have a clear idea of what
you will change to prevent this inefficiency.
ProxyExt properties
Regardless of the driver, the proxy extension (ProxyExt) for any proxy point has a number of core
properties that provide the same behavior. Depending on the driver, other ProxyExt properties are often
present—see the particular driver document for details on those properties.
Core properties in any ProxyExt include the following:
• Status
(read only) Status of the proxy extension, which in turn determines parent point status. See “Proxy
point status” on page 1-33.
• Fault Cause
(read only) If the point has a fault status, provides a text description of why.
• Enabled
Either true (default) or false. While set to false, the point’s status becomes disabled and polling is
suspended.
• Device Facets
(read only) Native facets used in proxy reads or writes. (The parent point’s facets are used in point
status display, and are available for editing in the Add and Edit dialogs of the Point Manager.)
• Conversion
Specifies the conversion used between the “read value” (in Device Facets) and the parent point’s
output (in selected point facets).
Note: In most cases, except when working with particular Ndio proxy points, or perhaps Modbus
proxy points, the standard “Default” conversion selection is best.
Conversion selections include:
• Default
(The default selection.) Conversion between “similar units” is automatically performed within
the proxy point. For example, for a Lon proxy point with a LonFloatProxyExt (Device Facets of
degrees C), if you set the point’s Facets to degrees F, its output value automatically adjusts.
If you set the parent point’s Facets to dissimilar units (say, in this case to kilograms), the parent
point has a fault status to indicate a configuration error.
• Linear
Applies to certain Ndio proxy points, such as NdioVoltageInput and NdioResistiveInput. Also
commonly used in some Modbus proxy points. For these points, you typically want the point’s
output value in some units other than Device Facets (if Ndio, voltage or resistance), and the
Linear selection provides two fields to make the transition:
– Scale: Determines linear response slope.
– Offset: Offset used in output calculation.
• Reverse Polarity
Useful in a Boolean-type Ndio proxy point to reverse the logic of the hardware binary input.
Also, may be used in any driver’s proxied BooleanPoint, as a way to “flip” the read state of the
native value.
Note: Be careful in the use of the Reverse Polarity conversion, as it may lead to later confusion
when troubleshooting logic or communications problems.
• Thermistor Type 3
Applies to an NdioThermistorInput point, where this selection provides a “built-in” input
resistance-to-temperature value response curve for Type 3 thermistor temperature sensors.
• Tabular Thermistor
Applies to an NdioThermistorInput point, where this selection provides a control for a popup
dialog for a custom resistance-to-temperature value response curve, including ability to import
and export response curves.
• Tuning Policy Name
(most drivers) Associated network tuning policy for this proxy point, by name. Tuning policies are
used to separate poll rates and other driver features. See “About Tuning Policies” on page 1-8.
• Read Value
(read only) Last value read from the device, expressed in device facets.
• Write Value
(read only) Applies to writable points only. Last value written, using device facets.
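The Conversion selections above can be sketched as a single function from the device-facets read value to the parent point's output. This is a conceptual model only, not the Niagara API; the unit names and the fault rule for dissimilar units are simplified illustrations.

```python
# Sketch of the Conversion behaviors: Default (auto-convert similar units,
# fault on dissimilar units), Linear (scale and offset), and Reverse
# Polarity (flip a boolean read state).

def convert(read_value, conversion="Default", device_units="degC",
            point_units="degC", scale=1.0, offset=0.0):
    """Return (output_value, status) for one proxied read."""
    if conversion == "Default":
        if device_units == point_units:
            return read_value, "ok"
        if (device_units, point_units) == ("degC", "degF"):
            return read_value * 9.0 / 5.0 + 32.0, "ok"  # similar units: auto-convert
        return None, "fault"  # dissimilar units (e.g. degC -> kg): config error
    if conversion == "Linear":
        return read_value * scale + offset, "ok"  # Scale is slope, Offset is intercept
    if conversion == "Reverse Polarity":
        return (not read_value), "ok"  # flip the read state of the native value
    raise ValueError("unknown conversion: " + conversion)
```

For example, a degrees-C read of 20.0 into a degrees-F point yields 68.0 under Default, while a temperature read into a mass-unit point yields a fault status, matching the misconfiguration behavior described in the note below.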
Note: If you change a proxy point’s facets to use units that are dissimilar from the device facets in its ProxyExt
(without also changing its Conversion from “Default”), the proxy point will have a fault status (to show
misconfiguration). For example, if its device facets are temperature, and you change the point’s facets to
pressure, the point will have a fault status.
For details on properties common to the ProxyExt in all proxy points, including Device Facets and
Conversion, see “ProxyExt properties” on page 1-32.
About the Histories extension
A device’s Histories extension serves as the parent container for history descriptors—components
that specify how history-type collections (log data) are imported or exported. By default, it also contains
a Retry Trigger; see “About the Retry Trigger” on page 1-34.
The difference between a history import and a history export is as follows:
• Import
Any imported history appears in the local station as a Niagara history, where data is effectively
“pulled” from a remote source device.
• If the history import descriptor is under Histories of a NiagaraStation device, the source is
another Niagara history in that remote station.
• If the history import descriptor is under Trend Logs of a BacnetDevice, the source is a BACnet
Trend Log object residing in that BACnet device.
• Export
Applies to a NiagaraStation (or “database device” under an RdbmsNetwork). An exported history is
a Niagara history that exists in the local station, and is configured by a history export descriptor to
“push” its data to a remote NiagaraStation (or RDBMS database). This adds it to the histories in that
remote station (or to an RDBMS database).
Note: Under a BacnetNetwork, the single “LocalDevice” component (that represents station data
exported to BACnet) has a special view on its child Export Table component that permits
exporting station histories as BACnet Trend Log objects. See the BACnet Guide section “Bacnet server
configuration overview” for more details.
You create history import descriptors and export descriptors under the Histories extension of a device
using separate views. For details, see “About Histories extension views” on page 1-46.
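The import ("pull") versus export ("push") distinction above can be sketched as follows. This is an illustrative model only: stations are reduced to dictionaries of named histories, and the descriptor is reduced to a history name.

```python
# Sketch contrasting history import and export: an import pulls remote
# log data into the local station; an export pushes a local Niagara
# history's data to the remote station (or RDBMS database).

def import_history(local_station, remote_station, name):
    """Pull: the remote source's data appears locally as a Niagara history."""
    local_station[name] = list(remote_station[name])

def export_history(local_station, remote_station, name):
    """Push: the local history's data is added to the remote station."""
    remote_station[name] = list(local_station[name])
```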
About the Retry Trigger
Upon any unsuccessful execution of a descriptor component (e.g. a history import descriptor, schedule
import descriptor, and so on) contained in that same device extension, the Retry Trigger causes
automatic retries at the defined interval (default value is 15 minutes). Retries continue until a
successful execution occurs.
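The retry behavior above can be sketched as a simple loop. This is a conceptual model with simulated time, not the Niagara API; the attempt cap exists only to make the sketch terminate.

```python
# Sketch of the Retry Trigger: after a failed descriptor execution, retry
# at a fixed interval (default 15 minutes) until an execution succeeds.

def run_with_retries(execute, max_attempts, interval_minutes=15):
    """Call execute() up to max_attempts times; return (succeeded, minutes_waited)."""
    waited = 0
    for _ in range(max_attempts):
        if execute():
            return True, waited
        waited += interval_minutes  # wait one retry interval, then try again
    return False, waited
```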
Alarms extension properties
• Source Name
Available in the Alarms extension of a NiagaraStation, so that the name of the sending station can
be added to the received alarm’s “alarm source” name, or another format be selected. This affects the
alarm’s display (versus routing).
The available choices are:
• Prepend — (default) Adds leading text (as specified) to the incoming “alarmSource” field
string, where the default format is %parent.parent.displayName%:
For example, for an alarm from the remote station named “EastWing” with a local alarm source
of “Room101”, the resulting alarm source would look like EastWing:Room101
• Append — Adds trailing text (as specified) to the incoming “alarmSource” field string. For
example, if the format entered is -%parent.parent.displayName% and the alarm from the
remote station named “EastWing” has a (local) alarm source of “Room101”, the resulting alarm
source would be Room101-EastWing
• Use Existing — Only uses the incoming “alarmSource” field string (ignores whatever for-
mat text is entered in this property).
• Replace — Only uses the format text entered in this property as alarm source (ignoring what-
ever incoming “alarmSource” string is received).
• Last Send Time
Timestamp when the last local alarm was routed to this device. (Alarm routing is configured under the
local station’s AlarmService, with a corresponding StationRecipient or BacnetDestination component
linked to one or more AlarmClass components.)
• Last Send Failure Time
Timestamp when a local alarm routed to this station last failed to send.
• Last Send Failure Cause
Text string describing the cause of the failure when routing a local alarm to this station.
A device’s Schedules extension is the parent container for imported schedules—Niagara schedule compo-
nents with a ScheduleImportExt. Events in an imported schedule are obtained from that device, and are
read-only (often called “slave schedules”). By default, the Schedules extension also contains a Retry
Trigger, see “About the Retry Trigger” on page 1-34.
The Schedules extension can also contain schedule export descriptors. These correspond to local station
schedules that are exported (“pushed”) to that remote station (often called “master schedules”). A
schedule export descriptor is automatically created whenever a local schedule is “imported” into a remote
station.
The differences between a schedule import and a schedule export are as follows:
• Import
An imported schedule appears in the local station as a Niagara schedule, where read-only schedule
events are configured/adjusted in a remote source device.
• If under a NiagaraStation device, the source is a Niagara schedule in that remote station.
• If under a BacnetDevice, the source is a BACnet Schedule or Calendar object residing in that BACnet device.
When building a network in the station, you use this view to create, edit, and delete proxy points in the
station database. See “About proxy points” on page 1-27.
Following station configuration, this view provides a status summary for proxy points. You can also use
it to issue an override action to a writable proxy point, e.g. “Active,” “Off,” and so on.
Note: Only proxy points appear in the Point Manager view—any other components that may also reside under
Points do not appear. For example, you do not see kitControl or schedule components, or any control point
with a “null proxy extension.” However, you can use other views of Points (wire sheet, property sheet, slot
sheet) to access these items.
At the bottom of the view, buttons “New Folder” and “New” let you create new point folders and proxy
points in the station database. An “Edit” button lets you edit one or more selected proxy points. If the
driver supports online “device learning,” buttons “Discover,” and “Match” are also typically available.
Finally, additional buttons may be available, depending on the driver.
When you click OK, a new PointFolder component is added under Points. Point folders are often useful,
especially if you are creating many proxy points, or need extra wire space for additional kitControl or
schedule components (and whatever links you wish to create between them).
Note: A device’s PointFolder component is different than a “normal” Folder component, because it includes that
device’s Point Manager view as its default view (just like the parent Points component). You can double-
click a point folder (either in the Point Manager view pane or in the Nav tree) to access its Point Manager.
Also, an “All Descendants” command is available from the root Point Manager, which allows you to see all
proxy points in that device.
All Descendants When in the Point Manager of a point folder, it is “unaware” of proxy points in other
point folders (or in the root of Points). However, from the root Point Manager view (of Points), you can
“flatten” the folder structure to view all proxy points by selecting “All Descendants” from the Manager
menu (Figure 1-50) or simply clicking the “All Descendants” tool on the toolbar (Figure 1-51). Note that
“All Descendants” is also available at any point folder level—you see all points in descendants
from that folder level on down.
Note: Be aware that in a device with many point folders and proxy points, using the all descendants feature
(particularly at the Points root level) may decrease performance. This might result from so many points
becoming subscribed.
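The flattening that “All Descendants” performs can be sketched as a simple recursive walk (illustrative Python only; the dictionary shape and point names are made up to stand in for the component tree):

```python
def all_descendant_points(folder):
    """Flatten a Points tree: collect proxy points in this folder and in
    every point folder below it, mirroring the "All Descendants" view."""
    points = []
    for child in folder["children"]:
        if child["type"] == "proxy_point":
            points.append(child["name"])
        elif child["type"] == "point_folder":
            points.extend(all_descendant_points(child))   # recurse into subfolder
    return points

points_root = {
    "type": "point_folder",
    "children": [
        {"type": "proxy_point", "name": "SpaceTemp"},
        {"type": "point_folder", "children": [
            {"type": "proxy_point", "name": "FanStatus"},
            {"type": "proxy_point", "name": "FanCmd"},
        ]},
    ],
}
assert all_descendant_points(points_root) == ["SpaceTemp", "FanStatus", "FanCmd"]
```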
Note: If you are using point folders, and you click on a table column to re-sort points, be aware that any
proxy point-to-point folder organization is lost in that view. However, you can always see the contents of
point folders clearly in the Nav tree, and again when returning to the Point Manager view from another
view.
New
The New button in the Point Manager adds new proxy points to the station. Typically, you use New only if the
driver does not support online discovery, or if you are programming offline. Under any of the Modbus
drivers, for example, you use New to add proxy points.
Using New is equivalent to copying proxy points from that driver’s palette, except it provides more utility
for specifying other parameters. Minimally, the New dialog provides a selection list for proxy point type, and
also the number of points to add (Figure 1-52).
In some driver networks, the New dialog may provide other parameters, such as a starting address range
when adding multiple proxy points. When you click OK, the number and type of proxy points you
specified are added under Points or a point folder. You need to edit specific properties for each new proxy
point (note these are typically properties of its proxy extension).
Point Edit
In the Point Manager view, you can Edit any proxy point shown in the station database by simply double-
clicking it. (Edit does not apply to point folders.)
Edit
The Edit dialog appears with the proxy point listed (Figure 1-53).
Note: The Edit dialog for a proxy point shows mostly properties under its proxy extension, plus (typically) the
parent point’s Name and Facets. Many of the proxy extension values are required for communications, and
will vary among drivers. To access all properties of the proxy point, including all those under any of its
extensions, go to its property sheet.
When a single point is selected in the Edit dialog, you can edit any property except Type (fixed when you
added the point). Included is the ability to edit the Name of the proxy point in the station. This is equivalent
to the right-click Rename command on the point.
The following related topics also apply:
• Proxy point “gang” edits
• Manager table features
Proxy point “gang” edits
The Edit button in the Point Manager allows you to edit one or more proxy points in the station in a single
dialog. Before clicking Edit, use standard Windows keyboard controls to highlight (select) multiple points
(e.g. hold down Ctrl key and click desired points).
Note: Edit is also on the Point Manager toolbar and the Manager menu, if any point is selected.
The “gang” edit feature is useful for making the same change in multiple (selected) proxy points. For
example, as shown in Figure 1-54, you can change “Tuning” (point’s associated tuning policy) in multiple
points at the same time (versus editing this individually).
When you have the Edit dialog open with multiple points, you can also click on a specific one to make
individual edits (e.g. Name, or any other editable property), during the same dialog session where you
made “gang” property edits. When you click OK, all the changes are applied to the proxy points as made
during your Edit session. For more details, see “About gang edits.”
Note: When you have multiple points selected in the Edit dialog, properties that must be unique (such as Name)
are automatically unavailable (dimmed). However, note that some properties that typically should be
unique (often address properties) may still be available for “gang edit,” as this rule is not automatically
enforced. Typically, when editing these properties, you should verify only a single point component (row) is
highlighted in the table.
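The gang-edit rule described above can be sketched as follows (plain Python; the property names and the `UNIQUE_PROPS` set are illustrative assumptions, not actual Niagara slot names):

```python
UNIQUE_PROPS = {"Name"}  # must differ per point, so dimmed in a gang edit

def gang_edit(points, changes):
    """Apply the same property changes to every selected proxy point,
    refusing edits to properties that must stay unique per point."""
    for prop in changes:
        if prop in UNIQUE_PROPS and len(points) > 1:
            raise ValueError(prop + " cannot be gang-edited")
    for point in points:
        point.update(changes)

pts = [{"Name": "Pt1", "Tuning": "default"}, {"Name": "Pt2", "Tuning": "default"}]
gang_edit(pts, {"Tuning": "fast"})      # same change applied to both points
assert [p["Tuning"] for p in pts] == ["fast", "fast"]
```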
Note: Under a NiagaraNetwork (only), a Niagara Point Manager discover produces an intermediate dialog: the
Bql Query Builder. You use it to browse the remote station and specify what items to select from as
discovered proxy point candidates. For details, see “About the Bql Query Builder” on page 2-20.
In Learn Mode, the two panes in the Point Manager operate as follows:
• Discovered (top pane)
Lists data items discovered on the driver’s network, as proxy point candidates. For any data item
found that already exists in the station database, it will appear “ghosted” (listed faintly). Note items
listed may be “expandable”—see “Discovered selection notes” on page 1-42.
A job “progress bar” is also included on top of the Discovered pane.
Note: A Cancel button is available during a discovery job. If needed, use it to stop discovery.
• Database (bottom pane)
Lists proxy points and point folders that are currently in the station database.
Note: As necessary, drag the border between the two panes to resize. Also (at any time), toggle between the two-
pane Learn Mode and the single-pane (Database) view by clicking the Learn Mode tool in the toolbar
(Figure 1-24 on page 18), or using the Learn Mode command in the Manager menu.
Discovered selection notes Often, data items listed in the Point Manager’s discovered pane are
expandable, having one or more related items, each individually selectable. Expandable items are
indicated by a leading plus (+), which you click to expand (a toggle control).
Figure 1-56 shows an example item (Lonworks nvoUnitStatus) expanded to reveal individual numeric-
type elements, each selectable as a separate proxy point.
Figure 1-56 Expand discovered data items to see all point candidates
Here, if you selected only the top “mode” element to add, you would have one proxy EnumPoint to
monitor the enumerated unit status (auto, heat, cool, etc), but would not have any of the related numeric-
type items proxied as control points.
Depending on the driver/device type, expandable discovered items represent individual properties or
other “structured” data. Some examples:
• BacnetDevice—Each top item is typically the “present value” property of the BACnet object (most
commonly selected). Expand the item to see other properties of the object.
• NiagaraStation—Using Bql Query Filter defaults (Config, control, ControlPoint), each top item
is equivalent to the “Out” slot of the Niagara component (and most commonly selected). Expand the
item to see other slots in the component (including Out).
• LonDevice—Each top item is typically the first element (field) in a structured SNVT (multi-field data
structure), as used in that particular network variable (nv or nci). To access other data fields in the
SNVT’s structure, you must expand that item.
For specific details, refer to the “Point discovery notes” section in a particular driver document.
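The expand-to-select idea can be modeled minimally (illustrative Python; the element names only echo the nvoUnitStatus example above and are not real SNVT field definitions):

```python
# A discovered item with expandable child elements, each selectable as its
# own proxy point candidate (names are illustrative only).
discovered = {
    "name": "nvoUnitStatus",
    "elements": ["mode", "heat_output_primary", "cool_output", "fan_output"],
}

def point_candidates(item):
    """Each element of a structured discovered item is individually
    selectable as a separate proxy point candidate."""
    return [item["name"] + "." + e for e in item["elements"]]

# Selecting only the first element proxies the enumerated status alone:
assert point_candidates(discovered)[0] == "nvoUnitStatus.mode"
```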
Add
The Point Manager's Add button is available in Learn Mode when you have one or more data items
selected (highlighted) in the top discovered pane. When you click Add, an Add dialog appears that allows
you to edit properties before the proxy point(s) are created in the station (Figure 1-57).
Note: Whenever you select one or more items in the top discovered pane, the toolbar also has an available Add
tool (“plus” symbol), and the Manager menu has an Add command. Also, you can simply double-click a
discovered item to bring it up in the Add dialog.
The Add dialog is nearly identical to the point Edit dialog, but allows you to edit Type as well as other
properties.
Often, you may wish to change Type from the pre-selected one, at least between read-only points and the
equivalent writable control point within that data category. For example, if adding a proxy point for the
present value (default) property for a BACnet Binary Output object, you may wish it to be a read-only
BooleanPoint rather than the default BooleanWritable. As shown in Figure 1-58, you can do this in
the Add dialog before the point is added to the station database (but not later, using the point Edit feature).
Note: In most cases, alternate point Types include StringPoint, and possibly others. Generally speaking, there are
few practical applications for changing the data category of a proxy point type (e.g. from Boolean to Enum
or String); however, this may be an option. Note that if working under a NiagaraNetwork, only read-only
proxy points are available.
Address-related properties in the Add point dialog already have acceptable values for operation
(otherwise, the data item would not have been discovered). Often you change only Name and
possibly Type, unless you know of other settings you wish to change now. You can always Edit these
properties in the proxy point(s) after you click OK and add them to your station.
Match
Match is a feature that may be useful when you have an application with proxy points you wish to reuse,
or if you have programmed offline using the New point feature.
• In the first case (application for reuse), you could have some number of proxy points included in an
application that you have saved and now recopied under the target Points container. Often, address-
related properties in the copied proxy points are incorrect. However, you can use the Point Manag-
er’s Learn Mode and step through each proxy point in the copied application, and use the Match fea-
ture to “sync” with the intended (and discovered) data item.
• In the second case (offline programming) where a connection to the actual device network is un-
available, you can manually add New devices and New proxy points, and begin station engineering
of a driver network. Typically, most component creations under a driver network are possible (in-
cluding all levels) using the New command in the various “manager” views (Device Manager, Point
Manager, other device extension managers). Or, you can add saved applications (from the device lev-
el on down) and edit as necessary. Then, when online with the driver network later, you could use
Match to “sync” to existing components (device-level, proxy points, and so forth).
The Match button in the Point Manager becomes available when in Learn Mode, and you have:
1. Selected one point candidate in the top (Discovered) pane.
2. Selected one existing proxy point in the bottom (Database) pane.
Note: In this case, the toolbar also has an available Match tool, and the Manager menu has a Match command.
When you click Match, the Match dialog appears, as shown in Figure 1-59.
Note: Match is strictly a “one-to-one” function for “discovered-to-database”—it is unavailable any time
you have multiple items selected in either the top Discovered pane or the bottom Database pane.
The Match point dialog is nearly identical to the single-point Edit dialog, and typically provides the
“discovered” point address values. Often, most properties in the Match dialog have acceptable address
values required for operation (otherwise, the item would not have been discovered). You can always Edit
the proxy point after you click OK and add it to your station.
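Conceptually, a Match copies the discovered item's address-related values onto the one selected database point. A hedged sketch (plain Python; the property names and `ADDRESS_PROPS` set are made-up examples, not actual driver properties):

```python
ADDRESS_PROPS = {"address", "object_id"}  # illustrative address-related keys

def match(discovered_item, db_point):
    """One-to-one match: sync an existing proxy point's address-related
    properties to the values of a single discovered item."""
    for prop in ADDRESS_PROPS & set(discovered_item):
        db_point[prop] = discovered_item[prop]
    return db_point

# An offline-created point with a placeholder address gets "synced" to the
# discovered item's working address:
offline_point = {"Name": "ZoneTemp", "address": "TBD"}
found = {"address": "AI:3", "object_id": 3}
assert match(found, offline_point)["address"] == "AI:3"
```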
• Property sheet
Also includes all proxy points, plus any kitControl and schedule components, simple control points,
and so on. As needed, you can use this view to expand down to any level to access and edit properties.
For example, you can access an alarm extension under a proxy point.
• Slot sheet
Also includes all proxy points, plus any kitControl and schedule components, simple control points,
and so on. As needed, you can use this view to edit config flags from defaults (say, for security
schemes), edit config facets, and add name maps.
• Px view (New View)
(Optional) A custom graphical view that you define by creating a new Px file or using an existing Px
file. When created, this view becomes the default view for the device’s Points extension (or if created
for a points folder, its default view).
You use this view to create, edit, and delete history import descriptors. Each import descriptor you add
results in the creation of a local Niagara history—regardless of whether the source log data is a BACnet Trend Log,
Niagara history, or other type of data log (depending on driver type and the parent device’s component type).
This view provides a status summary for collecting imported histories. You can also use it to issue manual
“Archive” commands to one or more history descriptors. This causes an immediate import request to pull
logged data from the remote device.
Note: Only history import descriptors appear in the History Import Manager view—any other components that
may also reside under Histories do not appear. For example, you do not see the default “Retry Trigger”
component (see “About the Retry Trigger” on page 1-34). However, you can use the Histories property sheet
to access these items.
At the bottom of the view, the “New” button lets you manually create new import descriptors in the
station. An “Edit” button lets you edit one or more import descriptors. Buttons “Discover,” “Add” and
“Match” are also available (these work similarly to those in the Point Manager). An “Archive” button is
available to manually import (pull data) into one or more selected histories.
Starting in AX-3.5, the History Import Manager in some drivers provides a “New Folder”
button to add folders to organize history import (and export) descriptors. Each such folder provides its
own set of history manager views. See the section “Niagara History Import Manager” on page 2-29 for
details on this, as well as other history import information specific to the niagaraDriver.
For more common details about a History Import Manager, see:
• History Import New
• History Import Edit
• About History Import Discover, Add and Match (Learn Process)
History Import New
Button New exists in a device’s History Import view, but is typically used only if:
• Using “system tags” to import remote histories, such as in the Niagara History Import Manager,
versus online discovery.
• Engineering offline—most devices with a Histories extension support online discovery (a FileDevice
is one exception). If offline, Match may be used later (when online with the device).
Note: A “New” tool is also on the History Import Manager toolbar, and in the Manager menu.
History Import Edit
In the History Import Manager, you can Edit any import descriptor in the station database by simply
double-clicking it.
Edit The Edit dialog appears with the import descriptor listed (Figure 1-61).
Figure 1-61 Edit dialog in Bacnet History Import Manager (single history)
This dialog shows configuration properties of the history import descriptor, plus Name (equivalent to the
right-click Rename command on the descriptor). To access all properties (including all status properties),
go to its property sheet.
For related property details, see “BACnet Trend Log notes” in the BACnet Guide, or if a Niagara History
Import descriptor, see “Niagara History Import properties” on page 2-30.
The following related topics also apply:
• History descriptor “gang” edits
• “Manager table features” on page 1-21
History descriptor “gang” edits The Edit button in the History Import (or Export) Manager allows you
to edit one or more descriptors in a single dialog. Before clicking Edit, use standard Windows keyboard
controls to highlight (select) multiple descriptors (e.g. hold down Ctrl key and click desired descriptors).
Note: Edit is also on the History Import (or Export) Manager toolbar and the Manager menu, if any descriptor
is selected.
The “gang” edit feature is useful for making identical changes in multiple (selected) descriptors. For
example, you can change “Execution Time” (when data is pulled into imported history) in multiple
descriptors at the same time (versus editing individually).
When you have the Edit dialog open with multiple descriptors, you can also click on selected ones to make
individual edits (e.g. Execution, Time of Day), during the same dialog session where you made “gang”
edits. When you click OK, all the changes are applied to the descriptors as made during your Edit session.
Note: When you have multiple descriptors selected in the Edit dialog, properties that currently have different
values are automatically unavailable (dimmed). Only a property that currently has the same value (across
all selected descriptors) is available for gang edit.
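This availability rule can be sketched as a set computation (illustrative Python; the property names are examples only, not actual descriptor slots):

```python
def gang_editable_props(descriptors):
    """A property is available for gang edit only when it currently has the
    same value across every selected descriptor; all others are dimmed."""
    props = set(descriptors[0])
    return {p for p in props
            if all(d.get(p) == descriptors[0][p] for d in descriptors)}

descs = [
    {"Execution Time": "Daily", "Time Of Day": "02:00"},
    {"Execution Time": "Daily", "Time Of Day": "03:30"},
]
# "Time Of Day" differs between the two, so only "Execution Time" is editable:
assert gang_editable_props(descs) == {"Execution Time"}
```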
About History Import Discover, Add and Match (Learn Process)
Unless working offline, you can use the learn process to import histories in the station. As with other
NiagaraAX learns, this is a two-step process in the History Import Manager, where you:
1. Under a selected device component, use its (Histories extension) Histories Import Manager view to
Discover log data (histories) as candidates for inclusion in the station as histories.
2. Select and Add from those histories, creating history descriptors under the device’s Histories
container.
Note: The Histories Import Manager reinforces this process by providing two separate panes in the view whenever
you enter “Learn Mode.” See “About Learn toggle” on page 1-18.
Discover When you click Discover, the Histories Import Manager splits into two panes (Learn Mode):
discovered items in the top pane, and existing import descriptors in the bottom pane (Figure 1-62).
Note: If under a BacnetDevice, a “Bacnet Trend Logs Discover” job is started, with a progress bar at top. If
BACnet Trend Log objects are found, they are listed in the discovered pane.
Figure 1-62 Discover splits History Import Manager (Bacnet History Import Manager shown)
Note: Under a NiagaraNetwork (only), the discovered pane shows the collapsed tree structure of all Niagara
histories of that selected NiagaraStation. Click to expand and select histories for import. See “Discovered
selection notes” on page 2-30 for more details.
In Learn Mode, the two panes in the Histories Import Manager operate as follows:
• Discovered (top pane)
Lists histories (Niagara) or Trend Logs (Bacnet) found in the device, as history descriptor candidates.
Any item that already exists as a history in the station is “ghosted” (faintly listed).
If a BacnetDevice, a job “progress bar” is also included on top of the Discovered pane.
Note: A Cancel button is available during a discovery job. If needed, use it to stop discovery.
• Database (bottom pane)
Lists history descriptors currently in the station database (each has an associated history).
Note: As necessary, drag the border between the two panes to resize. Also (at any time), toggle between the two-
pane Learn Mode and the single-pane (Database) view by clicking the Learn Mode tool in the toolbar
(Figure 1-24 on page 18), or using the Learn Mode command in the Manager menu.
Add The Add button is available in Learn Mode when you have one or more items selected (highlighted)
in the top discovered pane. When you click Add, a dialog appears that allows you to edit properties before
the history descriptor(s) are created in the station.
Note: Whenever you select one or more items in the top discovered pane, the toolbar also has an available Add
tool (“plus” symbol), and the Manager menu has an Add command. Also, you can simply double-click a
discovered item to bring it up in the Add dialog.
The Add dialog is identical to the history import descriptor Edit dialog. For details on properties of a
Niagara HistoryImport descriptor, see “Niagara History Import properties” on page 2-30.
Match Match, as an online function, is available if you have one history selected in the top (discovered)
pane and one history import descriptor selected in the bottom (database) pane. However, usage of Match
when importing histories from a device (NiagaraStation, BacnetDevice) is generally not recommended.
Instead, use the Discover and Add method to import histories.
Figure 1-63 Device Histories View on the Histories (Trend Logs) extension of a BacnetDevice component
This view displays a filtered list of “history shortcuts” for histories imported or exported from this device,
where shortcuts are automatically created by the view. If you double-click on a shortcut in the view, the
default view of that local Niagara history displays. Right-click for a menu to select other history views.
Note: The shortcut icon in the Device Histories view is a visual reminder that the view is displaying a shortcut,
not the actual history.
In addition to this automatically-populated “convenience view”, you can add other history shortcuts
anywhere under a device component, using a History Shortcuts component copied from the
histories palette. This may also be useful for imported/exported histories of a NiagaraStation, when
engineering a Supervisor station. For related details, see “About history nav shortcuts” in the User Guide.
When building a network in the station, you use this view to create, edit, and delete imported Niagara
schedules. In the case of a Niagara network (only), each schedule that you import results in the creation
of a remote “schedule export descriptor” in that remote Niagara station.
Following station configuration, this view provides a status summary for collecting imported schedules.
You can also use it to issue manual “Import” commands to one or more schedules. This causes an
immediate import request to pull schedule configuration data from the remote device.
Note: Only imported schedules appear in the Schedule Import Manager—any other components that may also
reside under Schedules do not appear. For example, you do not see the default “Retry Trigger” component
(see “About the Retry Trigger” on page 1-34), or if a NiagaraStation, schedule export descriptors. However,
the Nav tree and other views on Schedules provide you access to these items.
At the bottom of the view, the button “New” lets you manually create new imported schedules in the
station. An “Edit” button lets you edit a few properties of one or more imported schedules. Buttons
“Discover,” “Add” and “Match” are also available (these work similarly to those in the Point Manager). An
“Import” button is available to manually import (pull data) into one or more selected imported schedules.
Finally, depending on driver, additional buttons may be available.
For more details, see:
• Schedule Import New
• Schedule Import Edit
• About Schedule Import Discover, Add and Match (Learn Process)
Note: The Edit dialog shows some properties of the schedule’s ScheduleImportExt, plus Name (equivalent to the
right-click Rename command on the parent schedule component). To access all properties of the schedule
(including all status properties) go to its property sheet, or to see the imported schedule events, go to its
Scheduler view. Or, in the Nav tree you can simply double-click the schedule for its Scheduler view.
The following related topics also apply:
• Schedule Import properties
• Schedule Import or Export “gang” edits
• Manager table features
• Last Trigger
Timestamp of when last interval or daily import occurred.
• Next Trigger
Timestamp of when the next interval or daily import is configured to occur.
Figure 1-66 Example “gang edit” in Edit dialog of Schedule Export Manager
In Learn Mode, the two panes in the Schedule Import Manager operate as follows:
• Discovered (top pane)
Lists schedule components (Niagara) or Schedule and/or Calendar objects (Bacnet) found in the de-
vice, as candidates for imported schedules. Any item that already exists as a schedule in the station
is “ghosted” (faintly listed).
• Database (bottom pane)
Lists schedules currently imported in the station database (contained in Schedules container).
Note: As necessary, drag the border between the two panes to resize. Also (at any time), toggle between the two-
pane Learn Mode and the single-pane (Database) view by clicking the Learn Mode tool in the toolbar
(Figure 1-24 on page 18), or using the Learn Mode command in the Manager menu.
Add
The Add button is available in Learn Mode when you have one or more items selected (highlighted) in
the top discovered pane. When you click Add, a dialog allows you to edit properties before the schedule
is created in the station. The Add dialog and Edit dialog are identical.
Note: Whenever you select one or more items in the top discovered pane, the toolbar also has an available Add
tool (“plus” symbol), and the Manager menu has an Add command. Also, you can simply double-click a
discovered item to bring it up in the Add dialog.
For details on properties, see “Schedule Import properties” on page 1-51.
Match
Match, as an online function, is available if you have one schedule selected in the top (discovered) pane
and one schedule selected in the bottom (database) pane. However, usage of Match when importing
schedules from a device (NiagaraStation, BacnetDevice) is generally not recommended. Instead, use the
Discover and Add method to import schedules.
CHAPTER 2
Niagara Network
• Fox Service
A container for Fox protocol settings affecting client connections made to the local station, such as
from Workbench or from another station. Such connections appear locally as “server connections.”
For details, see “About the Fox Service” on page 2-2.
• History Policies
A container for “rules” that specify how remotely-generated histories should be changed when these
histories are pushed into the station (that is, exported from remote stations). Also contains a poll
scheduler for “on demand” polling of histories. For details, see “About History Policies” on page 2-6.
• Workers
A container with a configurable property “Max Threads”. This property allows tuning for large Niagara
networks (many NiagaraStation components), and is the only visible part of a “shared thread
pool scheme” for large-job scalability. In current releases, work from a single station is limited
to one thread, and the former default value of 50 was changed to a “max” value. This allows the
local station’s thread pool to grow uncapped while alleviating a former thread-starving issue.
In addition, tuning policies are simplified in a Niagara Network, as compared to “polling type” drivers.
See the next “Niagara Tuning Policy notes” for more information.
About the Fox Service
Included are properties for the TCP port number assigned to the Fox server, the authentication method used, and various timeout/trace settings. If the host platform is configured for SSL (recommended if AX-3.7 or later), you often change a few related properties from their defaults. See “Fox Service properties” for more details.
Authentication is required when establishing any Fox connection to/from the station:
• If opening a station in Workbench, you must enter a valid station username and password for it in
the station login dialog (otherwise it does not open).
• If accessing a station in a browser as a user, you must also enter valid user credentials (log in).
• If adding a NiagaraStation to a station’s NiagaraNetwork, you must configure username and pass-
word properties under its Client Connection slot (otherwise it remains “down”). Often, you enter the
username and password of a specific “service-type” user account in that station. You also specify the
software port used by that station’s Fox server.
Note: Often in a multi-station job, in each station you create a user specifically for station-to-station
communications, typically with “admin write” privileges. This is the “service-type” account that you
reference when you edit a NiagaraStation’s Client Connection properties, entering its username and
password. For related details, refer to the section “Multi-station security notes” in the User Guide.
The Fox Service also has a Server Connections container slot with a default ServerConnectionsSummary
view. Client connections to the station’s Fox server are dynamically modeled as “SessionN” child compo-
nents. The summary view allows you to see all current connections, and if necessary, perform a right-
click Force Disconnect action on one or more connected users.
Fox Service properties
Figure 2-2 shows the Fox Service expanded in the NiagaraNetwork property sheet of an AX-3.7u1 station.
• Foxs Enabled
Default is false. Must be true for Foxs communications to occur on the port above.
• Foxs Only
Default is false. If true, and Fox Enabled and Foxs Enabled are both true, Fox connection attempts are redirected as Foxs connections. Such a configuration is different from setting Fox Enabled false and Foxs Enabled true. In that case, only Foxs connection attempts work; Fox connection attempts are ignored.
Note: If station communications are needed with any other NiagaraAX host that is not configured for (or not capable of) AX-3.7 or later SSL (such as a JACE-2), leave this at false (default).
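The interaction of the Fox Enabled, Foxs Enabled, and Foxs Only properties described above can be sketched as decision logic. This is an illustrative Python sketch only, not Niagara source code; the function name and result strings are hypothetical.

```python
def fox_attempt_result(fox_enabled: bool, foxs_enabled: bool,
                       foxs_only: bool, attempt: str) -> str:
    """Sketch of how a FoxService treats an incoming connection attempt
    ('fox' or 'foxs'), per the property descriptions above.
    Illustrative only -- not Niagara source code."""
    if attempt == "foxs":
        return "accepted" if foxs_enabled else "ignored"
    # attempt == "fox" (plain, unsecured)
    if foxs_only and fox_enabled and foxs_enabled:
        return "redirected to foxs"   # Foxs Only redirects plain Fox attempts
    if fox_enabled:
        return "accepted"
    return "ignored"                  # Fox disabled: plain Fox attempts ignored
```

For example, with all three properties true, a plain Fox attempt is redirected as a Foxs connection; with Fox Enabled false and Foxs Enabled true, the same attempt is simply ignored.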
• Foxs Min Protocol
Specifies which standard protocol the Foxs server supports for client connections. The default is SSLv3+TLSv1 (both); other choices are SSLv3 (only) or TLSv1 (only).
• Foxs Certificate
The “alias” for the server certificate in the host platform’s “key store” to use for any Foxs connection.
The default is the “tridium” self-signed certificate, which is automatically created when SSL is first
loaded (by presence of certain modules and proper host licensing). If other certificates are in the host
platform’s key store, you can use the drop-down control to select another one instead.
• Authentication Policy
These policy options apply to how password authentication is handled by the default NiagaraAX
User Service or any other service when establishing a client connection using the Fox protocol.
Property choices (when using AX-3.7u1) are as follows:
• Digest
This option uses password hashing. This is the recommended setting when using the default
User Service.
• Basic
This option does not use encryption; therefore, the user password is passed in clear text. Typically, Basic is used only if using LDAP.
Note: Prior to AX-3.7u1 (in AX-3.7), the “Digest” choice was “Digest MD5”, which equates to “legacy
encryption”. This was strengthened to SCRAM-SHA (Salted Challenge Response Authentication
Mechanism) in AX-3.7u1 and later. Any station upgraded from AX-3.7 will automatically upgrade
from “Digest MD5” to “Digest”. This also relates to a new FoxService property, “Legacy Authenti-
cation”, described below.
Additionally, a third “Transactional” choice for Authentication Policy (available in AX-3.7 and earlier) is no longer available, as products that were designed to use it were discontinued.
• Legacy Authentication
(AX-3.7u1 or later only) Applies to password authentication when establishing a connection with a
remote Fox client (either another station or Workbench). Choices are as follows:
• Strict
(Recommended, and default) Remote client must be able to handle passwords using SCRAM-
SHA, requiring it to be running AX-3.7u1 or later. Clients running earlier “pre-2013” Niagar-
aAX releases are considered “legacy clients”, and will be unable to connect.
• Relaxed Until (date)
Allows a legacy client to connect to this station using Basic authentication, but only until the
specified date. After that date, this setting behaves the same as if set to Strict.
• Always Relaxed
Always allows a legacy client to connect to this station using Basic authentication. Be especially
careful if setting to this value—see the Note and Caution below!
Note: Both “Relaxed” choices are intended for temporary usage only during a system upgrade to one
of the “security update” releases. For example, after upgrading a Supervisor, you could set its Legacy
Authentication to relaxed only until all its subordinate JACEs have also been upgraded. This would
allow those stations to continue to communicate to the Supervisor until upgraded. Strict is the most
secure option. After a system-wide upgrade, set this to Strict in the FoxService of all stations!
For related details, refer to “Upgrade considerations” in NiagaraAX 2013 Security Updates.
Caution Either “Relaxed” option is effectively the same as setting the FoxService property Authentication Policy to “Basic” for any legacy connection attempt, as this is the only “common ground” authentication between a legacy Fox client (Workbench or another station) and this update release station. Thus, passwords for client connections to this station from a legacy client are passed in clear text. Obviously, this is not a desired operating state. For this reason, you should seldom (if ever) set the Legacy Authentication property to “Always Relaxed”.
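The three Legacy Authentication choices can be summarized as decision logic. The following is a hedged Python sketch for illustration only, not Niagara source code; the function and parameter names are hypothetical, and treating the boundary date itself as already expired is an assumption.

```python
from datetime import date

def legacy_client_allowed(policy, today, relaxed_until=None):
    """Sketch of the Legacy Authentication decision for a legacy
    (pre-AX-3.7u1) Fox client. Illustrative only."""
    if policy == "Strict":
        return False                  # legacy clients cannot connect
    if policy == "Relaxed Until":
        # Basic authentication is allowed only until the specified date;
        # afterwards this behaves the same as Strict (boundary handling
        # here is an assumption).
        return today < relaxed_until
    if policy == "Always Relaxed":
        return True                   # Basic authentication always allowed
    raise ValueError("unknown policy: " + policy)
```

A client running AX-3.7u1 or later is unaffected by this decision, since it authenticates with SCRAM-SHA under any policy setting.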
• Request Timeout
Time to wait for a response before assuming a connection is dead (default is 1 minute).
• Socket Option Timeout
Time to wait on a socket read before assuming the connection is dead (default is 1 minute).
• Socket Tcp No Delay
Used to disable “Nagle’s algorithm”, found to cause issues with delayed acknowledgements that oc-
curred in TCP socket communications between Fox clients and servers. The default (and typically
recommended) value is true, disabling Nagle’s algorithm.
Note: On the Workbench side, there is a line in the system.properties file you can enter to
adjust this setting: “niagara.fox.tcpNoDelay=true”.
• Keep Alive Interval
Interval between keep alive messages (default is 5 seconds). The keep alive should be well below the
request timeout and socket option timeout.
• Max Server Sessions
Maximum number of Fox/Foxs server connections before further client connection attempts fail with a “busy” error. Default is 100.
• Multicast Enabled
Default is true. Allows UDP multicasting initiated by the station, necessary for a discovery from this station. Note this is different from Workbench UDP multicast support, which can be optionally disabled via an entry in the Workbench host’s system.properties file.
• Enable Announcement
Either true (default) or false. Enables support of UDP multicast announcement messages received by
the station, in support of learn/discover.
• Multicast Time To Live
Number of hops to make before a multicast message expires (default is 4).
• Server Connections
Provides status information about current Workbench client connections to the local station (does
not reflect station-to-station Fox connections).
• Trace Session States
Either true or false (default). Debug usage for tracing session state changes.
• Trace Read Frame
Either true or false (default). Debug usage for dumping frames being read from the wire.
• Trace Write Frame
Either true or false (default). Debug usage for dumping frames being written to the wire.
• Trace Multicast
Either true or false (default). Debug usage for tracing multicast messaging.
• Tunneling Enabled
If enabled, allows the station to perform as a Fox “tunnel” (i.e. proxy server) through which Work-
bench sessions can be established to other Niagara hosts, typically unreachable otherwise. A special
tunnel ORD syntax is used from the client Workbench to reach target stations through the Fox tun-
nel. Usage requires the proxy server (typically the Supervisor host) to be licensed for Fox tunneling.
For more details, see “About Fox Tunneling” in the engineering notes document Fox Tunneling and
HTTP Tunneling. Note this property does not affect platform tunneling, introduced in AX-3.5.
• Only Tunnel Known Stations
An added security option that affects Fox tunneling, HTTP tunneling, and platform tunneling. Ap-
plies only if the station is configured as a tunnel (proxy server).
• If left at the default (false) value, a tunnel connection is permitted to any target IP address given
in the tunnel ORD, or in the case of platform tunneling, to any “Tunnel to” IP address entered
in the Workbench “Open Platform” dialog.
• If set to true, Fox, HTTP, and platform tunneling is permitted only to a station that exists in the
proxy server’s NiagaraNetwork, where tunnel ORD (Fox) or URL (HTTP) syntax uses the target
station’s name instead of IP address. For platform tunneling, the target “Tunnel to” in the
Workbench “Open Platform” dialog also requires the station name.
For example, going through 10.10.8.9 to reach target station “demoStation” at 10.10.5.4
– instead of (Fox) tunnel at ip:10.10.8.9|fox:/10.10.5.4
– now (Fox) tunnel at ip:10.10.8.9|fox:/demoStation
or
– instead of (HTTP) tunnel at http://10.10.8.9/tunnel/10.10.5.4
– now (HTTP) tunnel at http://10.10.8.9/tunnel/demoStation
or
– instead of platform tunnel at “Host” = 10.10.8.9, “Tunnel to” = 10.10.5.4
– now platform tunnel at “Host” = 10.10.8.9, “Tunnel to” = demoStation
• fullPolicy
Applies if capacity is not “unlimited,” and specifies whether collection continues when the capacity record limit is reached (“roll”) or stops (“stop”).
As shown in Figure 2-3, the Supervisor’s History Policies “Config Rules” determine the capacity and
fullPolicy settings for each history upon export from the JACE stations “WestWing403” and
“EastWing403”. Rules are matched to remote station (device) names and history names, which determine
the corresponding capacity and full policy values to apply upon history creation. If a history does not
match any rule, it is created using the same capacity and full policy as in the source history. For an
example related to this scenario, see “Example config rules”.
Example config rules
As shown in the three-station scenario of Figure 2-3, histories in two JACE stations are exported to the
“Supervisor” station. As shown in Figure 2-4, the receiving Supervisor station has two additional config rules under the History Policies of its NiagaraNetwork.
In this example, the highest-priority “Default Rule” matches any exporting station, with a history name pattern of “Audit*”—this matches any “AuditHistoryLog”. Capacity is set to unlimited, as all history records are wanted for archiving.
The next two rules are applied to exported histories as follows:
• MetersRule — This rule applies to any station, for any history named beginning with “Meter”
(“Meter*”). Any such history is archived using unlimited capacity, as all records are wanted.
• EastWins — This rule applies only to stations named beginning with “East” (“East*”), for any histo-
ry. Such a history is archived using a capacity of 100,000 and a “roll” full policy.
Following this example (with exported histories in the JACE stations, as shown in Figure 2-3), the
histories are created in station “Supervisor” as follows:
• WestWing403
Histories are exported using the following capacity and fullPolicy (from config rules):
• AuditHistoryLog: unlimited, roll (from Default Rule)
• LogHistory: 500 records, roll (no rule matched, using source history config, as in JACE)
• Meter1: unlimited, roll (from “MetersRule”)
• ZoneTemp1: 500 records, roll (no rule matched, using source history config, as in JACE)
• Meter2: unlimited, roll (from “MetersRule”)
• ZoneTemp2: 500 records, roll (no rule matched, using source history config, as in JACE)
• EastWing403
Histories are exported using the following capacity and fullPolicy (from config rules):
• AuditHistoryLog: unlimited, roll (from “Default Rule”)
• LogHistory: 100,000 records, roll (from “EastWins” rule)
• Meter4: unlimited, roll (from MetersRule)
• ZoneTemp1: 100,000 records, roll (from “EastWins” rule)
• Meter3: unlimited, roll (from “MetersRule”)
• ZoneTemp2: 100,000 records, roll (from “EastWins” rule)
Again, note that the rules are processed in priority order. In this example, if the two optional rules were reordered (“EastWins” above “MetersRule”) before the JACE histories were exported, results would differ: “Meter” histories from EastWing403 would have a 100,000-record capacity instead.
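The priority-ordered, first-match-wins rule processing in this example can be sketched as follows. This is an illustrative Python sketch, not Niagara code; it assumes simple glob patterns and uses the rule, station, and history names from the example above.

```python
# Simplified sketch of History Policies "Config Rules" matching.
# Not Niagara source code; rules follow the three-rule example above.
from fnmatch import fnmatchcase

# Rules in priority order: (station pattern, history pattern, capacity, fullPolicy)
CONFIG_RULES = [
    ("*",     "Audit*", "unlimited", "roll"),  # Default Rule
    ("*",     "Meter*", "unlimited", "roll"),  # MetersRule
    ("East*", "*",      100_000,     "roll"),  # EastWins
]

def resolve_config(station, history, source_capacity, source_policy):
    """Return (capacity, fullPolicy) for a history upon export.
    The first matching rule wins; with no match, the source history's
    own configuration is used."""
    for st_pat, hist_pat, capacity, policy in CONFIG_RULES:
        if fnmatchcase(station, st_pat) and fnmatchcase(history, hist_pat):
            return capacity, policy
    return source_capacity, source_policy
```

Running this against the example stations reproduces the listed results: “Meter1” from “WestWing403” gets unlimited capacity from “MetersRule”, while “ZoneTemp1” from “EastWing403” gets 100,000 records from “EastWins”.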
As shown above, such discovered stations show a padlock icon by Station Name and show “foxs” for Scheme. Note that it is possible for a station to be listed twice—once with this “foxs” scheme and again with the regular “fox” scheme (and an icon without a padlock). However, typically you choose to add the “foxs” one.
When you double-click such a station for the Add dialog (or click to select, then click the Add button), the Add dialog shows the port used by that station for Foxs, for example 4911 (Figure 2-6).
Figure 2-6 Add dialog for NiagaraStation using Foxs (secure Fox)
In addition, the “Use Foxs” property is true by default. After entering the station username and password
(used for client connection to it), and clicking OK, the station is added to the database pane in the
Station Manager view. The status of the NiagaraStation is ok, provided a “trusted root” or “allowed host exception” is in the local station’s platform (for validation of that host’s server certificate).
Figure 2-7 NiagaraStation added for station using Foxs (secure Fox)
Again, such a station shows a padlock icon in the database pane. Note that some jobs may include a mix of JACE types, some of which support AX-3.7 SSL (JACE-6 series, JACE-7 series, any Win32-based JACE models), while other JACE types do not (JACE-2, JACE-4, or JACE-5 series). Stations running on the latter types lack the ability for a Foxs connection.
Note: When configuring any AX-3.7 station using SSL (for example, a Supervisor station) that requires station communications with a JACE that is not configured for (or does not support) 3.7 SSL, you must leave its FoxService property “Foxs Only” at the default false value. Otherwise, the remote JACE will be unable to make a client connection back to the station. For related details, see “Fox Service properties” on page 2-3.
For more details on adding a discovered NiagaraStation, see “Station Add notes” on page 2-11.
IP address versus hostname
By default, a station discover uses IP address (vs. host name). Typically, this is the recommended method.
However, if Niagara hosts are installed on a network using DNS, you can toggle the discovery method to
use host name instead. Do this in the table options menu (click upper right small control) of the Station
Manager, as shown in Figure 2-8. You can also select other columns for display, and sort on any column.
See “Manager table features” on page 1-21 for more details.
Depending on the chosen discovery method, discovered stations appear listed with either an IP address or a DNS name in the Address column. By default, discovered stations are listed showing an IP address. If a station uses a Fox port other than 1911 (default), the address includes the port number appended, for example: 192.168.1.112:1912.
Note: Starting in AX-3.7, a “Use Foxs” property is present when adding a NiagaraStation, set to false by default if adding manually (Add button). In this case, set it to true only if the remote host is properly configured for SSL, including the FoxService of the NiagaraNetwork on its station (you must know the port it uses). If using the Discover feature in the Station Manager, any discovered station using Foxs appears differently, and addition is simplified. See “Discovery notes when stations use secure Fox (Foxs)” on page 2-9. For complete SSL details, refer to the NiagaraAX SSL Connectivity Guide.
Typically, you enter a user name and password for a specific “service account” user previously made in
the remote station. By recommendation, this should be a user that was created especially for station-to-
station access, typically with admin write privileges, and not otherwise used for normal (login) access of
that station. Additionally, this user should be uniquely named for each project, and have a strong password.
Refer to the section “Multi-station security notes” in the User Guide for related details.
• The “Virtuals Enabled” property determines whether virtual components can be accessed in this sta-
tion, by any user with permissions on its “Niagara Virtual Gateway”. For related details, see “About
Niagara virtual components” on page 2-37 and “Security and Niagara virtuals” on page 2-44.
• When working in a Supervisor station configured for Niagara provisioning, additional “platform”
properties are in the Add (or Edit) dialog, to specify the platform daemon credentials and port the
provisioning code should use to connect to the remote JACE platform, to run provisioning jobs.
Figure 2-10 shows these properties near the bottom of the Add dialog.
Briefly, these “platform related” properties in the Add/Edit dialog are described as follows:
• Platform User — User account for the station’s platform, the same used as when making a plat-
form connection to it using Workbench.
• Platform Password — Password for this user account.
• Secure Platform — (new starting in AX-3.7) A Boolean that is false by default. Set to true only
if the JACE is currently configured to support a secure (SSL) platform connection.
• Platform Port — Port this JACE’s platform daemon monitors for a platform connection, with
the default value of 3011 (“standard” port for a “regular”, or unsecure platform connection). If
using a non-standard platform port, or a secure (SSL) platform connection, you edit this to
match that port number. Note the “standard” port for platform SSL is 5011, but this may be
configured differently in the JACE.
For more details, refer to the document NiagaraAX Provisioning Guide for Niagara Networks.
Note: Adding a NiagaraStation automatically creates a reciprocal NiagaraStation in the remote station. See the
section “Reciprocal NiagaraStation component”.
Reciprocal NiagaraStation component
When you add a station under the Niagara Network, that remote station automatically adds a “recip-
rocal” NiagaraStation component under its own Niagara Network, representing the original station.
However, its “Enabled” property is false, with a “disabled” status (looks grayed out as in Figure 2-11).
Figure 2-11 Example reciprocal NiagaraStation (representing Supervisor) created in JACE station
This provides a visual clue for you to edit its Client Connection properties Username and
Password to valid credentials for an appropriate user in the reciprocal station, and also to set the Enabled
property of the NiagaraStation from false to true (to allow operation).
Note: As shown in Figure 2-11, sometimes a reciprocal NiagaraStation may get added using “hostname” for
Address, rather than its preferred IP address. It is recommended that you edit the NiagaraStation’s Address
back to IP format, from the NiagaraStation’s property sheet. Otherwise, the station may remain “down”
after you enable it.
Figure 2-12 shows the Client Connection container expanded in a NiagaraStation’s property sheet. Many properties are status types that contain either real-time or historical information.
Right-click actions on the Client Connection container let you Manual Connect or Manual
Disconnect to this remote station.
Note it is also possible to have a “multi-tier” network user sync setup, for example where there is a single
“master” Supervisor host and several “subordinate” Supervisor hosts, each with multiple subordinate
JACE stations. In the NiagaraNetwork configuration of each subordinate Supervisor station, it would be
enabled to “Sync In” to the single NiagaraStation that proxies the master Supervisor. In other NiagaraSt-
ations that proxy remote JACEs, their Users extension is enabled to “Sync Out”. In this regard, each subor-
dinate Supervisor station could be considered both “user receiving” and “user sending”.
MasterSup (“Master Supervisor” station):
receiving stations that have a matching-named User Prototype will replicate that user, using the 3 “local
override” properties of that local prototype. In addition, this method allows greater variations for
defining different schemes for permissions, navigation, and Web profile types among groups of users.
However, you can simplify things by configuring sync strategy for “Use Default Prototype”. This will result
in more replicated network users created, as a user-receiving station will simply use its Default Prototype
for a received User (if it does not have a matching-named User Prototype).
See the section “Sync process description” on page 2-18 for details on how the user synchronization
feature works. For related details on the “UserService side” of network users, refer to the section “About
User prototypes” in the User Guide.
However, once a JACE is upgraded to a 2013 update level (typically AX-3.7u1 or later), network user
operation with an upgraded Supervisor station will continue as normal, including sync operation.
For each NiagaraStation in the network, the User Sync Manager shows the current status and config-
uration property values of its Users device extension. The example above, with its configuration of “Sync
In Enabled” and “Sync Out Enabled” properties, is representative of a NiagaraNetwork in a Supervisor
station, where all child stations are subordinate (typically JACE) stations.
From this view you can select one or more rows (stations) to edit properties, as shown in Figure 2-17
below, or manually request a sync of selected stations.
Figure 2-17 Edit dialog for one or more selected station's Users extension
tions.
• Branding restrictions, for example where “appliances” allow a station-to-station connection, but
prevent direct Workbench-to-remote station connection.
Note: In addition, this change lets you perform discovery of Niagara proxy points, histories, and schedules using
browser access, that is, “Web Workbench”. Prior to AX-3.5, an error message resulted if this was attempted.
Note the following about these discovery enhancements:
• No related changes are visible in the various manager views, such as Niagara Point Manager,
Niagara History Import Manager, Niagara Schedule Import Manager, and so on.
• Access to discoveries in the endpoint (target) station requires station login credentials. If that station
has an account with the same credentials as the account you used to open the current (Supervisor)
station, the discovery process starts immediately, without login.
If those login credentials fail, the Discover presents a login dialog. Again, you enter the login creden-
tials for that endpoint (typically JACE) station.
Figure 2-18 Discover in Niagara Point Manager uses Bql Query Builder
Component type filters are available in that dialog too—for details, see “About the Bql Query Builder” on
page 2-20.
Point Discover notes if Fox tunneling (AX-3.4 system)
Starting in AX-3.4, the Niagara Point Manager was enhanced with an option for point discovery
to a tunnel-accessible only station, using a “Tunnel Discovery Mode”. Special indication of this mode
being active is provided by a banner at the top of the Point Manager, as shown in Figure 2-19.
Note: This feature is not required when connecting to an AX-3.5 or later station and doing point discoveries (and
other discoveries). For related details, see “Niagara Discover enhancements” on page 2-18.
However, if working with an AX-3.4 station (even if using a more recent Workbench), you may need to use this feature for point discovery for stations accessible only via Fox tunneling. Otherwise, Workbench needs a direct Fox connection for point discovery, that is, to be on the same LAN.
In this case, you can toggle into this “Tunnel Discovery Mode” to perform a Niagara point discovery.
You enable/disable this feature using a pull-down toggle option on the bottom Discover button, also
available on the Manager menu and toolbar.
Figure 2-20 Toggle Tunnel Discovery Mode using menu, toolbar, or Discover button pull-down
For related details, see “About Fox Tunneling” in the engineering notes document Fox Tunneling and
HTTP Tunneling.
Figure 2-21 Default Bql Query Builder from NiagaraStation point Discover
As needed, expand Config to select the root component of interest and click OK (or simply double-click
the component of interest). The proxy point Find is now limited to that area of the station.
To change the type of component, click the “Of type” drop-down and make a selection (Figure 2-23).
Figure 2-24 Expand Match to see fields, use entry fields as needed
In the Figure 2-24 example above, the Match filter is set to: displayName, like, “Fan*”. This point discover returns components (within the Find parameters) whose names begin with “Fan”. This match would include all components named “Fan”, “FanAhu1”, “Fan21”, “Fantastic”, and so on.
However, components named “FAN” or “FAN21” would not be included (case-sensitivity), nor would
components named “AhuFan” or “BFan” be included—no leading wildcard (“*”) was used.
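The case-sensitive wildcard behavior described above can be reproduced with a small sketch. This is illustrative Python, not Workbench code; Python’s fnmatchcase happens to provide a comparable case-sensitive glob match.

```python
from fnmatch import fnmatchcase  # case-sensitive glob ("*" wildcard) matching

def name_matches(display_name, pattern):
    """Case-sensitive wildcard test, mirroring the Match filter behavior
    described above (illustrative only, not Workbench code)."""
    return fnmatchcase(display_name, pattern)
```

With pattern “Fan*”, names such as “FanAhu1” match, while “FAN21” (case mismatch) and “AhuFan” (no leading wildcard) do not; a pattern of “*Fan*” would also match “AhuFan”.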
Note: You can click the Match plus (“+”) icon multiple times to add multiple match lines, and configure each
match line differently, as needed. Click the minus “–” icon to remove a match line.
If you have multiple match lines, note the drop-down selection beside Match (“All”, “Any”) becomes
important, and works as follows:
• All — Works as “AND” logic, where a match must occur as specified on every match line.
• Any — Works as “OR” logic, where a match must occur as specified on any match line.
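The “All”/“Any” combination of multiple match lines can be sketched as follows (illustrative Python, not Workbench code; the function name is hypothetical):

```python
from fnmatch import fnmatchcase  # case-sensitive glob matching

def match_lines(display_name, patterns, mode="All"):
    """Combine multiple match lines: "All" works as AND logic,
    "Any" works as OR logic (illustrative only, not Workbench code)."""
    tests = (fnmatchcase(display_name, p) for p in patterns)
    return all(tests) if mode == "All" else any(tests)
```

For example, with match lines “Fan*” and “*1”, “All” accepts only names that both begin with “Fan” and end with “1”, while “Any” accepts a name satisfying either line.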
Saving a query allows you to recall it later to either use directly, or to modify further as a “starting point”
for another query. You can save as many Bql queries as you need. You can also edit a saved query
(meaning rename it or reorder it in your list of saved queries).
Note: Saved queries are unique to your Workbench instance, and not to any particular station.
To recall a saved query, click the Load saved query icon (folder), and make a selection, as shown in
Figure 2-26.
This loads that query into the Bql Query Builder dialog, where Find and Match entries automatically
change to reflect how that query was constructed.
To edit saved queries, click the Edit saved query icon (note pad). This produces a Save Query
dialog in which you can rename and/or reorder the queries in the list (Figure 2-27).
Note: If you are interested in the Baja Query Language (BQL), you can learn about queries within the Edit dialog.
Make the dialog window wider, and study the syntax of saved queries. For detailed information about queries, see “BQL” in the Niagara Developer Guide.
Figure 2-28 Niagara proxy point for writable point includes actions
Other concepts about Niagara proxy points are explained in the following sections:
• Best practices for Niagara proxy points
• Proxy of a proxy, other candidates
• Link control and Niagara proxy points
Figure 2-29 Local object cannot link into Niagara proxy point
In the example above, you cannot link into the Niagara proxy point “BFanBO_1.”
So in the other (remote) station, you must make a Niagara proxy point for the And object, then link its
output to the source point for “BFanBO_1.” Figure 2-30 shows the wire sheet views with these objects.
Figure 2-30 Source control object now a Niagara proxy point in remote station
In a typical Supervisor station (serving primarily Px views), usually not much direct link control is
needed, so this seldom applies. In addition, the schedule import/export mechanism allows for central
scheduling control using local (imported) schedule components.
However, if you are engineering applications between JACE stations that require link control, or have a
use case where the Supervisor needs to globally write a value into the link logic in JACE stations,
understand that you must always use Niagara proxy points in this fashion.
Note: Avoid linking to action slots on Niagara proxy points, as such links can be lost. For more details, see the
“Best practices for Niagara proxy points” section “Avoid links to proxy point actions” on page 2-24.
Figure 2-31 Example use of Prepend text in Alarm Class property of NiagaraStation Alarms extension
When a store alarm comes to this Supervisor, the “prepend text” is added to the originating AlarmClass
in the JACE station, such that routing looks for that named AlarmClass, e.g. “storeB_humAlarm” or
“storeW_runAlarm”. This allows you to maintain the original alarm class mapping (at the “store station
level”) as well as segregate “by stores” at the Supervisor level.
Note: This feature does not automatically create “virtual” alarm classes in the Supervisor; you still have to
manually create all AlarmClass components needed. Also, check that AlarmClass component naming
matches whatever “Prepend” or “Append” scheme is configured under the NiagaraStations in the
Supervisor’s NiagaraNetwork.
Note: The Edit dialog shows some configuration properties of the schedule export descriptor.
To access all properties (including all status properties), go to its property sheet. Note that if you
double-click the Nav tree icon for an export descriptor, its property sheet displays.
The following related topics also apply:
• Niagara Schedule Export properties
• Niagara schedule import/export default configuration
• Also see “Manager table features” on page 1-21
Niagara Schedule Export properties
Properties in the Edit dialog of a schedule export descriptor are:
• Enabled
By default true. While set to false (export descriptor disabled), export connections are not attempted
to update the remote (imported) schedule.
• Execution Time
Determines when an export update is made to the remote (imported) schedule, providing that a
configuration change occurred in the local schedule that requires synchronization (export). For more
details, see “Niagara schedule import/export default configuration” on page 2-27. Options are either
Daily, Interval (default), or Manual. If Manual, the following properties are unavailable:
• Time of Day (Daily)
Configurable to any daily time. Default is 2:00am.
• Randomization (Daily)
When the next execution time is calculated, a random amount of time between zero milliseconds
and this interval is added to the Time of Day. This may prevent “server flood” issues if too many
schedule exports execute at the same time. Default is zero (no randomization).
• Days of Week (Daily and Interval)
Select (check) days of week for archive execution. Default is all days of week.
• Interval (Interval)
Specifies repeating interval for export execution. Default is every 15 minutes.
• Time of Day (Interval)
Specifies start and end times for the interval. Default is 24 hours (start 12:00am, end 11:59pm).
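The Daily scheduling with Randomization described above can be pictured as follows. This is an illustrative Python sketch of the documented behavior only, not Niagara's actual scheduler code; the function and variable names are invented for illustration:

```python
import random
from datetime import datetime, time, timedelta

def next_daily_execution(now, time_of_day=time(2, 0), randomization=timedelta(0)):
    """Illustrative sketch: compute the next Daily execution time as the
    configured Time of Day, plus a random offset between zero and the
    Randomization interval (spreads out simultaneous exports to avoid
    a "server flood"). Not Niagara's actual implementation."""
    candidate = datetime.combine(now.date(), time_of_day)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed; use tomorrow
    jitter = timedelta(seconds=random.uniform(0, randomization.total_seconds()))
    return candidate + jitter

# With a 10-minute randomization, execution lands between 2:00am and 2:10am
nxt = next_daily_execution(datetime(2013, 6, 5, 12, 0),
                           randomization=timedelta(minutes=10))
```

The jitter is recomputed each day, so two descriptors with the same Time of Day rarely fire at the same instant.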
Note: A “New Folder” button is available in this view for adding “archive folders”, to help organize history
import (or export) descriptors. Each folder has its own history manager views.
The following sections provide more details unique to Niagara history imports:
• Discovered selection notes
• Niagara History Import properties
• On demand properties in history import descriptors
• Using System Tags to import Niagara histories
Histories under the same station name as the parent NiagaraStation (device) component are local
histories for that station. Histories under any other stations represent histories currently imported into
(or exported to) that station.
For example, discovered histories in Figure 2-34 for NiagaraStation subJACE_A include local histories
(expanded, top); another “imported” history from remote station subJACE_B is shown below.
Note: From any NiagaraStation, you can import both its “local” histories and already-imported histories, as
needed. However, unless circumstances warrant a “relay archive method,” it may be best to import histories
directly from the source station whenever possible.
Properties of Niagara History Import descriptors available in the Edit or Add dialog are as follows:
• Name
Name for the history import descriptor component. If discovered, typically left at default.
Note: Editing name does not affect name of the resulting history (imported into station).
• History Id
This property specifies the history name in the local station’s history space, using two parts:
“/<stationName>” and “/<historyName>”. If learned, station name is “^” (see note below) and history
name reflects the source history name. Typically, you leave both fields at default values, or edit the
second (<historyName>) field only.
Note: The “^” character is basically a shorthand character to refer to the device name of the parent
container (NiagaraStation component). This may be useful if you have multiple JACEs with histories
named the same. You can create and configure a single History Import Descriptor and then duplicate
and paste it under the other stations without having to go in and change the station name each time.
Note: For a Niagara System History Import descriptor, the History Id property is not applicable.
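The “^” shorthand amounts to a simple substitution of the parent NiagaraStation's device name into the History Id. A hypothetical Python illustration of that substitution (the helper and history names are invented; this is not actual Niagara code):

```python
def resolve_history_id(history_id, parent_station_name):
    """Illustrative sketch: expand the "^" station-name shorthand in a
    History Id of the form "/<stationName>/<historyName>" using the name
    of the parent NiagaraStation (device) component."""
    station, history = history_id.lstrip("/").split("/", 1)
    if station == "^":
        station = parent_station_name  # "^" means "name of parent station"
    return "/%s/%s" % (station, history)

# The same descriptor pasted under different stations resolves differently:
resolve_history_id("/^/OutsideAirTemp", "subJACE_A")
```

This is why a single descriptor using “^” can be duplicated under multiple NiagaraStations without editing the station name each time.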
• Execution Time
Either Daily (default), Interval, or Manual. If Manual, properties below are not available:
• Time of Day (Daily)
Configurable to any daily time. Default is 2:00am.
• Randomization (Daily)
When the next execution time is calculated, a random amount of time between zero milliseconds
and this interval is added to the Time of Day. This may prevent “server flood” issues if too many
history archives are executed at the same time. Default is zero (no randomization).
• Days of Week (Daily and Interval)
Select (check) days of week for archive execution. Default is all days of week.
• Interval (Interval)
Specifies repeating interval for archive execution. Default is every 15 minutes.
• Time of Day (Interval)
Specifies start and end times for the interval. Default is 24 hours (start 12:00am, end 11:59pm).
• Enabled
Default is true. If set to false, does not execute import of history data.
• Capacity
Specifies local storage capacity for imported history, either as Unlimited or Record Count.
If set to Record Count, the following property is available:
• records
Number of records (samples) to locally store. Default is 500.
• Full Policy
Either Roll (default) or Stop. Applies only if Capacity is set to Record Count.
• If Roll, upon reaching the record count, the oldest records are overwritten by the newest records.
• If Stop, upon reaching the record count, importing stops (until history records are locally deleted).
• On Demand Poll Enabled
Either true (default) or false. See the next section “On demand properties in history import
descriptors” for more details on this property and the related poll frequency property.
• On Demand Poll Frequency
Either “Fast”, “Normal”, or “Slow”.
• System Tag Patterns
Specifies one or more text strings matched against text values in “System Tags” properties of remote
history extensions, where matching text patterns result in histories imported into the local history
space. For more details, see “Using System Tags to import Niagara histories” on page 2-31.
For this to work, remote history extensions (for histories to be imported) must be configured with
“System Tags” property values. Refer to the NiagaraAX User Guide section “Configure history exten-
sions” for related details.
Instead of using the import manager’s “Learn Mode” (Discover) to add history import descriptors, you
manually add new descriptors in the Niagara History Import Manager, by selecting
NiagaraSystemHistoryImport in the “Type” drop-down control of the New popup dialog, as shown in Figure 2-36.
Figure 2-36 Add New Niagara System History Import descriptors for System Tags feature
In the resulting New dialog for the system history import descriptors, you enter one or more “System
Tags” as a text pattern, as shown in Figure 2-37.
Figure 2-37 Enter “System Tags Pattern” text to match remote history extensions’ “System Tags” values
You can enter multiple System Tags in this “System Tag Patterns” property, using a semicolon (;) as
delimiter between System Tags. In addition, you can use an asterisk (*) as a “wildcard” when entering
System Tag values. In the Figure 2-37 example, the System Tag Patterns value is:
AHU*;Ltg1;Ltg3;Ltg5
In this example, remote points with history extensions configured with System Tags property values
including an entry beginning with “AHU” and/or “Ltg1”, “Ltg3”, or “Ltg5” will be imported after this is
added and executed. For example, history extensions with a System Tags property value of “AHU1”,
“AHU_1”, “AHU 1”, “AHU2”, and so on will be included, as well as those including “Ltg1”, “Ltg3”, and “Ltg5”,
but ones with a System Tags value of only “Ahu1”, “Ltg2”, or “Ltg 1” will not.
Note that other properties like Execution Time, Capacity, Full Policy, and On Demand Poll Enabled apply
to the associated imported histories, and operate the same as for regular HistoryImport descriptors. See
“Niagara History Import properties” on page 2-30 for further details.
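The matching behavior described above can be approximated in a few lines of Python using fnmatch-style wildcards. This is an illustrative sketch of the documented rules (semicolon-delimited patterns, “*” wildcard, case-sensitive comparison), not Niagara's implementation; the function name is invented:

```python
from fnmatch import fnmatchcase

def tags_match(system_tags, patterns):
    """Illustrative sketch: return True if any tag in a history extension's
    System Tags list matches any semicolon-delimited pattern. "*" is a
    wildcard; matching is case-sensitive, per the documented behavior."""
    pattern_list = patterns.split(";")
    return any(fnmatchcase(tag, pat)
               for tag in system_tags
               for pat in pattern_list)

tags_match(["AHU1"], "AHU*;Ltg1;Ltg3;Ltg5")   # "AHU1" matches "AHU*"
```

Note that a pattern without a wildcard, such as “Ltg1”, must match a tag exactly; “Ltg 1” (with a space) would not qualify.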
Each row typically represents a history export descriptor. Each descriptor specifies how data from a local
history is exported (“pushed”) from the station to a selected NiagaraStation, where it appears as a history.
Note: A “New Folder” button is available in this view for adding “archive folders”, to help organize history
export (or import) descriptors. Each folder has its own history manager views.
You use this view to create, edit, and delete history export descriptors. Each export descriptor you add
results in the creation of a Niagara history on that remote station.
Following station configuration, this view provides a status summary for exporting local histories. You
can also use it to issue manual “Archive” commands to one or more history descriptors. This causes an
export “push” of history data into the selected histories at the remote Niagara station.
Note: Only history export descriptors appear in the History Export Manager view—any other components that
may also reside under Histories do not appear. For example, you do not see the default “Retry Trigger”
component (see “About the Retry Trigger” on page 1-34), or history import descriptors. However, you can
use the Histories property sheet or the Nav tree to access these items.
The “capacity” and “fullPolicy” configuration for each “exported” history (as created on the remote station)
is determined by that target station’s NiagaraNetwork component’s “History Policies.” Before exporting any
histories, you should review and adjust these policies as needed. For more details, see “About History
Policies” on page 2-6.
At the bottom of the view, the “New” button lets you manually create new export descriptors in the
station. An “Edit” button lets you edit one or more export descriptors. Buttons “Discover,” “Add,” and
“Match” are also available (these work similarly to those in the Point Manager). An “Archive” button is
available to manually export (push data) into one or more selected histories.
For more details, see:
• Niagara History Export New
• Niagara History Export Edit
• About History Export Discover and Match (Learn Process)
• Using System Tags to export Niagara histories
Edit
The Edit dialog appears with the export descriptor listed (Figure 2-39).
Note: The Edit dialog shows configuration properties of the history export descriptor, plus Name (equivalent to
the right-click Rename command on the descriptor). To access all properties (including all status
properties), go to its property sheet.
The following section explains Niagara History Export properties:
In Learn Mode, the two panes in the Histories Export Manager operate as follows:
• Discovered (top pane)
Lists all histories in the local station (station you are engineering).
• Database (bottom pane)
Lists history export descriptors and history system export descriptors currently in the station
database.
• Each NiagaraHistoryExport descriptor has a “one-to-one” associated history, exported to that
remote station.
• Each NiagaraSystemHistoryExport descriptor may, and often does, result in many associated
exported histories in the remote station. Note in the database pane you can spot these export
descriptors by the blank History Id value, as well as a text value in the “System Tag Patterns”
column.
Note: As necessary, drag the border between the two panes to resize. Also (at any time), toggle between the two-
pane Learn Mode and the single-pane (Database) view by clicking the Learn Mode tool in the toolbar, or
using the Learn Mode command in the Manager menu.
Add The Add button is available in Learn Mode when you have one or more items selected (highlighted)
in the top discovered pane. When you click Add, a dialog appears that allows you to edit properties before
the history export descriptor(s) are created in the station.
The Add dialog is identical to the history export descriptor Edit dialog. For details on properties, see
“Niagara History Export properties” on page 2-34.
Figure 2-41 Add New Niagara System History Export descriptors for System Tags feature
In the resulting New dialog for the system history export descriptors, you enter one or more “System
Tags” as a text pattern, as shown in Figure 2-42.
Figure 2-42 Enter “System Tags Pattern” text to match local history extensions’ “System Tags” values
You can enter multiple System Tags in this “System Tag Patterns” property, using a semicolon (;) as
delimiter between System Tags. In addition, you can use an asterisk (*) as a “wildcard” when entering
System Tag values. In the Figure 2-42 example, the System Tag Patterns value is:
Lon;Occ*
In this example, any local points with history extensions configured with System Tags property values of
“Lon” and/or a value beginning with “Occ” will be exported after this is added and executed.
For example, history extensions with a System Tags property value of “Lon”, “Occupancy” or “OccUnocc”
will be included, but ones with a System Tags value of only “OcUnoc” or “Lon1” will not. Note that the
Execution Time property values apply to all associated exports, and operate the same as for regular
NiagaraHistoryExport descriptors. See “Niagara History Export properties” on page 2-34.
In AX-3.7, the basic operation of Niagara virtuals remains unchanged from AX-3.6 and AX-3.5. As shown
in Figure 2-43, a small “virtual” ghost icon is superimposed in the lower right over the “normal” icon for
each Niagara virtual component representing any component type—a visual reminder that you are
looking into the “virtual component space” representing that station.
However, the AX-3.7 and later virtual architecture provides the following benefits:
• “Niagara virtuals to Niagara virtuals” are now supported. For example, in an “n-tiered” system, a top-
tiered Supervisor station can map Niagara virtuals in a NiagaraStation in its NiagaraNetwork, which
in turn has mapped those Niagara virtuals in a lower-tiered NiagaraStation in its NiagaraNetwork,
and so on.
• “Niagara virtuals to other virtual gateways” are also supported. Currently this applies to the BACnet
driver, where each BacnetDevice has its own “Virtual” gateway that provides access to virtual
components that represent BACnet objects. Now you can map those Bacnet virtual components as
Niagara virtual components in that NiagaraStation. This saves having to create Bacnet proxy points
in JACE stations for Px access at the Supervisor.
• “Shorter virtual ord slot paths” now result, instead of the longer virtual ord syntax previously used
(which included encoded data on the target slot’s facets and config flags). Shorter slot paths were
necessary for the “virtuals to virtuals” feature. They also allow easy editing of slot paths, if necessary.
Simplified ords are facilitated by a “Virtual Cache” mechanism, which stores the extra data removed
from virtual slot paths. This virtual cache operates transparently, yet has a component interface and
Workbench view for advanced users who want to inspect or remove cached items. For related de-
tails, see “Ords for Niagara virtual components” on page 2-41 and “Niagara virtuals cache (Virtual
Policies)” on page 2-41.
The following sections provide more details about Niagara virtual components:
• Licensing and application overview
• About the Niagara Virtual Device Ext
• Security and Niagara virtuals
• Views on Niagara virtuals
• Actions on Niagara virtuals
• Niagara virtuals in Px views
• Spy page diagnostics for Niagara virtual components
• Localization support for Niagara virtuals
Additionally, Niagara virtuals provide a possible savings in engineering time—as point “discovery” and
creation using the Bql Query Builder is not needed. Instead, Niagara virtual components expand under a
NiagaraStation’s “Virtual” gateway to reflect the entire component tree of that station. So, you can simply
drag and drop virtuals onto Px pages and select appropriate settings in the popup “Make Widget” dialogs.
Further, writable Niagara virtuals provide user access to almost any property in a PxPage graphic, for
both display and adjustment purposes. For example, to expose the high alarm limit for a point, you simply
expand its alarm extension to reveal child containers, drag the “Offnormal Algorithm” virtual onto the Px
page, and in the Make Widget popup dialog, select the ORD to the “High Limit” property. There is no
need to create a Niagara proxy point for this beforehand.
For related details, see “Niagara virtuals in Px views” on page 2-46.
Niagara virtual component limitations
Because Niagara virtual components are not persisted in the Supervisor’s station database, links to and
from Niagara virtuals are not supported. Therefore, values from remote stations needed within station
logic require Niagara proxy points. A simple example is a Math “Average” object that averages zone
temperatures originating in different JACE stations. Each input to the Math object would require a
Niagara proxy point of the relevant (remote station) data source.
Point extensions under Niagara virtuals are also not supported, for example alarm and history
extensions—although from a “best practices” perspective, such extensions are often misplaced in Niagara
proxy points. For further details on this, see “Best practices for Niagara proxy points” on page 2-24.
For related details, see “About the Niagara Virtual Device Ext” on page 2-39 and “Views on Niagara
virtuals” on page 2-45.
Successfully accessing components under this Virtual device extension (gateway) dynamically adds them
as Niagara virtual components (or Niagara “virtuals”) while they are subscribed, but they exist only in
memory—they are not persisted in the station database like proxy points.
The following sections provide more details on a Niagara Virtual Device Ext:
• Niagara Virtual gateway requirements
• Gateway memory cache and Refresh action
• Ords for Niagara virtual components
In the Figure 2-45 example, the Virtual gateway has a fault status, because even though the station’s
“Virtuals Enabled” property is true, the (JACE) host running this station is not licensed for Niagara
virtuals. In this typical case, note the last failure cause is “Niagara Virtual Components are not licensed”.
In a Supervisor station, this specific failure cause would not appear. However, a Niagara Virtual gateway
in a Supervisor station may still be disabled, or even in fault (say, if the remote station is running an earlier
release than AX-3.4). In any case, the failure cause property provides fault details.
Gateway memory cache and Refresh action
As a Baja virtual gateway, the Niagara Virtual device extension (gateway) only loads what is needed or
“activated” as virtual components in the memory space of the station. For a general overview of how this
works, see “Gateway activation” on page 1-24.
Activation of a Niagara Virtual gateway results in some number of “client virtual subscriptions” to those
components exposed, when any of the following occurs:
• a Virtual device extension is expanded in the Workbench Nav tree
• a Virtual device extension is expanded in its Workbench property sheet
• a Px page (with widgets bound to virtuals under that Virtual device extension) is being viewed
By default, viewing activated components places them in a memory “cache” for some minimum number
of seconds, where they remain as long as they are active (viewed). After some maximum time of inactivity,
virtual components may be automatically deleted from this cache. By default, inactive cache life is around
1 minute. In most cases the default settings are fine; however, the associated cache settings for Niagara
virtual components can be adjusted by changing entries in the host’s (Supervisor’s)
!\lib\system.properties file. Note this file includes self-documenting comments about these settings.
Starting in AX-3.7, the cache of frequently-accessed virtual components is also stored persistently in one
or more files under the station’s folder, by default. These “niagara virtual archive” (.nva) files are under
the station’s default niagaraDriver_nVirtual subfolder, as cache1.nva, cache2.nva, and so
on. They are typically created upon station shutdown, and provide faster Px page loading. For related
details, see “Niagara virtuals cache (Virtual Policies)” on page 2-41.
Refresh action Note that if you are engineering a job while configuration changes are still being made in a
remote station (modeled by a NiagaraStation in the local NiagaraNetwork), it may be necessary to use the
right-click Refresh action on the Virtual device extension, as shown being done in Figure 2-46.
As a general rule, a Refresh action is typically needed when the Nav tree contents for Niagara virtuals
under a station are “not as expected.”
Figure 2-47 Example Virtual Policies container expanded in Supervisor station to show “Cache” child
For a Supervisor’s NiagaraNetwork, or for any station’s NiagaraNetwork that uses Niagara virtuals, this
provides access to cached data on virtual components. For any other station (typically most JACE
stations), the Virtual Policies container is unused.
From the NiagaraNetwork’s property sheet, expand this container to see the available configuration
properties and contained “Cache”. Typically, default property values are appropriate—for more details see
“Virtual Policies (Cache) properties” on page 2-43.
Double-click the Cache component for its default Niagara Virtual Cache View.
Figure 2-48 Niagara Virtual Cache View of NiagaraNetwork’s cache (with station select drop-down)
As shown in Figure 2-48, this tabular view lists cached virtual ords, with a “station select” control at the
top. Click row(s) to select, where bottom buttons let you “Select All”, or “Remove” any selected.
Double-click a row for a popup dialog showing details on data in the virtual cache (Figure 2-49).
Figure 2-49 Double-click row in Niagara Virtual Cache View for cached details on virtual component
As shown, cached data includes a target’s slot ord along with information on its facets and slot flags.
In the case where a recent facets or slot flag change has been made to the target of a virtual component,
but is not properly reflected in Px access of it, you can check it here. If necessary, you can then remove it
from the virtual cache, so it can properly update and be added back in the cache upon next access.
The “Select All” button is available too, which can be used “NiagaraStation-wide” to remove all cached
data. In addition, note the Cache component has a right-click “Clear” action, accessible on the
NiagaraNetwork’s property sheet (Figure 2-50).
Figure 2-50 Clear action on Virtual Policies > Cache component, from NiagaraNetwork property sheet
Clear removes cached data for virtual components across all NiagaraStations in the NiagaraNetwork.
Properties and slots of the Niagara Virtual Cache Policy are as follows:
• Cache Type
Cache type has two fields as drop-down choices, where currently the first is fixed at niagaraVirtual,
and the second is either:
• DefaultNiagaraVirtualCache — (default) ords and associated data from accessing
virtual components are cached in memory. Also see “Gateway memory cache and Refresh
action” on page 2-40.
• NullCache — Nothing is cached. In this case, virtual component access results in more
network traffic, and graphics are slower to load.
• Cache
Cache is the component representation of virtual cache data, with the default “Niagara Virtual
Cache View” (see Figure 2-48 on page 42). Configuration properties include:
• Persist Cache — (default is true) where some slot data for virtual components is stored
persistently in cacheN.nva file(s) in the specified “Cache Directory”. Typically these file(s)
are created upon station shutdown. If set to false, the virtual cache is not persisted.
• Cache Directory — The file ord to the folder in which cacheN.nva file(s) are created
when “Persist Cache” is true. The default ord is file:^niagaraDriver_nVirtual
Note persisted virtual cache files can also be examined in Workbench by double-clicking for a “Nva File
View”, as shown in Figure 2-52.
Figure 2-52 Workbench provides a default “Nva File View” for persisted Niagara virtual cache files.
In the example station shown above there is only one virtual cache file. However, a large Supervisor
station will typically have multiple virtual cache files.
This property must be true before Niagara virtual components for this station can be accessed. However,
it defaults to false as a security measure: when initially enabled, all components in the remote station
become accessible to any user with permissions on this NiagaraStation’s Virtual gateway.
For example, any user with admin-level permissions on the gateway will be able to view all Users in the
remote station’s UserService, simply by expanding the property sheet of the virtual component for the
UserService. Or, such a user will be able to expand the station’s PlatformServices virtual component, and
have access to critical properties or child services (including an action to restart station/reboot host)!
Therefore, before enabling any Niagara Virtual gateway, it is recommended to restrict user access to it,
(assign it to a restricted category). Then, on Niagara virtual components under that gateway, assign
appropriate category(ies)—before re-assigning less restrictive category(ies) to the parent gateway.
Note: Persisted category assignments for individual virtual components is an important feature. Originally, only
the persisted parent virtual gateway component could be assigned to one or more categories—with all child
virtual components inheriting those categories. Now with this feature, categories assigned to active virtual
components are safely maintained even after virtuals are removed from the station’s memory cache.
Assign categories to Niagara virtual components using “normal component” methods, that is, either:
• with the Virtual gateway expanded, right-click any child virtual to go to its Category Sheet view, or:
• using the Category Browser view of the local station’s CategoryService, expanding the Niagara
Virtual gateway under any NiagaraStation in the NiagaraNetwork (see Figure 2-54).
Figure 2-54 Assigning categories to Niagara virtual components using the Category Browser
Be careful to Save after making any needed category changes to the Niagara Virtual gateway or virtual
components. For more information about station security and categories, refer to the sections “About
Security” and “CategoryService” in the NiagaraAX User Guide.
Virtual Info properties are in addition to the properties of the target source component, and include:
• Virtual Status
For any Niagara virtual component, this is status of that virtual—not to be confused with whatever
status the source (remote station) component may currently have. This status always appears on
property sheets with a “vStatus” descriptor.
For example, a virtual for a proxy point may have an “ok” virtual status, yet show an “Out” with a
status of “down”. Virtual status may initially show with a status of “stale,” before changing to “ok”.
• Type Specification
Reflects the moduleName:componentType used for representing the Niagara virtual, for example in
the property sheet view for that component.
• Slot ORD
The complete slot path to the remote source component (relative to that station’s slot hierarchy),
where the leading “root” portion is station:|slot:
• Last Failure Cause
The last unsuccessful attempt (if any) to activate this virtual component is explained in this text
string property. May be blank if no previous unsuccessful attempt has occurred.
Note: Where a “Niagara virtual to virtual” path is used, a past unsuccessful attempt may include
information about the comm error, for example:
Station Err: (J7_Bnet_36 -> J202_TestW) {down}
• Gateway
The “handle” ord of the virtual gateway within the station’s virtual component space.
Note that additional “spy” page information for Niagara virtuals is also available for any station. For more
details, see “Spy page diagnostics for Niagara virtual components” on page 2-48.
Figure 2-56 Right click actions of source component available through Niagara virtual
Consider this when assigning categories to Niagara virtuals. Users with invoke (I) permissions (at
operator or admin levels) on those categories will have access to those associated actions. See “Security
and Niagara virtuals” on page 2-44 for related information.
Figure 2-57 Drag and drop of Niagara virtual onto Px page produces Make Widget dialog
Figure 2-58 Double-click ORD field at top of Make Widget dialog if making Bound Label, to select property
In Figure 2-58, note the selection of a specific property of the Niagara virtual, via the Select Ord popup
dialog. In this example, the selected component is expanded, and the property “Out” is selected.
To do this, double-click in the top field (Ord) of the Make Widget dialog after dropping the Niagara
virtual. This produces the Select Ord dialog, where you then expand the selected virtual to select its
desired property (as shown being done in the Figure 2-58 example). Double-click to select and close.
If only the value or status is wanted for display in a bound label, you could further expand the Out
property in the Select Ord dialog, and select either Value or Status.
Note: You must further select “Value” or “Status” of a selected “Out” property in the case of a selected “virtual to
virtual”. Otherwise, the Px value displayed is likely to be: vStatus {ok} (instead of the end target value).
Alternatively, you can edit the BoundLabel Px widget’s BFormat (text) property value from the default: %.%
to instead: %value% or %status% or %value% %status%.
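As an illustration of how this %name% token substitution works, here is a minimal Python sketch of the concept. The function name, the property map, and the values are invented for illustration; this is not Niagara's actual BFormat implementation:

```python
import re

def format_bformat(pattern: str, props: dict) -> str:
    """Resolve %name% placeholders against a map of property values."""
    if pattern == "%.%":
        # The default resolves the bound object itself; for a "virtual to
        # virtual" root this is why something like "vStatus {ok}" can appear
        # instead of the end target value.
        return " ".join(str(v) for v in props.values())
    return re.sub(r"%(\w+)%", lambda m: str(props.get(m.group(1), "")), pattern)

# An "Out" property carrying a value and a status:
out = {"value": 72.5, "status": "{ok}"}

print(format_bformat("%value%", out))           # -> 72.5
print(format_bformat("%value% %status%", out))  # -> 72.5 {ok}
```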
For example, consider the Figure 2-59 Px example of two groups of 3 bound labels, each bound to a
Niagara virtual representing a control point.
Figure 2-59 Example Bound Labels on Px page using different property depth
• In the top group, as each bound label was added the “Out” property of that Niagara virtual compo-
nent was selected in the Select Ord dialog.
• In the bottom group, as each bound label was added the “Out” property of that virtual component
was expanded in the Select Ord dialog, and only the “Value” portion selected. Note that status
information does not appear in these real-time values, and the “Status” (Color) option does not work.
In the two preceding examples, the BFormat text formatting (Format Text or “text” property) was left at
the default %.% value. If needed, additional text formatting is also available for Bound Labels, for
example to add leading (or trailing) static text around the %scripted% portion after adding the widget.
However (and again), it is more efficient to specify a particular property of a Niagara virtual (initially, in
the Select Ord dialog before widget creation). That is, instead of adding a Px widget bound to the root
of a Niagara virtual component, and then specifying a particular %propertyName% in its text formatting.
In the case of doing the latter, note that all properties of the target Niagara virtual are loaded when
viewing the Px page, not just the single property specified by the text formatting. You can use “spy”
troubleshooting pages to observe this behavior as applied to Niagara virtual components. See the next
section, “Spy page diagnostics for Niagara virtual components”.
Px usage of Niagara virtuals for write adjustments
Niagara virtuals have writable properties. This lets you expose properties that may need adjustment onto
PxPages, using standard “drag and drop” techniques. Figure 2-60 shows such an example, for an alarm
limit for a proxy point’s alarm extension.
If users have proper permissions on such Niagara virtuals, they can make configuration changes that are
written to the (actual) source component.
Note that the “class types” for Niagara virtuals include a Niagara virtual type for each of the eight
NiagaraAX “control points”, that is, a NiagaraVirtualNumericPoint, NiagaraVirtualNumericWritable,
NiagaraVirtualBooleanPoint, and so on. This results in better overall handling of point status in
Niagara virtuals on PxPages. This also helped in development of the “export tags” feature for Supervisor
auto-configuration, first introduced in AX-3.5. Starting in AX-3.7, these eight class types are sourced
from the niagaraVirtual module, along with most other Niagara virtual component types and views.
Figure 2-61 Starting spy to look at Supervisor’s usage of Niagara virtual components
Figure 2-62 Continuing spy to look at Supervisor’s usage of Niagara virtuals to one remote station
As shown in Figure 2-62, in the stations list table, under the “Niagara Virtuals Client” column, you can
see a count of active Niagara virtual components for each station in the Supervisor’s NiagaraNetwork. If
you click on a link in that count column, you see a page of the Niagara Virtual components, by name,
including a “Virtual Prop Count” column that lists the number of associated properties for each one
(ideally 1 property maximum for each one, for performance reasons).
Finally, as shown near the bottom of Figure 2-62, if you click a specific Niagara Virtual (by Name link),
you see its “Virtual Info” properties, as well as subscription statistics on however many of its properties
are currently subscribed (or attempted for subscription).
Note: There is also a “Points/Virtual Server” column near the right side of the spy:/niagaraNetwork/stations page
(at top of Figure 2-62). That column lists the count of active (remote) client subscriptions to components in
the station. In a typical Supervisor station, this count will be 0 for each station in its NiagaraNetwork.
However, you can open a spy session to any selected JACE station and observe this count, as well as subse-
quent links in a similar manner as shown in the preceding figures. Investigation on the “JACE” station side
may also be helpful in resolving issues.
Figure 2-64 Sys Def Device Ext’s Role Manager property sheet
The key configuration property, Desired Role, defines the relationship of the remote station (repre-
sented by the parent NiagaraStation) to the local station, as either:
• Peer — (default) a “peer” relationship exists.
• Supervisor — the remote station represented by this NiagaraStation is this station’s Supervisor.
• Subordinate — the remote station represented by this NiagaraStation is this station’s subordinate
(typically a JACE station).
About the Sync Task
The Sync Task of a Niagara Sys Def Device Ext reflects two configuration properties, as well as several
read-only status properties (Figure 2-65).
Figure 2-65 Sys Def Device Ext’s Sync Task property sheet
This component propagates changes to Sys Def components between stations that are in a “Supervisor/
subordinate” relationship.
These properties would be “sync’ed up” to a remote Supervisor station (if the local station was defined as
its subordinate).
About the Sys Def Provider
The Sys Def Provider (BogProvider) is the API handler for queries about the “Sys Def ” hierarchy, for any
remote stations configured as subordinates. It persists these definitions with child “ProviderStations” in
the local station’s bog file (as the default storage method).
Figure 2-68 Sys Def “Sys Def Provider” (default) expanded in NiagaraStation property sheet
As shown in Figure 2-68, by default the property sheet of a NiagaraNetwork’s Sys Def Provider
shows only two status properties: Status and Fault Cause. However, if you go to its slot sheet you may
see additional child “ProviderStation” components.
Figure 2-69 Sys Def Provider slot sheet, clearing “Hidden” config flag of ProviderStation
To see child properties of a ProviderStation, you can clear the “Hidden” config flag, as shown being done
in Figure 2-69. In turn, the slot sheet of that component also has more hidden properties. Note a child
ProviderStation for the local station is also included under any Sys Def Provider (BogProvider).
Note: An alternative configuration of ProviderStation persistence is possible, using an Orion database instead
of station bog storage. That topic is outside the scope of this document.
Using the hierarchical definitions in the “Role Manager” components of Sys Def device extensions in
NiagaraStations (in the NiagaraNetwork of various distributed stations), Sys Def information can be
automatically “sync’ed upwards” from the lowest subordinate level stations to “intermediate level”
stations. Such stations would be defined as the Supervisor to some stations, but as a subordinate to
another station. In this way, “master” Supervisors could have the latest Sys Def information about many
more stations than the ones represented locally. This information could be useful in system queries.
Sys Def information includes various connection-type property values, what different services the station
is running, and a number of other items.
Figure 2-70 Niagara File Manager view of a NiagaraStation’s Files device extension
Figure 2-70 shows the view in “learn mode” after a Discover was issued, with files found on the remote
station in the top pane, and existing file import descriptors in the lower pane.
Click to select either an entire directory of files to import (as shown selected/highlighted in Figure 2-70),
or select a single file to import. If a directory is selected, its entire contents are imported upon execution—
this means all contained files, plus any subdirectories and their contained files.
Note: Hold down the Ctrl key while clicking in the Discovered pane to make multiple selections. Each will have a
separate “Files” property to review/edit in the resulting Add dialog for the Niagara file import descriptor.
With one or more rows in the Discovered pane selected, click the Add button. The Add dialog appears.
See Add or Edit dialog for NiagaraFileImports.
Figure 2-71 Add dialog for Niagara FileImport in the Niagara File Manager
Properties of file import descriptors available in the Add (or Edit) dialog are as follows:
• Name
Name for the file import descriptor component. Often left at the default, i.e. “NiagaraFileImport”
(appending numerals if needed to keep names unique). Editing the name does not affect the names of imported file(s).
• Execution Time
Either Daily (default), Interval, or Manual. If Manual, the properties below are not available:
• Time of Day (Daily)
Configurable to any daily time. Default is 2:00am.
• Randomization (Daily)
When the next execution time calculates, a random amount of time between zero milliseconds
and this interval is added to the Time of Day. May prevent “server flood” issues if too many file
imports are executed at the same time. Default is zero (no randomization).
• Days of Week (Daily and Interval)
Select (check) days of week for import execution. Default is all days of week.
• Interval (Interval)
Specifies repeating interval for import execution. Default is every minute.
• Time of Day (Interval)
Specifies start and end times for interval. Default is 24-hours (start 12:00am, end 11:59pm).
• Enabled
Default is true. If set to false, file import does not occur.
• File Overwrite Policy
The File Overwrite Policy property determines how existing files are overwritten upon any execu-
tion, with two choices available:
• Checksum
(Default) Checksums for all files are compared for differences. Any files found with differences
are re-imported, overwriting the existing files.
• Last Modified
Date/timestamps for all files are compared. Any remote files found with more recent date/
timestamps are re-imported, overwriting the existing files.
• Files
Specifies the selected “Remote file” (or directory) and the “Local file” (or directory), as a pair.
Note: Click the add icon, if needed, for additional pairs, or the delete icon to remove a pair.
Both fields in any pair should begin with “file:^”, denoting the root of the station’s file space.
The two Files fields are:
• Local file
File path for directory or file to be written in the local station’s file space.
If adding from a Discover, this matches the initially selected “Remote File” value. If local direc-
tories used in the file path do not exist, they are created upon import execution.
To change, you can click one of the right-side controls, either:
– if a single file, the folder control for the File Chooser dialog.
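The Randomization and File Overwrite Policy behaviors above can be sketched as follows. This is an illustrative Python approximation only; the function names, the use of MD5 for the checksum, and the datetime handling are all assumptions, not the driver's actual logic:

```python
import hashlib
import os
import random
from datetime import datetime, timedelta

def needs_reimport(local_path: str, remote_bytes: bytes,
                   remote_mtime: datetime, policy: str) -> bool:
    """Decide whether an existing local file should be overwritten on execution."""
    if not os.path.exists(local_path):
        return True                          # nothing local yet: always import
    if policy == "checksum":
        # Compare checksums; re-import only the files found with differences.
        with open(local_path, "rb") as f:
            local_sum = hashlib.md5(f.read()).hexdigest()
        return local_sum != hashlib.md5(remote_bytes).hexdigest()
    if policy == "lastModified":
        # Compare date/timestamps; re-import only if the remote file is newer.
        local_mtime = datetime.fromtimestamp(os.path.getmtime(local_path))
        return remote_mtime > local_mtime
    raise ValueError("unknown policy: " + policy)

def next_daily_run(time_of_day: datetime, randomization: timedelta) -> datetime:
    """Add 0..randomization to the configured Time of Day (avoids server flood)."""
    jitter = random.uniform(0, randomization.total_seconds())
    return time_of_day + timedelta(seconds=jitter)
```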
CHAPTER 3
Field Bus Integrations
For purposes here, a field bus integration is any NiagaraAX driver besides the niagaraDriver (Niagara
Network). All NiagaraAX drivers resemble each other in basic architecture, including the Niagara
Network. For more details, see “About Network architecture” on page 1-2, and “About the Niagara
Network” on page 2-1.
Field bus integrations such as BACnet, LON, Modbus, as well as various “legacy” drivers (typically serial-
connected) each have unique characteristics and features. This section provides a collection of topics that
apply to some of these drivers.
The following main sections are included:
• Port and protocol variations
• Learn versus New devices and points
• Serial tunneling
Ethernet-connected driver
Many field bus drivers are Ethernet (port) connected, typically using some TCP/IP protocol for transport.
For example, the Modbus TCP driver (ModbusTcpNetwork) uses the Modbus TCP protocol—essentially
the Modbus protocol “wrapped” in TCP/IP. The SNMP driver (SnmpNetwork) uses SNMP, an appli-
cation-layer protocol within the TCP/IP protocol suite. These and other Ethernet-connected drivers
operate from a single Ethernet port without difficulty, due to the high bandwidth and efficiencies of IEEE
802 network (and TCP/IP) standards.
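To make the “wrapped in TCP/IP” point concrete: a Modbus TCP request is a standard Modbus PDU prefixed with an MBAP header that adapts it for TCP transport. A minimal sketch of the frame layout (illustration only, not the ModbusTcpNetwork driver's code):

```python
import struct

def modbus_tcp_read_holding(transaction_id: int, unit_id: int,
                            start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request."""
    # The plain Modbus PDU: function code, start address, register count.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # The MBAP header that "wraps" the PDU for TCP transport.
    mbap = struct.pack(">HHHB",
                       transaction_id,   # echoed back in the response
                       0x0000,           # protocol id: always 0 for Modbus
                       len(pdu) + 1,     # bytes remaining: unit id + PDU
                       unit_id)
    return mbap + pdu

# Read 2 holding registers starting at address 0 from unit 1:
print(modbus_tcp_read_holding(1, 1, 0x0000, 2).hex())
# -> 000100000006010300000002
```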
In addition to JACE platform usage, Ethernet-connected drivers are available for the Supervisor (PC)
platform as well, for “direct device integrations.” These are specially-licensed versions of the Supervisor
(by default, a Supervisor is licensed only for JACE device communications, via the NiagaraNetwork).
Serial-connected driver
Serial-connected drivers use a specific serial port on the host (JACE) platform. For example, the Modbus
serial driver (ModbusAsyncNetwork) requires association with a specific COMn port on the JACE, which
you do from the property sheet of this network component (Figure 3-1).
Note: Only one network can be assigned to any one serial port (COMn) of the host JACE platform. That driver
network essentially “owns” that communications port.
Slots under the “Serial Port Config” (SerialHelper) must be set to match the communications parameters
of other devices on the attached network. Note that in this ModbusAsync example, you also select either
the Modbus ASCII or Modbus RTU protocol (the driver supports either one, which you set according to
the type of networked Modbus devices).
Often, serial-connected drivers support “legacy type” device networks. In this case, the “serial tunneling”
feature may be useful to run vendor-specific legacy Windows applications to do device configuration and
maintenance (all from an IP station connection). See “Serial tunneling” on page 3-2.
Special-port driver
JACE controllers may include one or more special-use ports, for example one or more Echelon (LON)
FTT-10 ports. Usage of such a port requires a specific driver. In this example, the Lonworks driver (each
LonNetwork) associates with a specific LONn port, which is configured under that network component.
For details, see the Lonworks Guide.
Other special-use ports may appear as the evolution of JACE products continues.
Serial tunneling
A NiagaraAX station running one or more serial-based drivers can provide “tunneling” access to its
connected devices. This allows you to use a vendor’s Windows serial-based application (via the serial
tunnel client) to perform device-specific operations. Examples include application downloads or other
device configuration.
The tunneling client is separate from NiagaraAX Workbench—meaning that you can install it on various
PCs, as needed. The key advantage is that serial tunneling requires only a standard IP connection (to the
station), yet provides access as if the client PC was attached to the target serial network via a physical
COM port, for example RS-232.
Note: No special licensing is required to use tunneling features in NiagaraAX.
The following sections provide more details:
• Serial tunnel overview
• Client side (PC application)
• Station side (TunnelService)
• Serial tunneling usage notes
Depending on your client PC’s Windows operating system, select to install from one of the following:
• Windows XP: Install_Serial_Tunnel.exe
• Windows 7 or Windows Vista (32-bit): InstallVserialAx2.exe
• Windows 7 or Windows Vista (64-bit): InstallVserialAx2_64bit.exe
See the following additional sections for more details:
• Installing the serial tunnel client
• Serial tunnel client configuration
• Serial tunnel client installation details
Installing the serial tunnel client
Do not duplicate any existing serial (COMn) port already used by Windows. You can always reconfigure
again, by returning via the Windows XP Control Panel. If left “Interactive”, all remaining
fields in this dialog are editable each time you specify this COM port from the serial application that
you are tunneling (this popup dialog reappears each time).
• In the Windows 7 / Vista serial tunnel client dialog, an unused port (COMn) is already selected, as
shown in Figure 3-4.
If left “Interactive”, you can configure the remaining fields in this dialog each time you specify this
COM port from the serial application you are tunneling (this popup dialog reappears each time).
See “Serial tunnel client configuration” on page 3-5 for details on all fields in this dialog. Parameters work
essentially the same whether you have the Windows XP driver or (either) Windows 7/Vista driver.
Step 4 Click OK to finish the install.
The “Installation Finished” dialog appears—click OK again. See “Serial tunnel client installation details”
on page 3-5 for a listing of installed components.
Caution The TunnelService uses “basic authentication” for login on any client connection (serial tunnel or Lon
tunnel), so we recommend you create a special user in the station to use (only) for all serial or Lon tunnel
access. For configuration details, see “Best security practices for tunneling” on page 3-8.
• Password
Password for this station user.
Note: The password is not encrypted when passed to the station (see Caution above).
• Interactive (checkbox)
If checked, this dialog reappears each time a serial-based application first opens this “virtual” COM
port. If cleared, this dialog displays only if an open fails to establish a connection to the tunnel server
(as stored from last entry). Typically, you leave Interactive checked (regardless of driver version).
• In the case of the Windows XP driver, when this dialog appears interactively, the Serial
Port setting is read-only. To change it, you must access the Serial Tunneling applet from the
Windows XP Control Panel.
• In the case of either Windows 7/Vista driver, the Serial Port setting is always read only. How-
ever, a second “No Retry” checkbox becomes available if you clear “Interactive”.
Serial tunnel client installation details
Depending which type of serial tunnel client you installed, the driver’s Windows interface, files installed,
and method of uninstalling vary.
• Windows XP serial tunnel client details
• Windows 7 / Vista serial tunnel client details
Windows XP serial tunnel client details The Windows XP serial tunnel client installs as a Windows
service (NiagaraAX Serial Tunnel), and has a Control Panel applet available (Figure 3-7).
Figure 3-7 Windows XP control panel applet for NiagaraAX serial tunnel client
The following files are installed, with services referenced in the Windows registry:
• Control Panel
<WINDOWS_SYSTEM_DIR>\vserax.cpl
• Network Tunnel Service
<WINDOWS_SYSTEM_DIR>\vserax.exe
Service name: vseraxSvc (vserax dependency)
• Serial Driver Service
<WINDOWS_SYSTEM_DIR>\drivers\vseraxx.sys
Service name: vserax
• Uninstaller
<WINDOWS_SYSTEM_DIR>\vseraxun.exe
If necessary, uninstall the serial tunnel client driver using the “Add or Remove Programs” entry from the
Windows XP Control Panel.
Note: If uninstalling, and the uninstall appears to fail, try reinstalling the tunnel client, and then uninstall again.
Windows 7 / Vista serial tunnel client details The serial tunnel client for Windows 7/Vista installs as a
“virtual” serial COM port (no separate Windows service or Control Panel applet). You can see this port
listed in the Windows Device Manager under Ports (Figure 3-8).
Figure 3-8 Windows 7 / Vista “virtual” COM port for the NiagaraAX serial tunnel client
From the Windows Device Manager, you can right-click this virtual COM port to see its
Properties, or if necessary, to Uninstall the driver. From its Properties dialog, the “Driver” tab’s
“Driver Details” button provides a popup dialog that lists the various files installed (Figure 3-9).
Figure 3-9 Driver file details for NiagaraAX serial tunnel client (Windows 7 / Vista)
The file shown highlighted (vseraxConfig.exe) produces the tunnel client’s configuration dialog.
Figure 3-10 TunnelService with child SerialTunnel (copied from tunnel palette)
Here, type in the COMn “Port Name” used by the target driver network, and specify other parameters as
defined in the network’s configuration. Port Name should be similar to COM1 or COM2, and so on.
See SerialHelper on page 11 for details on various properties.
Note: If the JACE has multiple serial-type driver networks (and corresponding COM ports), you can copy the
same number of SerialTunnel components under the TunnelService. You can then associate each
SerialTunnel with a particular network (by its COMn port name), and set the other parameters accordingly.
Step 5 Save the TunnelService configuration when done.
A station user requires admin write permissions for the SerialTunnel(s) to allow tunnel client access.
Note: Clients that access the TunnelService (both SerialTunnel and LonTunnel) use “basic authentication” for
login access. So in either of these client connections, the user credentials passed to the station are not
encrypted—a potential security issue! See “Best security practices for tunneling” on page 3-8.
Also, consider disabling the TunnelService (set its Enabled property to false) whenever it is not needed.
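The note above is worth emphasizing: basic authentication merely base64-encodes credentials, which anyone observing the connection can reverse. A quick Python illustration (the user name and password shown are invented):

```python
import base64

# Basic authentication encodes "user:password" with base64 -- an encoding,
# not encryption, so it is trivially reversible by an observer:
token = base64.b64encode(b"tunnelUser:secret").decode()
recovered = base64.b64decode(token).decode()
print(recovered)   # -> tunnelUser:secret
```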
Assign this new user admin write permissions to that one category, and Save that user.
Step 3 From any client to the TunnelService (serial tunnel or Lon tunnel), only use this special user account.
This workaround provides full tunneling capability, but minimizes the security risk in case the credentials
for this special user become compromised.
In the Figure 3-11 example, the remote host that is currently serial tunneling is “192.168.1.31.” When
a tunnel connection is terminated, this Tunnel Connection component is removed.
In addition to its statistical properties, a TunnelConnection has an available Disconnect action. This
disconnects the active tunnel connection, removing the parent TunnelConnection component. A popup
dialog “Connection closed by remote host” is seen on the client tunnel side.
Figure 3-12 Example Windows XP serial application (Hyperterminal) initiating tunnel connection
Speed of the tunnel connection may be slower than a direct serial connection, due to the overhead of
wrapping and unwrapping messages in Niagara Fox and TCP/IP protocols.
Tunneling client messages
When using a tunnel client, the specified user must be “authenticated” by the station before a connection
is granted (established). If the user is not found, or if the entered password is incorrect, a popup message
may appear on the client PC (Figure 3-13). Note the User Name and Password are both case-sensitive.
Caution As previously cautioned, note that “basic authentication” is used in any client connection to the station’s
TunnelService, with possible security consequences. See “Best security practices for tunneling” on page 3-8.
Currently, only one tunnel connection is allowed per SerialTunnel. If another client application attempts
a connection to that tunnel, a popup message may appear on that PC (Figure 3-14).
CHAPTER 4
Plugin Guides
Plugins provide views of components. There are many ways to open a plugin (view): one way is directly
in the tree; in addition, you can right-click an item and select one of its views.
In Workbench, access the following summary descriptions on any plugin by selecting Help > On View
(F1) from the menu, or pressing F1 while the view is open.
driver-DeviceManager
The Device Manager plugin allows you to create and manage devices. The DeviceManager is a view
on a network component in a station. For details, see “About the Device Manager” on page 1-14.
driver-DriverManager
The Driver Manager plugin allows you to add and view drivers. It is available on the DriverCon-
tainer. For more details, see “About the Driver Manager” on page 1-4.
driver-FileDeviceManager
The File Device Manager is the default view on a FileNetwork. For general information, see “About
the Device Manager” on page 1-14. See the section “driver-FileNetwork” on page 5-4 for more details
about a File Network. Also, note that the view used most often under a File Network is the DelimitedFile-
ImportManager view of the Histories extension of a FileDevice.
driver-HistoryExportManager
History Export Manager provides a view of NiagaraHistoryDeviceExt. To see the view, right-click on
a NiagaraHistoryDeviceExt in a Station and select Views > History Export Manager. For
details, see “Niagara History Export Manager” on page 2-32.
driver-HistoryImportManager
History Import Manager provides a view of NiagaraHistoryDeviceExt. To see the view, right-click
on a NiagaraHistoryDeviceExt in a Station and select Views > HistoryImportManager. For
details, see “History Import Manager” on page 1-46.
driver-PointManager
Point Manager provides an overview of the proxy points mapped into the PointDeviceExt. In a
NiagaraNetwork, PointManager is a view on the NiagaraPointDeviceExt of a NiagaraStation. To see
the PointManager, right-click on a Points device extension and select Views > Point Manager. For
details, see “About the Point Manager” on page 1-37.
niagaraDriver-NiagaraScheduleExportManager
The Niagara Schedule Export Manager is used to manage schedules exported from a
NiagaraStation. For more details, see “Schedule Export Manager” on page 1-53, and also “Station
Schedules import/export notes” on page 2-27.
niagaraDriver-NiagaraScheduleImportManager
The Niagara Schedule Import Manager is used to manage schedules imported from a
NiagaraStation. For more details, see “Schedule Import Manager” on page 1-49, and also “Station
Schedules import/export notes” on page 2-27.
niagaraDriver-ServerConnectionsSummary
The Server Connections Summary is the default view of the ServerConnections slot in the
Niagara Fox Service, under the NiagaraNetwork. It provides a table listing current client connections
to the station’s Fox server (station-to-station connections are not included).
Note: The main usage of this view is to perform a Force Disconnect action (right-click access) on any Fox
server session shown. In some station maintenance scenarios, this may be helpful.
Included in the connections summary table are the following columns:
• Address
IP address of the Fox client connected to the station, along with its remote TCP port.
• User
Station user account used for authentication.
• Connection Time
Timestamp of when the Fox connection occurred.
• Application
Client software used to access the station (for example, Niagara Workbench 3.0.76).
From the table, to see more details on any Fox server session, double-click an entry. The view changes to
show the property sheet of that SessionN, with status slots of historical information, including connect
and disconnect entries.
Access the Server Connections Summary from the property sheet of the NiagaraNetwork. Expand
the Fox Service, then click the Server Connections slot.
niagaraDriver-StationManager
The Station Manager view on the NiagaraNetwork allows you to manage and access NiagaraS-
tations. For more details, see “Niagara Station Manager notes” on page 2-8.
niagaraDriver-UserSyncManager
The User Sync Manager view on the NiagaraNetwork allows you to manage the Users device
extension properties for all NiagaraStations in the network. For more details, see “About the User
Sync Manager” on page 2-17.
CHAPTER 5
Component Guides
These Component Guides provide summary information on components commonly used in drivers.
driver-DelimitedFileImport
DelimitedFileImport is a history file import descriptor used to create a Niagara history based upon
data in a (local) delimited text file, such as comma-separated-values (CSV) or tab-delimited values
(delimiter to use is configurable). These import descriptors reside under the HistoryNetworkExt
(Histories extension) of a FileDevice in a FileNetwork. You use the Delimited File Import Manager
view of the Histories extension to add history file import descriptors.
Note: This import descriptor is similar to the ExcelCsvFileImport descriptor, but uses a “dumb” string tokenizer
to parse each line of the specified file, separating table columns based solely on the specified delimiter. Use it
to import only “non-complex” CSV files, or any “non-CSV” delimited file (tab-delimited, for example). For any
CSV file created by Microsoft Excel, use the ExcelCsvFileImport descriptor instead.
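The practical difference between the two descriptors can be sketched in a few lines. This is an illustrative sketch only (not Niagara code): a plain split on the delimiter breaks an Excel-style quoted field apart, while a quote-aware parser keeps it whole.

```python
import csv
import io

# A field containing the delimiter, quoted Excel-style:
line = 'Timestamp,"Flow, kW",Status'

# A "dumb" tokenizer (as DelimitedFileImport uses) splits on every delimiter:
naive = line.split(',')

# A quote-aware parser (as ExcelCsvFileImport assumes) keeps the quoted field whole:
excel = next(csv.reader(io.StringIO(line)))

print(naive)  # ['Timestamp', '"Flow', ' kW"', 'Status'] -- four columns, wrong
print(excel)  # ['Timestamp', 'Flow, kW', 'Status'] -- three columns, correct
```

This is why only “non-complex” files belong with the dumb tokenizer: it is correct exactly when the delimiter never appears inside a field.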
This import descriptor has properties “common” among all history import descriptors, such as Name,
History Id, and so on. See “History Import Manager” on page 1-46. For other configuration properties,
see “Properties of history file import descriptors” on page 5-2.
driver-DriverContainer
DriverContainer is used by convention to store all DeviceNetworks in a station database. The
DriverManager is its primary view. See “About the Driver Manager” on page 1-4.
driver-ExcelCsvFileImport
ExcelCsvFileImport is a history file import descriptor used to create a Niagara history based upon
data in any (local) comma-separated-values (CSV) text file created by Microsoft Excel. These history
file import descriptors reside under the HistoryNetworkExt (Histories extension) of a FileDevice in a
FileNetwork. You use the DelimitedFileImportManager view of the Histories extension to add history file
import descriptors.
Note: This import descriptor is similar to the DelimitedFileImport descriptor, but assumes CSV data specifically
created by Microsoft Excel (it lacks the “Delimiter” property). This allows complex CSV-delimited data to
be successfully imported, using the special rules of Excel CSV generated files. For any other type of delimited
data (for example, tab-delimited or “pipe-delimited”), use the DelimitedFileImport descriptor instead.
This import descriptor has properties “common” among all history import descriptors, such as Name,
History Id, and so on. See “History Import Manager” on page 1-46. See the next section “Properties of
history file import descriptors” for other configuration properties.
Properties of history file import descriptors History file import descriptors (DelimitedFileImport,
ExcelCsvFileImport) have the following set of configuration properties, which appear in the New and
Edit dialogs for the descriptors:
• Value Facets
Lets you specify the units with which to display values imported from the delimited file. On the import
descriptor’s property sheet, this property is found under “Config Overrides”.
• Time Zone
Lets you specify the time zone for the imported history. On the import descriptor’s property sheet,
this property is found under “Config Overrides”.
• File
(Important) Identifies the local delimited file to import, using standard “ord” file syntax. Typically,
you simply click the folder icon to open the File Chooser, and navigate as needed to click-select
an absolute file ord path to the file.
Or, if the parent FileDevice has a non-null “Base Ord” property (specifying a directory), you can type
in a file name or file path relative to that directory, using the following ord syntax:
file:fileName or filePath/toFileName
• Full Import On Execute
Default is disabled. If set to enabled, the entire history is re-filled afresh upon each import. If left at
default, only new data is appended to the history upon every import.
• Row Start
Along with Row End, lets you specify the start and end rows (lines) in the file to use for importing.
Both properties are zero-based, so a Row Start of 1 skips the first line (usually column headers) and
begins importing on the second line.
• Row End
See Row Start, above. Row End is optional, and defaults to go to the end of the file (none).
• Delimiter
(appears only if a DelimitedFileImport descriptor) Specifies the text character used in the file to sep-
arate columns. For any ExcelCsvFileImport descriptor, a comma (“,”) is used by default.
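The zero-based Row Start/Row End behavior can be sketched as follows. This is an illustrative sketch, not Niagara code; the file contents and the assumption that Row End is an inclusive index are hypothetical.

```python
# Hypothetical file contents: one header line plus three data lines.
lines = [
    "timestamp,value",        # row 0 (the column headers)
    "2013-06-05 00:00,21.5",  # row 1
    "2013-06-05 01:00,21.7",  # row 2
    "2013-06-05 02:00,21.9",  # row 3
]

row_start = 1   # zero-based: skip row 0, begin importing on the second line
row_end = None  # the default "none": read to the end of the file

# Assuming Row End (when set) is an inclusive, zero-based row index:
stop = row_end + 1 if row_end is not None else None
selected = lines[row_start:stop]
print(selected)  # the three data lines, headers skipped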
driver-FileHistoryWorker
FileHistoryWorker manages the queue and thread processing for history file import descriptors. It
is a frozen slot of the FileDevice, and requires no configuration.
driver-FileNetwork
FileNetwork is available in the driver module, and acts as the “network level” container for one or
more FileDevice components. The FileDeviceManager is its primary view.
The purpose of a FileNetwork is to import data from local delimited-type files (such as CSV), as Niagara
histories. Often, a FileNetwork contains only a single FileDevice. Unlike in true “field bus” networks, the
standard NiagaraAX driver architecture (network: device: device extensions) provides no “real hardware”
equivalency; this is simply a modeling “convention.”
Figure 5-1 shows the architecture of a FileNetwork in the Nav tree, where the active (and most used) view
is the DelimitedFileImportManager view of the Histories extension (FileHistoryDeviceExt) of the
FileDevice.
Note that a FileDevice has none of the other “standard” device extensions (Points, Schedules, or
Alarms). See “Usage notes” for more details on a File Network.
Usage notes The following notes apply to using a FileNetwork.
• The FileNetwork requires special licensing in the host’s Tridium license: the feature
“fileDriver”. This feature may also contain a “history.limit=n” attribute, which defines the
number of files that can be imported as histories (number of history file import descriptors). With-
out proper licensing, the FileNetwork and child components will have a fault status, and files will
not import as Niagara histories.
• Specific requirements must be met by each delimited text file for successful import, namely:
• It must have a single “timestamp” column that contains both date and time. Note this means if
a delimited file has two columns, one for date and another for time, you must perform external
“upstream” processing (outside of Niagara) to combine them into a single column before importing.
• It must have a single column that contains the “value” required in the imported history. Note
that when configuring the history file import descriptor, you specify the “Value Format” as one
of several types (Numeric, String, Boolean). “Value Facets” are also available in the descriptor.
• Optionally, it may also have a single column to use as “Status” in the history—however, this col-
umn must contain integer data (only) with values enumerated based upon Niagara “status bits”
values. See “Properties of history file import descriptors” on page 5-2 for related details.
Note that the status import feature is used mainly with data that first originated from a Niagara
history (that was exported to CSV), then was subsequently “manipulated” outside of Niagara.
• Typically, FileNetwork usage applies more to a Supervisor (PC) host versus a JACE host, as it is more
likely to contain text-delimited files needed for import as Niagara histories. In particular, it may be
used with a Supervisor serving “Energy Services” reports to remote clients.
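The “upstream” combining of separate date and time columns mentioned in the requirements above can be sketched as follows. This is an illustrative preprocessing script run outside of Niagara; the column names and file contents are hypothetical.

```python
import csv
import io

# Hypothetical input with separate "date" and "time" columns, which the
# import descriptor cannot use directly: it requires a single timestamp column.
src = io.StringIO(
    "date,time,value\n"
    "2013-06-05,00:00,21.5\n"
    "2013-06-05,01:00,21.7\n"
)

out = io.StringIO()
reader = csv.reader(src)
writer = csv.writer(out)

next(reader)  # discard the original header row
writer.writerow(["timestamp", "value"])
for date_col, time_col, value in reader:
    # Combine date and time into the single timestamp column required.
    writer.writerow([date_col + " " + time_col, value])

print(out.getvalue())
```

In practice the same transformation could be done in any external tool (Excel, a shell script, etc.); the point is only that the combined column must exist before the file reaches the FileDevice.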
driver-HistoryNetworkExt
HistoryNetworkExt (History Policies) provides two network-level functions for histories, as follows:
• For history imports, it contains an “On Demand Poll Scheduler”, which has configurable poll rates
used in “on demand polling” of imported histories, if so configured.
• For history exports, it acts as a container for one or more “config rules”, which specify how the
configuration of the local history (archive) is set when the history is “pushed” (exported) into the station.
Configuration rules are applied only when the history is created, that is, when first archived to the
station. Changing a rule has no effect on existing histories.
For more details, see “About History Policies” on page 2-6.
driver-HistoryPollScheduler
HistoryPollScheduler (On Demand Poll Scheduler) contains properties that specify the
available network-level polling rates for imported histories, used in “on demand” polling of an
imported history. Such polling can be triggered when the “Live Updates” button in a history chart or
history table view is toggled active. For more details, see “Default history policies” on page 2-6.
driver-PingMonitor
The PingMonitor periodically calls “ping” on all the “pingables” to monitor network and device
health. PingMonitor provides built-in support to generate alarms when pingables are down. The
PingMonitor is available in the property sheet of most networks as “Monitor.” See “About Monitor” on
page 1-7 for more details.
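The monitoring pattern described above (periodic pings, with an alarm generated on an up-to-down transition) can be sketched as follows. This is an illustrative sketch, not the actual PingMonitor implementation; all class and variable names are hypothetical.

```python
# Hypothetical "pingable" with a health flag; all names here are invented.
class Pingable:
    def __init__(self, name, reachable=True):
        self.name = name
        self.reachable = reachable
        self.down = False

    def ping(self):
        return self.reachable

def monitor_cycle(pingables, alarms):
    """One poll cycle: ping everything, alarm only on an up-to-down edge."""
    for p in pingables:
        ok = p.ping()
        if not ok and not p.down:
            alarms.append(p.name + " down")  # alarm only on the transition
        p.down = not ok

alarms = []
devices = [Pingable("jace1"), Pingable("jace2", reachable=False)]
monitor_cycle(devices, alarms)
monitor_cycle(devices, alarms)  # still down: no duplicate alarm
print(alarms)  # ['jace2 down']
```

Alarming on the transition rather than on every failed ping is what keeps a persistently down device from flooding the alarm subsystem on each poll cycle.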
driver-SendTimer
This class provides support for max and min send times on components implementing the
TimedWrites interface. The SendTimer identifies a SendTimes object in the parent path; the first
SendTimes encountered walking up from the immediate parent is used. If no SendTimes object is
found, calls to newChange() result in a single immediate call to sendWrite() on the parent. The
SendTimer is available in the driver module.
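The lookup described above (walk up from the immediate parent and use the first SendTimes found) can be sketched as follows. This is an illustrative sketch, not the driver module’s Java implementation; the component structure shown is hypothetical.

```python
# Hypothetical SendTimes configuration object (names invented for illustration).
class SendTimes:
    def __init__(self, min_send, max_send):
        self.min_send, self.max_send = min_send, max_send

def find_send_times(component):
    """Walk up from the immediate parent; return the first SendTimes found."""
    node = component.get("parent")
    while node is not None:
        for child in node.get("children", {}).values():
            if isinstance(child, SendTimes):
                return child
        node = node.get("parent")
    return None  # caller would then write immediately instead

# A toy parent chain modeled as dicts: root -> child -> leaf.
root = {"parent": None, "children": {"times": SendTimes(1, 30)}}
child = {"parent": root, "children": {}}
leaf = {"parent": child, "children": {}}

found = find_send_times(leaf)
print(found.max_send)  # 30
```

When the walk finds nothing, the sketch returns None, which corresponds to the immediate sendWrite() behavior described above.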
driver-SendTimes
This class provides the means to configure max and min send times for components implementing
the TimedWrites interface and containing a SendTimer object. The SendTimes is available in the
driver module.
driver-TuningPolicy
Contains a collection of properties typically used in the driver network’s evaluation of both write
requests (e.g. to writable proxy points) and the acceptable “freshness” of read requests. It can also
associate with one of three Poll Rates in the network’s Poll Service.
You can create multiple TuningPolicies under a driver’s TuningPolicyMap. You can then assign one or
more proxy points to a specific TuningPolicy. See “About Tuning Policies” on page 1-8 for more details.
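The “freshness” idea can be sketched as a simple stale-time check: a cached read younger than the policy’s stale time is reused, while an older one triggers a fresh poll. This is an illustrative sketch under assumed property names, not the actual TuningPolicy logic.

```python
# Hypothetical policy object; the property name "stale_time_s" is invented.
class TuningPolicy:
    def __init__(self, stale_time_s):
        self.stale_time_s = stale_time_s

def needs_poll(last_read_s, policy, now_s):
    """True if the cached read is older than the policy's stale time."""
    return (now_s - last_read_s) > policy.stale_time_s

policy = TuningPolicy(stale_time_s=30)
print(needs_poll(last_read_s=100, policy=policy, now_s=120))  # False: 20 s old
print(needs_poll(last_read_s=100, policy=policy, now_s=140))  # True: 40 s old
```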
driver-TuningPolicyMap
Container for one or more TuningPolicy(ies), found in the property sheet of most network compo-
nents. You can create multiple TuningPolicies under a network’s TuningPolicyMap. You can then
assign one or more proxy points to a specific TuningPolicy. See “About Tuning Policies” on page 1-8 for
more details.
Actions You can manually force/test a connection using Manual Connect and Manual
Disconnect actions.
fox-FoxFileSpace
FoxFileSpace maps "File:" remotely.
fox-FoxServerConnection
FoxServerConnection encapsulates a Fox (client) session connection to the station’s Fox server. It
represents user access (vs. access from another station). Child slots provide the connection state as well
as various historical data about last connect and disconnect activity.
Actions The available action is Force Disconnect.
fox-FoxService
FoxService is the Baja component wrapper for the FoxServer daemon. It is commonly used within
a NiagaraNetwork, but can be used standalone to provide basic Fox accessibility.
fox-FoxSession
FoxSession represents a Workbench (client) session to a station running on a NiagaraAX host
(server), using a Fox connection on a particular port—in other words, a station connection. You see
this as an expandable icon in Workbench’s Nav tree, followed by the station’s name, for any station
opened in Workbench.
Starting in AX-3.7, a secure Fox station connection (SSL or TLS) is also possible from Workbench, in
which case the station connection icon shows a small padlock.
Also starting in AX-3.7, Workbench provides a right-click Session Info command on any active
station connection, which produces a popup dialog with details on the current Fox station connection.
This is in addition to other right-click commands previously available, such as Disconnect, Close, Spy,
Save Station, Backup Station, and so on.
fox-ServerConnections
ServerConnections is the container slot to store the current server-side FoxServerConnections from
non-station clients. The primary view is the ServerConnectionsSummary. The ServerConnections is
available in the fox module.
niagaraDriver-BogProvider
BogProvider (“Sys Def Provider”) is a child container property in a NiagaraNetwork. The API
interacts with this component to query about the “Sys Def ” hierarchy, and persists this definition.
There are two options for storing all Sys Def nodes: either BOG, i.e. the station .bog, as the default
(using child ProviderStation components), or Orion (in an Orion database). Sys Def is chiefly of interest to
NiagaraAX developers working with the API.
niagaraDriver-CyclicThreadPoolWorker
The CyclicThreadPoolWorker (Workers) slot of a NiagaraNetwork allows “shared thread pool
size” tuning for large networks via a Max Threads property. By default, the value of this property is
50. If necessary, this can be adjusted. See “NiagaraNetwork component notes” on page 2-1 for further
details.
niagaraDriver-LocalSysDefStation
LocalSysDefStation (“Local Station”) is a child container property in a NiagaraStation. It reflects
common “Sys Def ” properties for the local station, which would be “sync’ed up” to a remote Super-
visor station (if the local station was defined as its subordinate). Sys Def is chiefly of interest to developers
extending the API.
niagaraDriver-NiagaraAlarmDeviceExt
A NiagaraStation’s NiagaraAlarmDeviceExt (Alarms extension) specifies how alarms from that
station are mapped into the current station’s own alarm subsystem, plus provides status properties
related to alarm sharing. For more details, see “About the Alarms extension” on page 1-35.
niagaraDriver-NiagaraFileDeviceExt
A NiagaraFileDeviceExt (“Files”) is included among the collection of device extensions under any
NiagaraStation component. This device extension allows the station to import files, and folders of
files, from the associated remote station. The default view is the Niagara File Manager, in which you
create NiagaraFileImport descriptors, to discover, add, and edit parameters specifying which files and
folders are to be imported. For more details, see “About the Files extension” on page 2-53.
niagaraDriver-NiagaraFileImport
NiagaraFileImport is an “import descriptor” that specifies a file or folder of files (including all
subfolders and files) that are to be imported to the local station, sourced from the remote parent
NiagaraStation. Included are properties for execution time and overwrite policies. For more details, see
“About Niagara FileImport properties” on page 2-55.
niagaraDriver-NiagaraFoxService
NiagaraFoxService (Fox Service) is a container slot of a NiagaraNetwork, and is a specialization of
FoxService that knows how to map server connections to the NiagaraStation serverConnection slot.
NiagaraFoxService typically includes ServerConnections. For more details, see “About the Fox Service”
on page 2-2.
niagaraDriver-NiagaraHistoryDeviceExt
NiagaraHistoryDeviceExt is the Niagara implementation of HistoryDeviceExt. For more details, see
“About the Histories extension” on page 1-34 and “About Histories extension views” on page 1-46.
niagaraDriver-NiagaraHistoryExport
NiagaraHistoryExport defines the export parameters for a local Niagara history, including
collection (“push”) times, current status, and remote history Id. NiagaraHistoryExports reside under
the History extension of a NiagaraStation in the NiagaraNetwork. For more details, see “Niagara History
Export properties” on page 2-34.
niagaraDriver-NiagaraHistoryImport
NiagaraHistoryImport defines the local import parameters for a remote Niagara history, including
collection (“pull”) times, current status, local history Id, and config overrides. NiagaraHistoryIm-
ports reside under the History extension of a NiagaraStation in the NiagaraNetwork. For more details,
see “Niagara History Import properties” on page 2-30.
niagaraDriver-NiagaraNetwork
NiagaraNetwork models NiagaraAX devices (NiagaraStations) that the current station can commu-
nicate with. The primary view is the StationManager. The NiagaraNetwork includes the NiagaraFox-
Service. For more details, see “About the Niagara Network” on page 2-1.
niagaraDriver-NiagaraPointDeviceExt
NiagaraPointDeviceExt is the Niagara implementation of PointDeviceExt. The primary view is the
PointManager. For general information, see “About the Points extension” on page 1-27, and for
specific details see “About the Bql Query Builder” on page 2-20 and “Niagara proxy point notes” on page
2-23.
niagaraDriver-NiagaraPointFolder
NiagaraPointFolder is the Niagara implementation of a folder under a NiagaraStation’s Points
extension. You add such folders using the New Folder button in the PointManager view of the Points
extension. Each NiagaraPointFolder has its own PointManager view.
niagaraDriver-NiagaraProxyExt
NiagaraProxyExt is the Niagara implementation of BProxyExt. For more details see “About proxy
points” on page 1-27 and “Niagara proxy point notes” on page 2-23.
niagaraDriver-NiagaraScheduleDeviceExt
NiagaraScheduleDeviceExt is the Niagara implementation of a Schedules device extension. For
general information, see “About the Schedules extension” on page 1-36, and for more specific details
see “Station Schedules import/export notes” on page 2-27.
niagaraDriver-NiagaraScheduleImportExt
NiagaraScheduleImport defines the local import parameters for a remote Niagara schedule. For
more details, see “Station Schedules import/export notes” on page 2-27.
niagaraDriver-NiagaraStation
NiagaraStation models a platform running a Niagara station via a “regular” (unsecured) Fox
connection, including a JACE, SoftJACE, or Supervisor.
NiagaraStation (with padlock on icon) models a platform running a Niagara station via a secure (SSL
or TLS) Foxs connection, including a JACE, SoftJACE, or Supervisor. Applies only if the
remote host is running AX-3.7 or later, is licensed for SSL (feature “crypto”), and is configured for Foxs.
In either case, the NiagaraStation name must map to Station.stationName. For more details, see “Niaga-
raStation component notes” on page 2-13.
For details related to a Foxs connection, see “Discovery notes when stations use secure Fox (Foxs)” on
page 2-9, and also “About the Fox Service” on page 2-2.
Ping Perform a ping test of the device.
niagaraDriver-NiagaraStationFolder
NiagaraStationFolder is the Niagara implementation of a folder under a NiagaraNetwork. You add
such folders using the New Folder button in the StationManager view of the Niagara network. Each
NiagaraStationFolder has its own StationManager view. Station folders can be useful in very large
systems to organize NiagaraStation components.
niagaraDriver-NiagaraSysDefDeviceExt
NiagaraSysDefDeviceExt (Sys Def) is included among the collection of device extensions under
any NiagaraStation component, and is chiefly of interest to developers. This Sys Def extension has
two child container properties, “RoleManager” and “SyncTask”, each with a collection of related
properties. Together with the NiagaraNetwork’s “Sys Def Provider” (BogProvider) component, the API
helps define the organization of stations in a NiagaraNetwork in a known hierarchy, and allows synchro-
nization of basic information up that hierarchy.
For more details, see “About Sys Def components” on page 2-50.
niagaraDriver-NiagaraSystemHistoryExport
NiagaraSystemHistoryExport defines the “system tags” text patterns used to export local Niagara
histories into the target NiagaraStation, in addition to other export parameters including collection
(“push”) times and current status. Along with NiagaraHistoryExport descriptors, NiagaraSystemHisto-
ryExport descriptors reside under the History extension of a NiagaraStation in the NiagaraNetwork.
However, SystemHistoryExport descriptors utilize “System Tags” properties of local history extensions,
instead of unique history IDs.
For more details, see “Niagara History Export properties” on page 2-34 and “Using System Tags to export
Niagara histories” on page 2-36.
niagaraDriver-NiagaraSystemHistoryImport
NiagaraSystemHistoryImport defines the “system tags” text patterns used to import remote
Niagara histories from the target NiagaraStation, in addition to local import parameters, including
collection (“pull”) times, current status, local history Id, and config overrides. Along with NiagaraHistoryImport
descriptors, NiagaraSystemHistoryImport descriptors reside under the History extension of a
NiagaraStation in the NiagaraNetwork. However, SystemHistoryImport descriptors utilize “System
Tags” properties of remote history extensions, instead of unique history IDs.
For more details, see “Niagara History Import properties” on page 2-30 and “Using System Tags to import
Niagara histories” on page 2-31.
niagaraDriver-NiagaraTuningPolicy
Contains properties used in the NiagaraNetwork’s handling of both write requests (e.g. to writable
proxy points) and the acceptable “freshness” of read requests of Niagara proxy points.
You can create multiple tuning policies under the NiagaraNetwork’s NiagaraTuningPolicyMap. You can
then assign one or more proxy points to a specific policy. See “About Tuning Policies” on page 1-8 for
general information, and “Niagara Tuning Policy notes” on page 2-2 for specific details.
niagaraDriver-NiagaraTuningPolicyMap
Container for one or more NiagaraTuningPolicy(ies), found in the NiagaraNetwork’s property sheet.
If needed, you can create multiple tuning policies. You can then assign one or more Niagara proxy
points to a specific policy.
niagaraDriver-NiagaraUserDeviceExt
The NiagaraUserDeviceExt (Users) container of a NiagaraStation holds properties that enable/
configure network user synchronization “in” and “out” of this station, in relation to the station with
its NiagaraNetwork. There is no special view (apart from property sheet) on this Users device extension,
nor is it a container for other components.
For more details, see “About the Users extension” on page 2-14.
niagaraDriver-NiagaraVirtualDeviceExt
The NiagaraVirtualDeviceExt (Virtual) is the Niagara driver implementation of the Baja virtual
gateway in an AX-3.7 or later station. A virtual gateway is a component that resides under the station’s
component space (Config), and acts as a gateway to the station’s “virtual component space.” Note other
object spaces are Files and History. For a general explanation about Baja virtual components, refer to
“About virtual component spaces” on page 1-24.
For details on the Niagara Virtual Gateway, see “About the Niagara Virtual Device Ext” on page 2-39.
niagaraDriver-ProviderStation
ProviderStation (“stationName”) is a child container under the BogProvider (“Sys Def Provider”)
container in a NiagaraStation. Each reflects common “Sys Def ” properties for the named station,
which have been “sync’ed up” from remote subordinate stations (note that one ProviderStation mirrors
the LocalSysDefStation). Note that these components, as well as some of their child properties, may be
hidden by default. Sys Def is chiefly of interest to developers extending the API.
niagaraDriver-RoleManager
RoleManager is a child container under a NiagaraStation’s “Sys Def” device extension (NiagaraSysDefDeviceExt)
that specifies and synchronizes the relationship of the remote station to the local
station. The “Sys Def” feature, introduced in AX-3.5, is chiefly of interest to developers extending the API.
niagaraDriver-SyncTask
SyncTask is a child container under a NiagaraStation’s “Sys Def” device extension (NiagaraSysDefDeviceExt)
that propagates changes to Sys Def components between stations that are in a
“master/subordinate” relationship. The “Sys Def ” feature, introduced in AX-3.5, is chiefly of interest to
developers extending the API.
niagaraVirtual-NiagaraVirtualBooleanPoint
NiagaraVirtualBooleanPoint is the Niagara driver implementation of a virtual BooleanPoint. It is
among the eight type-specific “Niagara virtual” components. For more details, see “About
Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualBooleanWritable
NiagaraVirtualBooleanWritable is the Niagara driver implementation of a virtual BooleanWritable.
It is among the eight type-specific “Niagara virtual” components. For more details, see “About
Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualCache
NiagaraVirtualCache (Cache) is the component representation of a NiagaraNetwork’s virtual
component cache. It provides a default Niagara Virtual Cache View to examine persisted cached
data (in file or files specified in the virtual Cache Policy). For more details, see “Niagara virtuals cache
(Virtual Policies)” on page 2-41.
niagaraVirtual-NiagaraVirtualCachePolicy
NiagaraVirtualCachePolicy (Cache Policy) is the container for a NiagaraNetwork’s cache of virtual
components, holding configuration properties as well as a component representing the Cache. For
more details, see “Niagara virtuals cache (Virtual Policies)” on page 2-41.
niagaraVirtual-NiagaraVirtualComponent
NiagaraVirtualComponents (or simply Niagara “virtuals”) are the Niagara driver implementation of
Baja Virtual components. They reside under the NiagaraVirtualDeviceExt (“Virtual” slot) of each
NiagaraStation. Typically, Niagara virtuals are possible only in a Supervisor station’s NiagaraNetwork
(the niagaraDriver feature in the host’s license must have the “virtual” attribute set to true). In this
scenario, if the “Virtuals Enabled” property of a NiagaraStation is set to true, the entire component
structure of that station can be dynamically modeled using Niagara virtuals.
Note that the Workbench icon for each Niagara virtual has a small “ghost” superimposed in the lower
right over the “normal” icon for that type—a visual reminder that you are looking into the “virtual
component space” that represents that station.
For more details, see “About Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualDeviceExt
The NiagaraVirtualDeviceExt (Virtual) is the Niagara driver implementation of the Baja virtual
gateway in an AX-3.7 or later station. A virtual gateway is a component that resides under the station’s
component space (Config), and acts as a gateway to the station’s “virtual component space.” Note other
object spaces are Files and History. For a general explanation about Baja virtual components, refer to
“About virtual component spaces” on page 1-24.
For details on this Niagara Virtual gateway, see “About the Niagara Virtual Device Ext” on page 2-39.
niagaraVirtual-NiagaraVirtualEnumPoint
NiagaraVirtualEnumPoint is the Niagara driver implementation of a virtual EnumPoint. It is
among the eight type-specific “Niagara virtual” components. For more details, see “About Niagara
virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualEnumWritable
NiagaraVirtualEnumWritable is the Niagara driver implementation of a virtual EnumWritable. It is
among the eight type-specific “Niagara virtual” components. For more details, see “About
Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualNumericPoint
NiagaraVirtualNumericPoint is the Niagara driver implementation of a virtual NumericPoint. It is
among the eight type-specific “Niagara virtual” components. For more details, see “About
Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualNumericWritable
NiagaraVirtualNumericWritable is the Niagara driver implementation of a virtual NumericWritable.
It is among the eight type-specific “Niagara virtual” components. For more details,
see “About Niagara virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualStringPoint
NiagaraVirtualStringPoint is the Niagara driver implementation of a virtual StringPoint. It is
among the eight type-specific “Niagara virtual” components. For more details, see “About Niagara
virtual components” on page 2-37.
niagaraVirtual-NiagaraVirtualStringWritable
NiagaraVirtualStringWritable is the Niagara driver implementation of a virtual StringWritable.
It is among the eight type-specific “Niagara virtual” components. For more details, see “About
Niagara virtual components” on page 2-37.
Components in tunnel module Chapter 5 – Component Guides
In any station, only one TunnelService is recommended. It can hold as many child tunnels as needed.
The TunnelService contains the following properties:
• Enabled
Boolean that must be true to support any tunnel server operations.
• Server Port
Software port monitored for incoming client tunnel connections. Default port is 9973.
• Status
TunnelService status, which should be ok (no special licensing required).
• Connections
Number of active tunnel connections, which ranges from 0 (none active) to the number of child tunnel
components.
tunnel-SerialTunnel
SerialTunnel is the “server side” component used to support tunneling of Windows serial-connected
PC applications to devices reachable in a station’s driver network. Typically, serial
tunneling is used with a “legacy” vendor-specific PC program to access RS-232 connected devices
attached to a JACE controller.
You can add one or more SerialTunnels under a station’s TunnelService. Each SerialTunnel associates
with one specific driver network (and corresponding JACE serial port). See “Serial tunneling” on page
3-2 for more details.
Properties of the SerialTunnel are described as follows:
• Enabled
Boolean slot that must be enabled (true) to permit tunneling.
• Connections
Read-only slot to indicate the number of tunnel connections, as either 0 (none) or 1 (maximum).
• Status
Read-only status of the serial tunnel, typically ok (unless in fault because no supporting COM port exists).
• Identifier
Read-only “reflection” of the entered Port Name slot in the Serial Port Config container (below),
used as the “Tunnel Name” when configuring the client-side Serial Tunneling dialog.
• Serial Port Config (container)
Holds configuration of the JACE serial port as used by the specific driver network. See SerialHelper
on page 11 for slot descriptions.
In addition, a SerialTunnel has an available (right-click) action:
• Disconnect All
Disconnects any active connection through this SerialTunnel (maximum of 1), causing removal of
the “TunnelConnection” below it. On the remote (serial tunnel client) side, a popup message “Con-
nection closed by remote host” is seen.
Note: Any TunnelConnection component also has its own “Disconnect” action, which effectively
performs the same function.
tunnel-TunnelConnection
TunnelConnection is a dynamically-added component under a tunnel component (such as a
SerialTunnel) that reflects read-only information about the current tunnel connection.
Properties of a TunnelConnection (any tunnel type) are as follows:
• Established
Date-timestamp of when this tunnel connection was first established.
• User Name
User in the station that is currently tunneling.
• Remote Host
Windows hostname or IP address of the remote tunneling client.
• Protocol Version
Version of the (remote) NiagaraAX tunneling client application being used.
• Last Read
Date-timestamp of when the last read of a station item occurred over the tunnel connection.
• Last Write
Date-timestamp of when the last write to a station item occurred over the tunnel connection.
In addition, a TunnelConnection has an available (right-click) action:
• Disconnect
Disconnects the active tunnel connection, removing the parent TunnelConnection component.
This causes a popup “Connection closed by remote host” to be seen on the client tunnel side.
Note: A SerialTunnel component also has its own “Disconnect All” action, which effectively performs
the same function.