Micro Focus Security ArcSight Logger: Configuration and Tuning Best Practices
ArcSight Logger
Software Version: 7.2
Legal Notices
Micro Focus
The Lawn
22-30 Old Bath Road
Newbury, Berkshire RG14 1QN
UK
https://www.microfocus.com
Copyright Notice
© Copyright 2021 Micro Focus or one of its affiliates
Confidential computer software. Valid license from Micro Focus required for possession, use or copying. The
information contained herein is subject to change without notice.
The only warranties for Micro Focus products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Micro Focus shall not be liable for technical or editorial errors or omissions contained herein.
No portion of this product's documentation may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, recording, or information storage and retrieval systems, for any purpose other
than the purchaser's internal use, without the express written permission of Micro Focus.
Notwithstanding anything to the contrary in your license agreement for Micro Focus ArcSight software, you may
reverse engineer and modify certain open source components of the software in accordance with the license terms for
those particular components. See below for the applicable terms.
U.S. Governmental Rights. For purposes of your license to Micro Focus ArcSight software, “commercial computer
software” is defined at FAR 2.101. If acquired by or on behalf of a civilian agency, the U.S. Government acquires this
commercial computer software and/or commercial computer software documentation and other technical data subject
to the terms of the Agreement as specified in 48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the
Federal Acquisition Regulation (“FAR”) and its successors. If acquired by or on behalf of any agency within the
Department of Defense (“DOD”), the U.S. Government acquires this commercial computer software and/or
commercial computer software documentation subject to the terms of the Agreement as specified in 48 C.F.R.
227.7202-3 of the DOD FAR Supplement (“DFARS”) and its successors. This U.S. Government Rights Section 18.11 is in
lieu of, and supersedes, any other FAR, DFARS, or other clause or provision that addresses government rights in
computer software or technical data.
Trademark Notices
Adobe™ is a trademark of Adobe Systems Incorporated.
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Documentation Updates
The title page of this document contains the following identifying information:
l Software Version number
l Document Release Date, which changes each time the document is updated
l Software Release Date, which indicates the release date of this version of the software
To check for recent updates or to verify that you are using the most recent edition of a document, go to:
https://www.microfocus.com/support-and-services/documentation
Support
Contact Information
Phone: A list of phone numbers is available on the Technical Support page: https://softwaresupport.softwaregrp.com/support-contact-information
About Logger
Logger is a log management solution that is optimized for extremely high event throughput,
efficient long-term storage, and rapid data analysis. Logger receives and stores events;
supports search, retrieval, and reporting; and can optionally forward selected events.
Logger is built for fast event insertion and forwarding, and high performance search and
analysis. However, when these activities occur simultaneously, Logger components compete for
resources and can affect Logger’s performance. Other factors that affect Logger performance
include the network environment, the complexity of the functions you are performing, the
Logger type, and how you have Logger configured.
Many factors can affect Logger’s search speed and scan rate, as well. Factors include, among
other things, the size of the data set to be searched, the complexity of the query, and whether
the search is distributed across peers.
When deploying and configuring Logger or troubleshooting it to achieve optimum
performance, follow the guidelines discussed in this guide. If you need additional guidance,
contact Micro Focus ArcSight Customer Support.
Chapter 1: Input and Output Components
The following sections discuss factors to consider and provide guidelines for configuring Logger
input and output components.
Web Connections
Prior to version 7.0, Logger was limited to 250 connections. Logger now supports up to
1000 simultaneous HTTPS connections. These connections can come from web browsers
connecting to the Logger Web UI, from connectors to SmartMessage receivers configured on
the Logger, from API clients, and from peer Loggers.
To review the connections (including peer searches and SmartConnectors), run the following
Linux command:
netstat -atlnp | grep <port>
The established connections are the ones we are interested in. To get a count of the established
connections to your Logger, run the following Linux command:
netstat -ntlap | grep 443 | grep httpd | grep ESTABLISHED | wc -l
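To see which hosts hold those established connections (useful when deciding whether to aggregate connectors), a variation such as the following can be used; it is an illustration only and assumes IPv4 addresses:
netstat -ntap | grep ':443' | grep ESTABLISHED | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn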
Apache supports up to 1000 simultaneous connections by default (as defined by the MaxClients
value in the httpd.conf file). Logger’s HTTPS connections are a subset of these. To determine
the number of Apache processes currently running on your system, run the following Linux
command:
ps aux | grep httpd
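To get just a count of the running httpd processes, and to confirm the configured connection limit, you can use commands like the following (the bracketed pattern excludes the grep process itself; substitute the actual location of your httpd.conf file):
ps aux | grep -c '[h]ttpd'
grep -i MaxClients <path_to_httpd.conf>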
Connectors
Connectors that send events to Logger vary greatly in their peak throughput. Simple
connectors sending smaller events will have a higher throughput than more complex
connectors sending larger events.
Logger supports more than 4000 simultaneous HTTPS connections. If you have a large number
of connectors connecting individually to SmartMessage receivers on Logger, consider
aggregating connectors.
Receivers
There is no limitation on the number or type of receivers, or on their maximum throughput.
However, adding more than 40 to 50 receivers may affect performance. A high incoming event
rate and large event size can affect the performance of a receiver. The recommended
maximum total events per second (EPS) incoming rate is 15K. The connectors that send events
to the Logger may have limits on their throughput.
To monitor the event flow on each receiver through alerts and for better granularity in
searches, use an individual receiver for each connector sending events to Logger. Do not reuse
deleted receiver names. Logger retrieves old device information when conducting a device
search.
As with other considerations related to scope, questions about how to configure receivers and
how many connectors should send data to any given receiver are best answered when taking
the entire environment into consideration (data type, usage requirements, and so on). The
Micro Focus ArcSight Professional Services team can do full scoping of such scenarios. Contact
your local Micro Focus ArcSight Sales representative for more details.
Devices
A device is a named event source, comprising the IP address or hostname of the event sender
and the name of the receiver that receives the event. Therefore, a host or connector that sends
events to two different receivers on the same Logger is recognized as two different devices.
Device Groups
Device groups classify events received from various devices. For example, device A and device
B events could be stored in Device Group AB and device C events could be stored in Device
Group C. There is no limit on the number of device groups on a Logger.
You can write storage rules that direct events from specific device groups to storage groups.
Also, you can include device groups in queries to limit the data set that Logger must scan, thus
resulting in faster searches.
Forwarders
Forwarders send events received on Logger to specific destinations such as ESM, other
connectors, or other Loggers. Logger uses its onboard connector when forwarding events to
ESM. You can forward all events, use out-of-box filters, or write queries to forward only
specific events. You can forward events continuously in real-time or only forward events for a
specified time range.
The rate at which a forwarder sends events depends on a number of factors, including the
number of forwarders, the size of the events, and the complexity of the query used to filter the
events. Larger events and more complex queries can lower the events per second (EPS) out
rate.
A Logger without filters can forward from 10K to 16K EPS, depending upon the forwarder type.
TCP Forwarders have higher EPS rates than Forwarders using UDP or the onboard connector.
Forwarding CEF events from Logger to a Syslog destination provides better throughput than
forwarding to other destinations.
When filtering events for forwarding, Logger must evaluate each event against the query to
determine whether to forward it to the destination. This slows down the forwarding rate. The
more complex the query is, the slower the forwarding rate. (A complex query typically includes
a number of Boolean expressions, such as a regular expression with multiple OR operators.)
Caution: Do not add more than five forwarders, as doing so may reduce Logger's performance.
Instead, add another Logger to distribute the forwarding load. Forwarders have to compete
for the same Logger resources and onboard connector in high-EPS situations, or when other
resource-intensive features (alerts, reports, and several search operations) are running in
parallel with a complex forwarding filter.
l Ensure that the forwarder’s destination can keep up with the forwarded events. Otherwise,
add another forwarding destination.
l Adding a second logical ESM destination to increase the outbound EPS limit may affect
Logger's performance.
l To increase the forwarding rate to an ESM destination, create a secondary ESM destination
with a secondary forwarder.
l To increase the outbound EPS limit for forwarders with filters, move the filtering operation
from the Logger forwarder to the source connectors and devices. Doing so removes the need
to filter events, and you can then forward all events.
l Avoid forwarding events across a wide-area network (WAN).
l Use one forwarder and apply a filter-out filter on the connector resource in ESM to exclude
data that you do not want to forward.
l While adding additional forwarders can increase EPS throughput to ESM, configure only one
ESM destination for each ESM server. Each additional ESM destination shares memory with
all configured ESM destinations, which can cause contention and potential connector failure
if oversubscribed. As a workaround, you can increase the Logger onboard connector memory
from 256 MB to 512 MB from the ESM console, or logically separate the events into several
active channels once they arrive at ESM.
l When separating incoming filtered events on ESM, use Active Channel filters instead of
creating multiple Active Channels on multiple incoming Logger connectors.
l Separate the events from the source connectors into two streams, each one of them going to
a dedicated receiver. Use one stream for the events that need to be forwarded to ESM and
the other stream for the events that do not need to be forwarded. Then, define a filter
condition on the device or device group receiving the events from the first stream. Doing so
enables you to configure an efficient filter condition.
l To forward events from Logger to ESM, use a Syslog connector to send events to Logger. If a
different method such as Netcat is used, the events are forwarded to Logger but not to ESM.
Chapter 2: Storage Components
The following sections provide guidelines for configuring Logger storage components.
Storage Volume
Storage volume defines Logger's primary storage space. Although you can increase the size of
an initially defined storage volume, follow these guidelines for optimal use of available storage
space and expected performance.
Micro Focus ArcSight recommends using NFS for archive storage. Using NFS as primary
storage may result in sub-optimal performance and reliability.
l You can increase the size of a Storage Volume, but you cannot decrease it. Each Logger
model has a maximum allowed Storage Volume size.
l On a SAN Logger appliance, make sure that you allocate the maximum size logical unit
number (LUN) during initial Logger setup. Logger cannot detect a resized LUN. Therefore, if
you change the LUN size after it has been mounted on a Logger, Logger may not recognize
the new size.
Storage Groups
Storage groups enable you to implement different retention policies. Therefore, data stored in
one storage group can be held for longer or shorter time than another group.
Note: The names of the Internal Storage Group and Default Storage Group cannot be modified.
User-created storage groups can be renamed if necessary.
In many cases, storage group retention policies are dictated by compliance requirements, such
as PCI. However, such requirements might not be met if the storage groups fill up, because the
oldest events could be purged automatically to make room for incoming events, even if they
are still within the retention period. Even if you set the Default Storage Group to a 365-day
retention period, you should not simply assume that all the data will still be there on day 365 in a
growing environment. As your environment grows, it is important to re-scope your requirements
for receivers, forwarders, retention policies, number of Loggers, and so on, accordingly. The
Micro Focus ArcSight Professional Services team can do full scoping of such scenarios. Contact
your sales representative for more details.
On the other hand, set the retention policy period for a date beyond the maximum live data
age.
Tip: Configure alerts to notify the appropriate users when the Storage Group usage gets too high
and defragment the database at those times. Additionally, review your archive setup and
retention policy, and confirm that it is set up correctly. For more information, see "Notifications"
on page 12, "Disk Space and Database Fragmentation" on page 17, and the Logger Administrator’s
Guide.
Storage Rules
Storage rules direct events from specified device groups to specific storage groups. Use
storage rules to direct events to the correct storage group. You can set up as many as 40
storage rules to store events in storage groups with different retention periods.
A storage group stops saving events after it has reached its maximum capacity; even after the
limit is exceeded, the storage rules continue to send events to the storage group. To check the
allocated and used space, go to Configuration > Storage Groups.
Tip: Your system uses the Simple Mail Transfer Protocol (SMTP) to send email notifications
such as alerts and password-reset emails. You can optionally configure authentication and TLS.
The following sections discuss factors to consider and provide guidelines for configuring alerts
on Logger.
Real-time Alerts
Alerts are triggered in real time. That is, when a specified number of matches occur within the
specified threshold, an alert is immediately generated. Although any number of real-time
alerts can be defined, a maximum of 25 real-time alerts can be enabled at one time. To enable
an additional alert, you will need to disable a currently enabled alert.
Note: If you have the maximum number of alerts enabled and the receiver EPS is higher than
30K, you may see some slow-down in the receiver EPS in order to preserve search performance.
You can use preconfigured filters to specify event patterns when creating alerts.
Tip: Save a copy of a preconfigured filter and edit the copy to meet your business needs (or just
write your own.) Refer to the Logger Administrator’s Guide for more information.
The particular filters available depend on your Logger version and model, but may include:
l System Alert - Disk Space Below 10% (CEF format)
l System Alert - Root Partition Free Space Below 10% (CEF format)
l System Alert - Storage Group Usage Above 90% (CEF format)
Use the system filters for real-time alerts to quickly find and handle system or hardware issues.
Create saved-search alerts for other things, such as log source alerts.
Real-time alerts can affect system performance, especially if many other resource-intensive
features are running on Logger in parallel.
To avoid confusing results when the system time zone is set to US/Pacific-New, set the
system time zone to a specific region, such as America/Los_Angeles.
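On a Software Logger host whose operating system uses systemd (an assumption; appliance images and older operating systems may use different tooling, and appliances are normally configured through the UI), the time zone can be checked and changed from the command line, for example:
timedatectl status
timedatectl set-timezone America/Los_Angeles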
Restoring Archives
Events are not copied back to local storage when event archives are loaded. Instead, a pointer
to the archive is activated and it is included in queries.
When you load event archives that have been archived offline but have not yet been affected
by the system's retention policy, Logger searches against the loaded archive instead of the
same data that is still local to Logger, resulting in a much slower search.
Note: Even though an archive has been created, you cannot load an archive for data that is still in
current storage. Loading the archive will fail if that data has not already passed its retention date
and been aged out of current storage.
While there is no limit to how many archives can be loaded, loading archives increases the size
of the metadata table, which slows queries. If you load a large number of archives, searches on
the regular data may be slower. How much slower depends on how much data is in the archives
and on how much regular indexed data is in the system.
Tip: If you have a lot of archive material to restore, a freshly-installed Logger that has had a
Configuration Backup applied may provide the fastest restoration. Remember to attach the same
archive mount names to the new Logger.
Note: Do not confuse disk space usage under the root disk (/) with usage under /opt/data
where events are stored. The area under /opt/data is always 100% full when pre-allocation is
configured during initialization.
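To view both usage figures side by side, you can run a standard df command; the mount points shown are taken from the note above and may differ on your installation:
df -h / /opt/data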
As the Logger database expands, more indexing is required and there are more events to scan.
This can result in decreased search speed. To help maintain and improve search speed as your
database grows, defragment the database annually. You should also run a defragmentation if
you observe a slow-down or if you see a message in the UI or in the postgres log that
recommends doing so.
Tip: You can configure alerts to notify the appropriate users when the free space gets too low,
and defragment the database at those times. See "Notifications" on page 12 and the Logger
Administrator's Guide for more information.
hprof Files
On Software Loggers and Logger Appliances that have SSH enabled, you can reclaim some disk
space by removing old hprof files (Java heap dumps). You can find and remove the xxx_yyy.hprof
files from the /current/arcsight/logger/ directory.
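One way to locate these files, and to delete them once you have confirmed they are no longer needed, is a find command such as the following (the directory is the one mentioned above; the full path prefix depends on your installation):
find /current/arcsight/logger/ -name '*.hprof' -ls
find /current/arcsight/logger/ -name '*.hprof' -delete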
Saved Searches
You can delete custom saved searches as well as old instances of the search output that have
accumulated over time. You can delete published instances of a saved search or alert from the
Configuration | Search > Saved Search Files page. You can delete an unnecessary saved
search or alert itself from the Configuration | Search > Scheduled Searches/Alerts page.
Reports
You can delete custom reports as well as old instances of the report output that have
accumulated over time. Please be certain that you want to remove these old reports, and do so
carefully. You can delete published instances from the List Published Outputs page, accessed
by right-clicking the report in the Report Explorer. You can delete an unnecessary report itself
by using the right-click menu in the Report Explorer.
Important: The Classic Search page has been deprecated. Micro Focus recommends using the
equivalent function on the Search page to conduct your searches instead.
Indexing
When searching for uncommon field values, use superindexing to narrow the range of data
that needs to be searched. To check the indexed fields in your system, open the Configuration
> Search > Default Fields page and look for the fields with a check mark in the Indexed
column. Once a field has been indexed, it cannot be removed.
To optimize search performance, ensure that you follow these recommendations:
l Enable field-based indexing for all fields that occur in your events. When events are indexed,
Logger can quickly and efficiently search for relevant data. By default, a recommended set
of fields are indexed on your Logger.
l To avoid performance degradation in certain scenarios, index only the fields necessary for
your environment and queries.
l Search performance is impacted when you include non-super-indexed fields, or field
operators other than =, in a search that is unlikely to retrieve results.
l Allow time between adding a field to the index and using it in the search query. If Logger is
in the process of indexing a field while using that field in a query, the search performance
for that operation will be slower than expected.
l A search will run slower if the query's time range covers a period during which a currently
indexed field was not yet indexed.
Also, some fields cannot be indexed. A query such as <non indexed> CONTAINS
"username" would slow the search process. Instead, narrow the data set with an indexed
term first, so the query is not slowed by the non-indexed field:
name = "TCP_MISS" | where <non indexed item> CONTAINS "username"
Even though a search query includes only indexed fields, you might not realize the
performance gain you expect in these situations:
l When you perform search on data in a time range in which a currently indexed field
(included in the query) was non-indexed, the query will run at the speed you would expect if
the field was not indexed. This is because new indexing information is not applied to
previously stored events.
For example, you index the “port” field on August 13th at 2:00 PM. You run a search on
August 14th at 1:00 PM. to find events that include port 80 and occurred between August
11th and August 12th. The “port” field was not indexed between August 11th and the 12th.
As a result, the search defaults to a slower, non-indexed search.
l When a query that includes indexed fields is performed on archived events, the query runs
slower than when the data was not archived. This occurs because the index data is not
archived with events. As a result, the search defaults to a slower non-indexed search.
l When you include a field in your search query that Logger is in the process of indexing, the
query will run slowly. This issue is discussed in "High Event Input" on page 24.
For more information on how to write super-indexed field queries optimally, including
examples, consult the Logger Administrator’s Guide.
Tip: The Global Summary includes the date and time of the most recently indexed data. To check
whether your index is up to date, compare the Global Summary date and time with the Logger
system time, which is on the same page. The Global Summary displays text in the following
format: There are 22,743 events indexed from 2015/03/12 20:15:01:546 EDT to 2015/03/13
17:05:16:375 EDT. The system time can be found by hovering over the upper right-hand corner
of the Logger Summary page. If the two timestamps are nearly equal, Logger indexing is working.
To avoid searching data that has not yet been indexed, run a fixed-time search that does not include the last two minutes.
l If this is a recurring problem, make sure that your environment is sized correctly. The Micro
Focus ArcSight Professional Services team can do full scoping of such scenarios. Contact your
local Micro Focus ArcSight Sales representative for more details.
l On the Connector, turn aggregation on to lower the number of duplicate events. This will
also lower the EPS rate.
l Use the Search Analyzer tool to determine if the fields used in your query are indexed. See
the Logger Administrator's Guide for details.
l Micro Focus ArcSight recommends sending old events during the lowest-EPS ingestion
periods. While sending the old events, use a receiver that is not also receiving current-time
events. Additionally, send the oldest events to a separate storage group.
l Use metadata such as device group, storage group, and peers instead of Boolean operators
to filter events, where possible.
Tip: Including storage groups and peers in search queries is more efficient than including
device groups. Use storage groups and peers in the query as much as possible, to reduce the
amount of data searched.
l Specify an indexed structured search that reduces the data set size before using a regular
expression query term.
Authentication
For security reasons, Micro Focus ArcSight recommends that you use authorization IDs to
establish peer relationships.
l If the remote Logger is configured for SSL Client authentication (CAC), you must configure an
authorization ID and code on the initiator Logger.
l If user name and password are used for authenticating to a remote peer Logger, the
credentials are only used one time, during the peering relationship set up. After a
relationship has been established, the credentials are not saved (on the Peer Loggers page)
and the peers do not authenticate periodically. Therefore, if the user name or password used
to establish a relationship is changed at a later date or the user name is deleted, the peering
relationship is not broken. However, if you delete the peering relationship or it breaks for
other reasons, you will need to enter the updated credentials to re-establish the relationship.
When you run a peer search, initiate queries that do not explicitly use the CEF operator from a
Logger running version 5.2 or later. A query that does not use CEF-defined fields will run if it is
initiated on a Logger running version 5.2 or later. However, if the query is initiated on a Logger
running version 5.1 or earlier (before the CEF operator was deprecated), it will fail. For more information,
see "Improving Search Performance" on page 24.
l Peer search speed improvements gained by using search heads apply only to searches run
through the user interface. Using search heads does not improve the speed of scheduled
searches or searches run through Logger Web Services.
l Ensure that the device and storage groups specified in the query exist on all peers. Peers on
which a device or storage group does not exist are skipped.
l Make sure that event fields on ALL peers are indexed for the time range specified in a query.
If an event field is indexed on a local Logger but not on its peers for a specific time range, a
distributed search will run at optimal speed on the local Logger but more slowly on the peer
Loggers. Therefore, the overall search performance in such a setup will be slow.
l For peers with different schema, make sure that your searches and reports only involve
fields that have the same name and data type on all peers. Otherwise, the search or report
will fail.
l When peers of mixed Logger versions are involved in the same search, the search features
you can use are determined by capabilities of the peer with the earliest, and therefore most
limited, version.
l Using search heads enables faster peer searches, particularly for searches that use aggregation
operators such as chart, sort, and top. To improve search results, specify in your query all peers
to be searched and exclude the local Logger.
l For slow peer searches caused by a high number of connectors sending high EPS, update the
Logger peer event destination to port 8443 or 9000 for software non-root installations.
For details of available capabilities, such as available search operators, refer to the release
notes of the earliest peer Logger.
Tip: When scheduling published reports, Micro Focus ArcSight recommends that you
change the retention period to 1 week after generation. To do this, use the following option
on the Add Report Job page:
Valid Upto <N> <Unit of time> After Generation
l Running large reports can take up a lot of space temporarily. Reports can fail if space is
limited.
Tip: Configure alerts to notify the appropriate users when free space is scarce. For more
information, see "Notifications" on page 12, "Disk Space and Database Fragmentation" on
page 17, and the Logger Administrator’s Guide.
l To run reports with millions of events more efficiently, increase the memory heap size for
the report engine. Also consider using the FastCSV report output format, which is suggested
for large reports. For more information, see the Java Memory Allocation section in the Logger
Administrator's Guide, or contact Support.
Tip: You can increase the memory size based on the memory available in your environment.
Take into consideration the memory allocated to other Logger processes before proceeding.
l Specify a scan limit for reports run manually. The default value is 100,000. When you specify
a scan limit, the latest N events are scanned. This results in faster report generation and is
beneficial when you want to process only the latest events in the specified time range
instead of all the events stored in Logger.
l In addition to the search fields, all fields displayed in the report should be indexed, unless a
local Logger base-search report is used. In addition to the fields in the WHERE clause of the
query, the fields in the SELECT clause also need to be indexed. However, some fields cannot
be indexed. A query such as <non indexed> CONTAINS "username" would slow the search
process. Instead, narrow the data set with an indexed term first, so the query is not slowed by
the non-indexed field: name = "TCP_MISS" | where <non indexed item> CONTAINS "username".
A list of the default fields, along with their index status, is available on the Default Fields tab
(Configuration > Search > Default Fields).
l Select specific fields, and avoid patterns that will return too many hits.
o Use queries like the following:
Select <fieldName1>, <fieldName2> … from events
o Avoid queries like the following:
Select * from events.
o For large reports, add a filter that specifies the fields of interest, such as the following, to
the SQL before the sort and order condition:
select events.arc_sourceAddress, events.arc_destinationAddress,
events.arc_destinationPort,
events.endTime
from events where events.arc_destinationPort >22 and
events.arc_categoryOutcome="/Failure" ;
o Avoid using order by directly on the event table. Use queries like the following:
Select ... from <tableName> where <fieldName> =... order by <fieldName>
l Numeric functions:
abs, ceiling, floor, round, sign, truncate
l Date/time functions:
cast, dayofmonth, hour, minute, month, second, str_to_date, time_to_sec,
unix_timestamp, year
Note: A Logger Appliance with a failed hard drive will display a warning message. Micro
Focus ArcSight strongly recommends that you contact support immediately to get the drive
replaced.
Authentication
If you are using LDAP or RADIUS authentication, Micro Focus ArcSight strongly recommends
configuring a backup LDAP/RADIUS server to help ensure uninterrupted access to Logger.
Interface
To improve NIC performance when using more than one network interface, enable the option
to automatically route outbound packets. You can also create an alias for each IP address.
However, you cannot modify the interface speed.
System Health
To monitor Logger’s health and performance, review the system health events by using Simple
Network Management Protocol (SNMP) or Logger search. For more information, see
"Notifications" on page 12 and the Logger Administrator’s Guide.
License
Micro Focus ArcSight recommends transitioning to an EPS license before the GB Standalone
license is no longer supported. For more information, contact your sales representative.
If a license is installed, make sure to restart your Logger after the license update.
Logger can no longer display GB data in the license usage page after an EPS license is installed.
Even if the ArcMC management option is enabled in Admin > Options, Logger will be
managed by ArcMC only if there is communication between Logger and ArcMC.
Restarting
To properly restart the system, use ./loggerd restart all. You can also run ./loggerd stop
all and then reboot the system. A reboot of the system without stopping the processes first may
leave Logger in an inconsistent state.
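A minimal stop-and-reboot sequence, assuming a Software Logger installed under <install_dir> (adjust the path to your environment), looks like this:
cd <install_dir>/current/arcsight/logger/bin
./loggerd stop all
reboot
Once the host is back up, you can verify that all processes started correctly with ./loggerd status.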
Chapter 10: Web Services
The Logger Service Layer exposes Logger functionalities as Web services. By consuming the
exposed Web services, you can integrate Logger functionality in your own applications. Using
the Web service APIs, you can create programs that execute searches on stored Logger events
or run Logger reports, and feed them back to your third-party system.
The following excerpt from a daily_data_usage report query calculates the total raw data size and average event size (the beginning of the select list, including the event-count column, is not shown here):
sum((events.arc_deviceCustomString4)/1048576) as "Total_raw_size_MB",
(sum(events.arc_deviceCustomString4)/sum(events.arc_deviceCustomNumber3)) as
"avg_event_size_bytes"
from events
This daily_data_usage report covers a 24-hour period and returns the average raw event size, total event count, and data usage. The same calculation, grouped by date, looks like this:
sum((events.arc_deviceCustomString4)/1048576) as "Total_raw_size_MB",
(sum(events.arc_deviceCustomString4)/sum(events.arc_deviceCustomNumber3)) as
"avg_event_size_bytes"
from events
group by date