IBM Tivoli Storage Manager
Performance Tuning Guide
Version 6.1
GC23-9788-01
Note: Before using this information and the product it supports, read the information in "Notices" on page 59.
This edition applies to Version 6.1 of IBM Tivoli Storage Manager and to all subsequent releases and modifications until otherwise indicated in new editions or technical newsletters. Copyright International Business Machines Corporation 1996, 2009. US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Preface
  Who should read this guide
  Publications
    Tivoli Storage Manager publications
  Support information
    Getting technical training
    Searching knowledge bases
    Contacting IBM Software Support

Chapter 1. Overview of IBM Tivoli Storage Manager tuning

Chapter 2. IBM Tivoli Storage Manager server performance tuning
  Tuning server options
    DBMEMPERCENT
    DISKSTGPOOLMEMSIZE
    EXPINTERVAL
    MAXSESSIONS
    MOVEBATCHSIZE and MOVESIZETHRESH
    RESTOREINTERVAL
    TCPNODELAY
    TCPWINDOWSIZE
    TXNGROUPMAX
  Server hardware recommendations
  Database manager for IBM Tivoli Storage Manager
    Database and log performance
    Database manager tuning
  Backup performance
  Tuning inventory expiration
  Disaster recovery performance
  Searching the server activity log
  Scheduling sessions and processes
  LAN-free backup
  Maximum number of mount points for a node
  Managing storage pools and volumes
    Cached disk storage pools
    Tuning storage pool migration
  Improving storage agent performance
  Modifying the IBM Tivoli Monitoring environment file for reporting performance
  Performance improvement activities for server platforms
    Actions for better performance on all server platforms
    AIX server performance
    AIX: vmo and ioo commands
    UNIX file systems
    HP-UX server
    Linux server
    Sun Solaris server
    Windows server
  Estimating throughput in untested environments
  Tuning tape drive performance
    Using collocation with tape drives
    Tape drive transfer rate
  Tuning disk performance
  Busses

Chapter 3. Client performance tuning
  Windows client
  Client performance recommendations for all platforms
  Hierarchical Storage Manager tuning
  Data Protection for Domino for z/OS

Chapter 4. Administration Center performance tuning
  Administration Center capacity planning
    Maximum number of active administrators
    Processing capacity
    I/O throughput
    Processing memory
    Java heap memory size
  Administration Center setup recommendations
    Installation requirements
    Locating the Administration Center
    Minimizing memory usage
    Optimizing Windows Server 2003 memory
    Using the default action
  Tuning Administration Center performance
    Tuning processor performance
    Tuning network performance
    Tuning memory performance

Chapter 5. Network tuning
  TCP/IP communication concepts and tuning
    Sliding window
  Networks
    Limiting network traffic
  AIX network settings
    MTU and MSS settings
  NetWare client cache tuning
    Maximum transmission unit settings
  Sun Solaris network settings
  z/OS network settings
    USS client with IBM TCP/IP for z/OS
    TCP/IP and z/OS UNIX system services performance tuning

Glossary

Index
Preface
This publication helps you tune the performance of the servers and clients in your IBM Tivoli Storage Manager environment. Before using this publication, you should be familiar with the following areas:
- The operating systems on which your IBM Tivoli Storage Manager servers and clients reside
- The communication protocols installed on your client and server machines
Publications
Tivoli Storage Manager publications and other related publications are available online. You can search all publications in the Tivoli Storage Manager Information Center: http://publib.boulder.ibm.com/infocenter/tsminfo/v6.

You can download PDF versions of publications from the Tivoli Storage Manager Information Center or from the IBM Publications Center at http://www.ibm.com/shop/publications/order/. You can also order some related publications from the IBM Publications Center Web site. The Web site provides information for ordering publications from countries other than the United States. In the United States, you can order publications by calling 800-879-2755.
Table 1. Tivoli Storage Manager server publications (continued)

Publication title                                                        Order number
IBM Tivoli Storage Manager for Linux Installation Guide                  GC23-9783
IBM Tivoli Storage Manager for Linux Administrator's Guide               SC23-9771
IBM Tivoli Storage Manager for Linux Administrator's Reference           SC23-9777
IBM Tivoli Storage Manager for Sun Solaris Installation Guide            GC23-9784
IBM Tivoli Storage Manager for Sun Solaris Administrator's Guide         SC23-9772
IBM Tivoli Storage Manager for Sun Solaris Administrator's Reference     SC23-9778
IBM Tivoli Storage Manager for Windows Installation Guide                GC23-9785
IBM Tivoli Storage Manager for Windows Administrator's Guide             SC23-9773
IBM Tivoli Storage Manager for Windows Administrator's Reference         SC23-9779
IBM Tivoli Storage Manager Server Upgrade Guide                          SC23-9554
IBM Tivoli Storage Manager for System Backup and Recovery Installation and User's Guide   SC32-6543

Table 2. Tivoli Storage Manager storage agent publications

Publication title                                                              Order number
IBM Tivoli Storage Manager for SAN for AIX Storage Agent User's Guide          SC23-9797
IBM Tivoli Storage Manager for SAN for HP-UX Storage Agent User's Guide        SC23-9798
IBM Tivoli Storage Manager for SAN for Linux Storage Agent User's Guide        SC23-9799
IBM Tivoli Storage Manager for SAN for Sun Solaris Storage Agent User's Guide  SC23-9800
IBM Tivoli Storage Manager for SAN for Windows Storage Agent User's Guide      SC23-9553

Table 3. Tivoli Storage Manager client publications

Publication title                                                                                     Order number
IBM Tivoli Storage Manager for UNIX and Linux: Backup-Archive Clients Installation and User's Guide   SC23-9791
IBM Tivoli Storage Manager for Windows: Backup-Archive Clients Installation and User's Guide          SC23-9792
IBM Tivoli Storage Manager for Space Management for UNIX and Linux: User's Guide              SC23-9794
IBM Tivoli Storage Manager for HSM for Windows Administration Guide                           SC23-9795
IBM Tivoli Storage Manager Using the Application Program Interface                            SC23-9793
Program Directory for IBM Tivoli Storage Manager z/OS Edition Backup-Archive Client           GI11-8912
Program Directory for IBM Tivoli Storage Manager z/OS Edition Application Program Interface   GI11-8911
Table 4. Tivoli Storage Manager Data Protection publications

Publication title                                                                                                           Order number
IBM Tivoli Storage Manager for Advanced Copy Services: Data Protection for Snapshot Devices Installation and User's Guide   SC33-8331
IBM Tivoli Storage Manager for Databases: Data Protection for Microsoft SQL Server Installation and User's Guide            SC32-9059
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for UNIX and Linux Installation and User's Guide       SC32-9064
IBM Tivoli Storage Manager for Databases: Data Protection for Oracle for Windows Installation and User's Guide              SC32-9065
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Installation and User's Guide for DB2     SC33-6341
IBM Tivoli Storage Manager for Enterprise Resource Planning: Data Protection for SAP Installation and User's Guide for Oracle  SC33-6340
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for UNIX, Linux, and OS/400 Installation and User's Guide   SC32-9056
IBM Tivoli Storage Manager for Mail: Data Protection for Lotus Domino for Windows Installation and User's Guide                   SC32-9057
IBM Tivoli Storage Manager for Mail: Data Protection for Microsoft Exchange Server Installation and User's Guide                  SC23-9796
Program Directory for IBM Tivoli Storage Manager for Mail (Data Protection for Lotus Domino)                                      GI11-8909
Support information
You can find support information for IBM products from a variety of sources.
- IBM Redbooks

If you still cannot find the solution to the problem, you can search forums and newsgroups on the Internet for the latest information that might help you resolve your problem. To share your experiences and learn from others in the user community, go to the Tivoli Storage Manager wiki at http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Home.
Enabled functions
Functions that were disabled in Tivoli Storage Manager V6.1.0 and V6.1.1 are now enabled in Version 6.1.2. Until Tivoli Storage Manager V6.1.2, a database that contained backup sets or tables of contents (TOCs) could not be upgraded to V6. These restrictions no longer exist.

In addition, the following commands have been enabled in Version 6.1.2:
- BACKUP NAS client command if the TOC parameter specifies PREFERRED or YES
- BACKUP NODE if the TOC parameter specifies PREFERRED or YES
- DEFINE BACKUPSET
- GENERATE BACKUPSET
- GENERATE BACKUPSETTOC
Licensing changes
Following the release of Tivoli Storage Manager Version 6.1.2, Tivoli Storage Manager Version 6.1.0 will no longer be available for download or purchase. Because of this unique circumstance, certain 6.1.2 packages will be available with a license module. See the following information for details on how this situation affects your environment.

Existing Version 6.1.0 and 6.1.1 users
   If you have installed version 6.1.0 and are using a version 6.1.0 license, you can download the 6.1.2 package from the Service FTP site. You can install the 6.1.2 package using the instructions in Installing a Tivoli Storage Manager fix pack.

Version 5 users
   If you have not yet installed a version of the V6.1 server, you must upgrade directly to version 6.1.2. Version 6.1.2 is available with a license module from Passport Advantage or from your Tivoli Storage Manager sales representative. You can upgrade from V5 to V6.1.2 using the instructions in Upgrading the server.

New users
   Version 6.1.2 is available from Passport Advantage or from your Tivoli Storage Manager sales representative. You can install version 6.1.2 using the instructions in Installing Tivoli Storage Manager.
Tivoli Storage Manager Version 6.1.0 requires the installation of StorageTek Library Attach software to utilize Sun StorageTek Automated Cartridge System Library Software (ACSLS) functions for the Windows operating system. Support for ACSLS library functions is now available for both 32-bit and 64-bit Windows operating systems in fix pack level 6.1.2.
The tsmdlst utility is now available for AIX beginning in Tivoli Storage Manager fix pack level 6.1.2. Use the tsmdlst utility to obtain information for AIX operating systems about medium-changer, tape, and optical devices controlled by the Tivoli Storage Manager device driver. With this utility you can obtain usage information, device names, serial numbers, and other device information.
The number of devices per driver is extended for AIX. The Tivoli Storage Manager device driver can now configure up to 1024 devices for each driver.
The V6.1.2 passthru driver for HP-UX 11i v3 IA64 replaces the V6.1 passthru driver for HP-UX 11i v3 IA64. The passthru driver in V6.1.2 lets you configure 32 LUNs for each port. If you are using the V6.1 passthru driver, reconfigure existing devices by running the autoconf configuration script after installing the Tivoli Storage Manager server. You must also load the esctl, estape, eschgr, and esdisk drivers
into the HP-UX kernel. The passthru driver is packaged as part of the Tivoli Storage Manager server.
Support for SAN discovery functions is now available for the Linux on zSeries operating system beginning in Tivoli Storage Manager fix pack level 6.1.2.
A new server option is available for SAN discovery functions beginning in Tivoli Storage Manager fix pack level 6.1.2. The SANDISCOVERYTIMEOUT option specifies the amount of time allowed for host bus adapters to respond when they are queried by the SAN discovery process.
A database containing backup sets or tables of contents (TOCs) cannot be upgraded to V6.1.0 or 6.1.1. The database upgrade utilities check for defined backup sets and existing TOCs. If either exists, the upgrade stops and a message is issued saying that the upgrade is not possible at this time. In addition, any operation on a V6.1 server that tries to create or load a TOC fails. When support is restored by a future V6.1 fix pack, the database upgrade and all backup set and TOC operations will be fully enabled. In the meantime, the following commands have been disabled:
- BACKUP NAS client command if the TOC parameter specifies PREFERRED or YES
- BACKUP NODE if the TOC parameter specifies PREFERRED or YES
- DEFINE BACKUPSET
- GENERATE BACKUPSET
- GENERATE BACKUPSETTOC
A predefined maintenance script is one that is generated through a wizard. This script contains standard commands that cannot be altered. A predefined script can only be modified in the wizard. A custom maintenance script is created using the Administration Center maintenance script editor. To have more control of your maintenance tasks, you can modify the commands that you specify. You can also use the editor to update your custom maintenance script.
Data deduplication
Data deduplication is a method of eliminating redundant data in sequential-access disk (FILE) primary, copy, and active-data storage pools. One unique instance of the data is retained on storage media, and redundant data is replaced with a pointer to the unique data copy. The goal of deduplication is to reduce the overall amount of time that is required to retrieve data by letting you store more data on disk, rather than on tape.

Data deduplication in Tivoli Storage Manager is a two-phase process. In the first phase, duplicate data is identified. During the second phase, duplicate data is removed by certain server processes, such as reclamation processing of storage-pool volumes. By default, a duplicate-identification process begins automatically after you define a storage pool for deduplication. (If you specify a duplicate-identification process when you update a storage pool, it also starts automatically.) Because duplicate identification requires extra disk I/O and CPU resources, Tivoli Storage Manager lets you control when identification begins as well as the number and duration of processes.

You can deduplicate any type of data except encrypted data. You can deduplicate client backup and archive data, Tivoli Data Protection data, and so on. Tivoli Storage Manager can deduplicate whole files as well as files that are members of an aggregate. You can deduplicate data that has already been stored. No additional backup, archive, or migration is required. For optimal efficiency when deduplicating, upgrade to the version 6.1 backup-archive client.

Restriction: You can use the data-deduplication feature with Tivoli Storage Manager Extended Edition only.
Storage devices
New device support and other changes to storage devices are available in Tivoli Storage Manager Version 6.1.
Tivoli Storage Manager Version 6.1.0 requires the installation of StorageTek Library Attach software to utilize Sun StorageTek Automated Cartridge System Library Software (ACSLS) functions for the Windows operating system. Support for ACSLS library functions is only available on 32-bit Windows operating systems in version 6.1.0.
Passthru device driver for HP-UX 11i v2 and v3 on the IA64 architecture
The HP-UX passthru device driver replaces the Tivoli Storage Manager device driver tsmscsi and is packaged as part of the Tivoli Storage Manager server. The passthru driver can be used with versions 2 and 3 of the HP-UX 11i operating system. If you are running either of these versions, reconfigure existing devices using the autoconf configuration script after installing Tivoli Storage Manager.
Tips:
- The passthru driver, Tivoli Storage Manager server, and storage agent packages are available in 64-bit mode only.
- In V6.1, the passthru driver lets you configure eight LUNs for each port. The passthru driver in V6.1.2 lets you configure 32 LUNs for each port.
With Tivoli Storage Manager, you can now use HP and Quantum DAT160 (DDS6) tape drives and media. New recording formats are available for the 4MM device type.
Support for Sun StorageTek T10000 drives, T10000B drives, and T10000 media
With Tivoli Storage Manager, you can now use Sun StorageTek T10000 drives, T10000B drives, and T10000 media. New recording formats are available for the ECARTRIDGE device type. Tivoli Storage Manager supports Volsafe media with the Sun StorageTek T10000 and T10000B drives.
Server database
Tivoli Storage Manager version 6.1 provides a new server database. Advantages include automatic statistics collection and database reorganization, full-function SQL queries, and elimination of the need for offline audits of the database. Upgrading to V6.1 requires that data in a current Tivoli Storage Manager server database be extracted and then inserted into the new database structure. Tivoli Storage Manager provides utilities to perform the process.
With Tivoli Storage Manager you can create SnapMirror to Tape images of file systems on NetApp file servers. SnapMirror to Tape provides an alternative method for backing up very large NetApp file systems. Because this backup method has limitations, use this method when copying very large NetApp file systems to secondary storage for disaster recovery purposes.
Installing the Tivoli Storage Manager reporting and monitoring feature directly on a Tivoli Storage Manager Sun Solaris server is not supported. You can monitor and report on Sun Solaris Tivoli Storage Manager servers by creating a monitoring agent instance for these servers on an AIX, Linux, or Windows IBM Tivoli Monitoring server.
DBMEMPERCENT
DBMEMPERCENT sets a limit on the percentage of system memory that is used for the database manager. By default, the percentage of the virtual address space that is dedicated to the database manager processes is set to 70 to 80% of system RAM. To change this setting to a value from 10 to 99 percent, modify the DBMEMPERCENT server option. Ensure that the value allows adequate memory for both the Tivoli Storage Manager server and any other applications that are running on the system. The default value is AUTO.

It is generally not necessary to change this setting on a system that is dedicated to a single Tivoli Storage Manager server. If other applications on the system require significant amounts of memory, changing this setting to an appropriate value might reduce paging and improve system performance. For systems running multiple Tivoli Storage Manager servers, set this option for each server. For example, you could set it to 25% for each of three servers on a system. Each server could also have a different value for this setting, as appropriate for the workload on that server.
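For example, on a system running three Tivoli Storage Manager servers, each server's options file might contain a line like the following (the value is illustrative; choose one that fits each server's workload):

```
DBMEMPERCENT 25
```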
DISKSTGPOOLMEMSIZE
The DISKSTGPOOLMEMSIZE server option specifies the size of the cache that the server can use to manage operations for storage pools with the DISK device type. The more memory available, the less disk storage pool metadata must be retrieved from the database server. Performance might be improved during operations that store data into or delete data from disk storage pools.

The DISKSTGPOOLMEMSIZE server option specifies, in megabytes, the size of the memory available to manage disk storage pools. Each megabyte can manage 32 gigabytes of disk storage. This option should be large enough to accommodate the maximum amount of data expected to be stored in or deleted from disk storage pools per second. For example, if a maximum of 96 gigabytes of data per second is expected to be stored in or deleted from disk storage pools, a size of 3 is recommended. If this option is not specified, it defaults to 80, which can manage 2560 gigabytes of disk storage. For 32-bit servers, it defaults to 20, which can manage 640 gigabytes of disk storage.
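The sizing rule above (1 MB of cache per 32 GB of data stored or deleted) can be sketched as a small calculation. This is only an illustration of the arithmetic, not a product utility:

```python
import math

def diskstgpool_mem_size(gb_managed: float) -> int:
    """Suggest a DISKSTGPOOLMEMSIZE value in megabytes.

    Each megabyte of cache manages 32 GB of data stored into or
    deleted from DISK storage pools.
    """
    return max(1, math.ceil(gb_managed / 32))

print(diskstgpool_mem_size(96))    # 3, matching the example in the text
print(diskstgpool_mem_size(2560))  # 80, the default for 64-bit servers
```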
EXPINTERVAL
Inventory expiration removes client backup and archive file copies from the server. EXPINTERVAL specifies the interval, in hours, between automatic inventory expiration runs by the Tivoli Storage Manager server. The default is 24.

Backup and archive copy groups can specify the criteria that make copies of files eligible for deletion from data storage. However, even when a file becomes eligible for deletion, the file is not deleted until expiration processing occurs. If expiration processing does not occur periodically, storage pool space is not reclaimed from expired client files, and the Tivoli Storage Manager server requires increased disk storage space.

Expiration processing is CPU and database I/O intensive. If possible, run it when other Tivoli Storage Manager processes are not occurring. To do so, set EXPINTERVAL to 0 and either schedule expiration to occur once each day or manually start the process with the EXPIRE INVENTORY server command. Expiration processing can be scheduled by defining an administrative schedule. When using the DURATION parameter on an administrative schedule, periodically check that expiration is actually completing within the specified time. This is the recommended setting:
EXPINTERVAL 0
This setting disables automatic expiration processing. Use an administrative schedule to run expiration at an appropriate time each day.
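For example, an administrative schedule along the following lines (schedule name and times are illustrative) runs expiration once each day during a quiet period:

```
DEFINE SCHEDULE EXPIRE_NIGHTLY TYPE=ADMINISTRATIVE CMD="EXPIRE INVENTORY DURATION=120" ACTIVE=YES STARTTIME=03:00 PERIOD=1 PERUNITS=DAYS
```

The DURATION parameter of EXPIRE INVENTORY limits how long each run processes; verify periodically that expiration actually completes within that window.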
MAXSESSIONS
The MAXSESSIONS option specifies the maximum number of simultaneous client sessions that can connect with the Tivoli Storage Manager server. The default value is 25 client sessions. The minimum value is 2 client sessions. The maximum value is limited only by available virtual memory or communication resources. Limiting the number of client sessions can improve server performance, but it reduces the availability of Tivoli Storage Manager services to the clients.

A typical production Tivoli Storage Manager server could have the MAXSESSIONS parameter set to 100 or greater.
RESTOREINTERVAL
The RESTOREINTERVAL option specifies how long, in minutes, a restartable restore session can remain in the database before it is eligible to be expired. Restartable restores allow a restore to continue after an interruption without starting from the beginning, reducing duplicate effort or the manual work of determining where a restore process stopped.

The minimum value is 0. The maximum is 10080 (one week). The default is 1440 (24 hours). If the value is set to 0 and the restore is interrupted or fails, the restore is still put in the restartable state, but it is immediately eligible to be expired.

Restartable restore sessions consume resources on the Tivoli Storage Manager server, so do not keep these sessions any longer than they are needed. Tune the RESTOREINTERVAL option to your environment.
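Restartable sessions can also be inspected and removed manually with administrative commands. This is a sketch; check the Administrator's Reference for the exact syntax on your release:

```
QUERY RESTORE      /* list restartable restore sessions */
CANCEL RESTORE ALL /* remove all restartable restore sessions */
```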
TCPNODELAY
The TCPNODELAY server option specifies whether the server allows data packets that are smaller than the maximum transmission unit (MTU) to be sent out immediately over the network. When TCPNODELAY is set to NO, the server buffers data packets that are smaller than the MTU:
- Buffering can improve network utilization.
- Buffering introduces a delay that can greatly reduce session throughput.

When set to YES, the option disables the TCP/IP Nagle algorithm, which allows data packets smaller than the MTU to be sent out immediately. Setting this option to YES might improve performance in higher-speed networks. The default is YES. This is the recommended setting:
TCPNODELAY YES
Note: This option also exists on the Tivoli Storage Manager client.
TCPWINDOWSIZE
The TCPWINDOWSIZE server option specifies the amount of receive data, in kilobytes, that can be in transit on a TCP/IP connection at one time. The TCPWINDOWSIZE server option applies to backups and archives; the TCPWINDOWSIZE client option applies to restores and retrieves.

The sending host cannot send more data until it receives an acknowledgement and a TCP receive-window update. Each TCP packet contains the advertised TCP receive window for the connection. A larger window allows the sender to continue sending data and might improve communication performance, especially on fast networks with high latency.

The TCPWINDOWSIZE option overrides the operating system's TCP send and receive spaces. In AIX, for instance, these parameters are tcp_sendspace and tcp_recvspace, and are set as options of the no command. For Tivoli Storage Manager, the default is 63 KB and the maximum is 2048 KB. Specifying TCPWINDOWSIZE 0 results in Tivoli Storage Manager using the operating system default. This is not recommended, because the optimal setting for Tivoli Storage Manager might not be the same as the optimal setting for other applications.

The TCPWINDOWSIZE option specifies the size of the TCP sliding window for all clients and all servers. On the server, it applies to all sessions, so raising TCPWINDOWSIZE can increase memory use significantly when there are multiple concurrent sessions. A larger window size can improve communication performance but uses more memory: it enables multiple frames to be sent before an acknowledgment is obtained from the receiver. If long transmission delays are being observed, increasing the TCPWINDOWSIZE might improve throughput.

For all platforms, RFC 1323 support must be enabled to use window sizes larger than 64 KB - 1:
- AIX: Use no -o rfc1323=1
- HP-UX: Using a window size greater than 64 KB - 1 automatically enables large-window support.
- Sun Solaris 10: Use ndd -set /dev/tcp tcp_wscale_always 1. This should be enabled by default.
- Linux: Should be on by default for recent kernel levels. Check with cat /proc/sys/net/ipv4/tcp_window_scaling. Recent Linux kernels use autotuning, and changing TCP values might have a negative effect on autotuning, so make changes with caution.
- Windows XP and 2003: With regedit, add or modify the following registry name/value pair under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]: Tcp1323Opts, REG_DWORD, 3. Attention: Before modifying this registry name and value pair, back up the entire registry.

This is the recommended setting:
TCPWINDOWSIZE 63
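Whether 63 KB is enough depends on the bandwidth-delay product of the link: the amount of data that must be in flight to keep the pipe full. The following back-of-envelope sketch (not part of the product; the link numbers are illustrative) shows why high-bandwidth, high-latency links benefit from larger windows:

```python
def bdp_kilobytes(bandwidth_mbit_s: float, rtt_ms: float) -> float:
    """Bandwidth-delay product in KB: data in flight needed to keep the link busy."""
    bits_in_flight = bandwidth_mbit_s * 1_000_000 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1024  # bits -> bytes -> kilobytes

# A 1 Gb/s link with a 5 ms round-trip time:
print(round(bdp_kilobytes(1000, 5)))  # 610 KB: well above 63 KB, so a larger window helps

# A 100 Mb/s LAN with a 1 ms round-trip time:
print(round(bdp_kilobytes(100, 1)))   # 12 KB: the 63 KB default is already sufficient
```

When the bandwidth-delay product exceeds 64 KB - 1, window scaling (RFC 1323) must be enabled on both ends, as described above.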
TXNGROUPMAX
The TXNGROUPMAX server option specifies the number of objects that are transferred between a client and server in a single transaction. The minimum value is 4 objects, and the maximum value is 65,000 objects. The default value is 4096 objects. An object is a file or a directory.

Using a larger value for this option can affect the performance of client backup, archive, restore, and retrieve operations:
1. Increasing the value of the TXNGROUPMAX option can improve throughput for operations storing data directly to tape, especially when storing a large number of objects.
2. If you increase the value of the TXNGROUPMAX option by a large amount, watch for possible effects on the recovery log. A larger value for the TXNGROUPMAX option can result in increased utilization of the recovery log, as well as an increased length of time for a transaction to commit. If the effects are severe enough, they can lead to problems with operation of the server. For more information on managing the recovery log, see the Administrator's Guide.
3. A larger value of the TXNGROUPMAX option can also increase the number of objects that must be resent if the transaction is stopped because an input file changed during backup or because a new storage volume was required. The larger the value of the TXNGROUPMAX option, the more data must be resent.
4. Increasing the TXNGROUPMAX value affects the responsiveness of stopping the operation, and the client might have to wait longer for the transaction to complete.

You can override the value of this option for individual client nodes. See the TXNGROUPMAX parameter in the REGISTER NODE and UPDATE NODE commands.

This option is related to the TXNBYTELIMIT option in the client options file. TXNBYTELIMIT controls the number of bytes, as opposed to the number of objects, that are transferred between transaction commit points. At the completion of transferring an object, the client commits the transaction if the number of bytes transferred during the transaction reaches or exceeds the value of TXNBYTELIMIT, regardless of the number of objects transferred.

Set TXNGROUPMAX to 256 in your server options file. Settings higher than 4096 typically provide no benefit. If some clients have small files and go straight to a tape storage pool, raising TXNGROUPMAX to the higher value for those nodes can benefit them.
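For example, the server-wide setting can be overridden for a single node that sends many small files directly to tape (the node name is illustrative):

```
UPDATE NODE SMALLFILENODE TXNGROUPMAX=4096
```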
It is best to use multiple directories for the database, with up to 4 or 8 directories for a large Tivoli Storage Manager database. Each database directory should be located on a disk volume that uses separate physical disks from other database directories. The Tivoli Storage Manager server database I/O workload is spread over all directories, thus increasing the read and write I/O performance. A large number of small-capacity physical disks is better than a small number of large-capacity physical disks with the same rotation speed.

The access pattern for the active log is always sequential, so physical placement on the underlying disk is very important. It is best to isolate the log from the database and from the disk storage pools. If this cannot be done, place the log with storage pools and not with the database.

Active log mirroring provides higher reliability, but comes at a cost in performance. Locate the mirror log directory on a disk volume that uses separate physical disks from the active log by using the MIRRORLOGDIR parameter in the DSMSERV FORMAT command. After installation, change the mirror log directory location by changing the value of the MIRRORLOGDIR option in the server option file and restarting the server.
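The layout above might be put in place at server formatting time, for example (the directory paths are hypothetical, and the parameter spellings should be checked against the DSMSERV FORMAT documentation for your release):

```
dsmserv format dbdir=/tsmdb01,/tsmdb02,/tsmdb03,/tsmdb04 activelogdir=/tsmlog mirrorlogdir=/tsmmirrorlog
```

Here each of the four database directories, the active log, and the mirror log would sit on separate physical disks.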
Configuration parameters
The primary considerations for the Tivoli Storage Manager database are enough memory for the database manager server, enough physical disks to handle the I/O requirements, and enough CPUs to handle the workload.

Self-tuning memory
   For the best database manager operation, remember that data and indexes are manipulated in the database buffer pools allocated in memory. Performance is stifled by heavy paging when more buffer pool space is defined than there is real memory. Besides the buffer pools, which use the most memory, the sort list, the lock list, and the package cache are other memory segments allocated by the database manager. The database manager supported with Tivoli Storage Manager version 6.1 enables self-tuning memory by default, which automatically samples the workload and performance characteristics of the database. Using this feature, the database manager adapts the sort heap, lock list, package cache, buffer pool, and total database memory, improving performance and throughput as your environment requires. Tivoli Storage Manager mixes the database workload, from transactions recording backups to heavy query usage during restore operations.
Enough disks to handle I/O
Efficient performance relies on having enough physical disk drives to service the throughput required for the workload. The correct ratio of physical disks to CPUs in the database manager server helps to maintain good performance. One of the most CPU-intensive and database-intensive parts of the Tivoli Storage Manager server workload is inventory expiration. Provide one database directory, array, or LUN for each inventory expiration process.

CPUs to handle the workload
The power of the database manager system depends on the number and speed of its CPUs. For a balanced system, consume only approximately 80% of the available CPU capacity under normal operations.
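As a rough planning aid, the sizing guidance above can be sketched as a small check. This is illustrative only and not a Tivoli Storage Manager tool; the function names and the way the rules are encoded are assumptions based on the 80% CPU guideline and the one-directory-per-expiration-process rule.

```python
# Illustrative sizing checks (hypothetical helpers, not TSM code).
# They encode two rules from this section:
#  - run at most one inventory expiration process per database
#    directory, array, or LUN
#  - keep average CPU utilization at or below roughly 80%

def max_expiration_processes(db_directories: int) -> int:
    """One expiration process per database directory, array, or LUN."""
    return db_directories

def cpu_headroom_ok(avg_utilization_pct: float) -> bool:
    """A balanced system consumes only about 80% of available CPU."""
    return avg_utilization_pct <= 80.0

print(max_expiration_processes(4))   # 4 directories -> up to 4 processes
print(cpu_headroom_ok(75.0))         # True: within the 80% guideline
```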
Backup performance
When possible, limit the number of versions of any backup file to the minimum required. File backup performance is degraded when there are many versions of an object. Use the DEFINE COPYGROUP or UPDATE COPYGROUP command and modify the VEREXISTS parameter to control the number of versions. The default number of backup versions is 2. If the retention requirements in your environment differ among client machines, use different copy groups rather than taking the lowest common denominator. For example, if your accounting machines require records to be kept for seven years, but other machines need data kept for only two years, do not specify seven years for all. Instead, create two separate copy groups. Not only are backups potentially faster, but you also consume less storage because you are not keeping data that you do not need.
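For example, the two retention requirements above could be expressed as two separate copy groups. The domain, policy set, and management class names below are hypothetical; the DEFINE COPYGROUP parameters shown (VEREXISTS, RETEXTRA, RETONLY) are standard, but verify the values against your own retention requirements.

```
define copygroup acct_dom acct_set acct_mc standard type=backup verexists=2 retextra=2555 retonly=2555
define copygroup std_dom std_set std_mc standard type=backup verexists=2 retextra=730 retonly=730
```

Here 2555 days approximates seven years of retention for the accounting machines and 730 days approximates two years for the others.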
See EXPINTERVAL on page 4 for information about the EXPINTERVAL server option.
LAN-free backup
Using LAN-free backup can improve performance. It requires the Tivoli Storage Manager storage agent on the client for LAN-free backups to SAN-attached tape, and Tivoli SANergy if backups are sent to FILE volumes on SAN-attached disk.
v Back up and restore to tape or disk using the SAN. The advantages are:
  Metadata is sent to the server using the LAN while client data is sent over the SAN.
  The Tivoli Storage Manager server is freed from handling data, leading to better scalability.
  Potentially faster than LAN backup and restore.
  Better for large file workloads and databases (Data Protection). Small file workloads have bottlenecks other than data movement.
v Ensure that there are sufficient data paths to tape drives.
v Do not use LAN-free backup if you bundle more than 20 separate dsmc commands in a script; dsmc start/stop overhead is higher due to tape mounts. Use the file list feature to back up a list of files instead.
Modifying the IBM Tivoli Monitoring environment file for reporting performance
Using the environment file that was automatically created for you when you added a Tivoli Storage Manager monitoring agent instance, you can improve reporting performance by modifying the environment variables.

The Windows environment file is named KSKENV_xxx, where xxx is the instance name of the monitoring agent you created. This file is located in the IBM Tivoli Monitoring installation directory (for example, \IBM\ITM\TMAITM6). The AIX and Linux environment files are named sk_xxx.config, where xxx is the instance name of the monitoring agent you created. This file is located in the /opt/tivoli/tsm/reporting/itm/config directory on both Linux and AIX systems.

The following list contains the environment variables that you can change to modify the performance of your monitoring agent. Use any text editor to edit the file.

KSK_PREFETCH_MINUTES, Default value=30
The delay, in minutes, between queries of the Tivoli Storage Manager server for certain attribute groups. You can modify this variable in the following ways:
v Make this value larger to reduce the number of times that the agent queries the Tivoli Storage Manager server over a 24 hour period.
v Reduce the value to increase the frequency of Tivoli Storage Manager server queries.
This value applies to all attribute groups collected by the agent.

KSK_PREFETCH_RETENTION_DAYS, Default value=2
The number of days that the pre-fetch data is stored in the pre-fetch cache. The pre-fetch cache is for short-term storage of Tivoli Storage Manager data that is transferred later to the data warehouse. Two days is normally a sufficient amount of time for this variable.

KSK_MAXIMUM_ROWS_RETURNED, Default value=2500
The maximum number of rows that are returned at any one time to the IBM Tivoli Monitoring server. Changing this value could cause your Tivoli Enterprise Portal to receive so many rows of data that it will not be able to
display them. The speed of the processor and the amount of memory installed in the IBM Tivoli Monitoring server dictate the value of this variable.
Important: Do not increase this variable to a value greater than 3500 rows, to prevent data overflow.

KSK_APIHRLIMIT, Default value=1
The age, in hours, of the data that is collected by the Tivoli Storage Manager and common reporting agent. Do not increase this value unless you are running the agent on a very high performing server.

KSK_APIBUFFER, Default value=50 000
The maximum number of rows that are retrieved from the Tivoli Storage Manager database at any one time. This value should be set to 50 000 rows or less. If the total number of rows defined by this value exceeds the total number of rows in the Tivoli Storage Manager database, no data is returned.

KSK_APITIMEOUT, Default value=480
The amount of time, in minutes, before the Tivoli Storage Manager Administrator's API times out.

In addition to the agent variables that you can change in the environment variable file, there are two other values that are important when tuning the reporting servers. These values are modified in the Tivoli Enterprise Portal history configuration panels:

Warehouse interval, Default value=daily
Specifies how often the data that has been collected from IBM Tivoli Monitoring is sent to the Tivoli Data Warehouse for historical data storage. Possible values are hourly, daily, and off.

Collection interval, Default value=15
Specifies the length of time between requests from IBM Tivoli Monitoring to the Tivoli Storage Manager Tivoli Common Reporting data collection agent. This value should be twice the value of the KSK_PREFETCH_MINUTES variable. Possible values are 1, 5, 15, or 30 minutes, hourly, or daily.
Tip: Keep the KSK_MAXIMUM_ROWS_RETURNED and KSK_APIBUFFER variables as low as possible to prevent data overflows on the Tivoli Storage Manager Tivoli Common Reporting agent or the Tivoli Enterprise Portal.
v When altering the read ahead parameter, you must also alter the maxfree parameter so that there is enough free memory to store the read ahead data. v The following equation must be true:
minfree + maxpgahead <= maxfree
To calculate minfree and maxfree, use these formulas:
  minfree = 120 x number of processors (or the default, if larger)
  maxfree = (120 + maxpgahead (or j2_maxPageReadAhead)) x number of processors (or the default, if larger)
This does not improve read performance on raw logical volumes or JFS2 volumes on the Tivoli Storage Manager server; the server uses direct I/O on JFS2 file systems.
v Using raw logical volumes for the server can cut CPU consumption, but doing so might be slower during storage pool migration due to lack of read ahead.
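The formulas above can be checked with a short calculation. This is a sketch, not AIX code: it evaluates the two formulas and the constraint for a hypothetical system, and the fallback default values (960 and 1088) are assumptions for illustration.

```python
# Evaluate the minfree/maxfree formulas from this section for a
# hypothetical AIX system. The default fallbacks are assumed values.

def tuned_minfree(processors: int, default: int = 960) -> int:
    # minfree = 120 x number of processors (or the default, if larger)
    return max(120 * processors, default)

def tuned_maxfree(processors: int, maxpgahead: int, default: int = 1088) -> int:
    # maxfree = (120 + maxpgahead) x number of processors
    # (or the default, if larger)
    return max((120 + maxpgahead) * processors, default)

procs, maxpgahead = 16, 8
minfree = tuned_minfree(procs)
maxfree = tuned_maxfree(procs, maxpgahead)
# The constraint minfree + maxpgahead <= maxfree must hold.
print(minfree, maxfree, minfree + maxpgahead <= maxfree)  # 1920 2048 True
```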
HP-UX server
Use a raw partition for disk storage pools on an HP-UX Tivoli Storage Manager server. Using a raw partition can improve performance: raw partition volumes offer better backup and restore throughput than VxFS volumes on HP-UX.
Linux server
Disable any unneeded daemons (services). Most enterprise distributions come with many features, but most of the time only a small subset of these features is used. For example, TCP/IP data movement can be blocked or slowed down significantly by the internal firewall in SUSE 9 x86_64. It can be stopped with /etc/init.d/SuSEfirewall2_setup stop.
because the data is not first copied to file system buffers. We recommend using VxFS file systems mounted with the direct I/O option (mincache=direct).
v When UFS file system volumes are used, mount these file systems using the forcedirectio flag. If the file system is mounted using forcedirectio, data is transferred directly between user address space and the disk. If the file system is mounted using noforcedirectio, data is buffered in kernel address space when data is transferred between user address space and the disk. The forcedirectio option benefits only large sequential data transfers. The default behavior is noforcedirectio.
Windows server
There are a number of actions that can improve performance for a Tivoli Storage Manager server running in a Windows environment.
v Use a 64-bit version of Windows Server 2003 or Windows Server 2008 to realize the following benefits: a larger virtual memory address space, support for more physical RAM, and improved performance.
v Use the NTFS file system for the disk storage required by the Tivoli Storage Manager server, including the database directories, active log directory, archive log directory, and storage pool volumes. NTFS has the following advantages:
  Support for larger disk partitions
  Better data recovery
  Better file security
  Much faster formatting of storage pool volumes
v Do not use NTFS file compression on disk volumes that are used by the Tivoli Storage Manager server, because of the potential for performance degradation.
v For optimal Tivoli Storage Manager for Windows server performance with respect to Windows real memory usage, use the server property setting Maximize Throughput for Network Applications. This setting gives priority to application requests for memory over requests from the Cache Manager for file system cache. It makes the most difference in performance on systems that are memory constrained.
v For optimal backup and restore performance when using a local client on a Windows system, use the shared memory communication method by including the COMMMETHOD SHAREDMEM option in both the server options file and the client options file.
v Other actions that can affect Tivoli Storage Manager client and server performance:
  Be aware that antivirus software can negatively affect backup performance.
  Disable or do not install unused services.
  Disable or do not install unused network protocols.
  Give preference to background application performance.
  Avoid screen savers.
  Ensure that the paging file is not fragmented.
  Ensure that device drivers are current, especially for new hardware.
v Throughput for backup and restore of small file workloads is basically independent of network type, as long as the network remains unsaturated, and propagation delays are not excessive due to intervening routers or switches. v Gigabit Ethernet performance is highly dependent on the quality of the Ethernet chipset and the type of bus used. In addition, taking advantage of certain chipset features, such as jumbo frames and other TCP offload features, can have a large impact on performance. Therefore, performance can vary widely. On some chipsets and machines only 25% efficiency may be possible while on others 90% is easily reached.
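As a rough illustration of how chipset efficiency affects achievable throughput, the calculation below converts a link speed and an efficiency factor into an effective data rate. The efficiency figures are the examples from the text above, not measurements, and the helper is hypothetical.

```python
# Effective throughput of a network link at a given efficiency.
# 1 Gbit/s is 125 MB/s of raw capacity (decimal megabytes); real
# chipsets reach some fraction of that (25%-90% in the examples
# given in this section).

def effective_mb_per_s(link_gbits: float, efficiency: float) -> float:
    raw_mb_per_s = link_gbits * 1000 / 8  # bits to decimal megabytes
    return raw_mb_per_s * efficiency

print(effective_mb_per_s(1.0, 0.25))  # poor chipset/bus: 31.25 MB/s
print(effective_mb_per_s(1.0, 0.90))  # good chipset/bus: 112.5 MB/s
```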
You can use the FORMAT option of the DEFINE DEVCLASS command to specify the appropriate recording format to be used when writing data to sequential access media. The default is DRIVE, which specifies that Tivoli Storage Manager selects the highest format that can be supported by the sequential access drive on which a volume is mounted. This setting usually allows the tape control unit to perform compression.
Tip: Avoid specifying the DRIVE value when a mixture of devices is used in the same library. For example, if you have drives that support recording formats superior to those of other drives in the library, do not specify the FORMAT=DRIVE option. Refer to the appropriate Tivoli Storage Manager Administrator's Guide for more information.
If you do not use compression at the client and your data is compressible, you should achieve higher system throughput if you use compression at the tape control unit. Refer to the appropriate Tivoli Storage Manager Administrator's Guide for more information concerning your specific tape drive. If you compress the data at the client, we recommend that you not use compression at the tape drive. In this case, you might lose up to 10-12% of the tape media capacity.
Client options
TXNBYTELIMIT 2097152
If, on average, Tivoli Storage Manager clients have files smaller than 100 KB, it is recommended that these clients back up to a disk storage pool for later migration to tape.
Chapter 2. IBM Tivoli Storage Manager server performance tuning
Busses
If your server has multiple PCI busses, distribute high-throughput adapters among the different busses. For systems with busses that have different speeds, match the adapter to the appropriate bus based on speed. For example, if you are going to do a lot of backups to disk, you probably do not want your network card and disk adapter on the same PCI bus. Theoretical limits of busses are just that, theoretical, though you should be able to get close in most cases. As a general rule, it is best to have only one or two tape drives per SCSI bus and one to four tape drives per fibre host bus adapter (HBA). Mixing tape and disk on the same fibre HBA is not recommended. Even if a given tape drive is slower than the fibre channel SAN being used, tape drive performance is usually better on the faster interfaces, because the individual blocks are transferred with lower latency. This allows Tivoli Storage Manager and the operating system to send the next block more quickly. For example, an LTO-4 drive performs better on a 4 Gbit SAN than on a 2 Gbit SAN, even though the drive is only capable of speeds under 2 Gbit.
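The guidance on distributing adapters can be sketched as a simple oversubscription check: sum the peak rates of the adapters sharing a bus and compare the total against the bus's usable bandwidth. The numbers and the 80% usable-fraction hedge below are hypothetical; real PCI throughput falls short of the theoretical limit.

```python
# Illustrative PCI bus oversubscription check (hypothetical numbers).

def bus_oversubscribed(adapter_mb_per_s, bus_mb_per_s, usable_fraction=0.8):
    """True if the adapters' combined peak rate exceeds the usable
    bus bandwidth. usable_fraction hedges the gap between theoretical
    and achievable bus throughput."""
    return sum(adapter_mb_per_s) > bus_mb_per_s * usable_fraction

# A gigabit NIC (~125 MB/s) and a disk adapter (~320 MB/s) sharing one
# 533 MB/s PCI-X bus: 445 MB/s exceeds ~426 MB/s usable.
print(bus_oversubscribed([125, 320], 533))  # True -> split the adapters
```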
COMPRESSALWAYS
The COMPRESSALWAYS option specifies whether to continue compressing an object if it grows during compression, or to resend the object uncompressed. This option is used with the COMPRESSION option and applies to the archive, incremental, and selective commands. This option can also be defined on the server. If COMPRESSALWAYS YES (the default) is specified, compression continues even if the file size increases. To stop compression if the file size grows, and resend the file uncompressed, specify COMPRESSALWAYS NO. This option controls compression only if your administrator specifies that your client node determines the selection. To reduce the impact of retries, use COMPRESSALWAYS YES.
It is better to identify common types of files that do not compress well and list these on one or more client option EXCLUDE.COMPRESSION statements. Files that contain large amounts of graphics, audio, or video files and files that are already encrypted do not compress well. Even files that seem to be mostly text data (for example, Microsoft Word documents) can contain a significant amount of graphic data that might cause the files to not compress well. Using Tivoli Storage Manager client compression and encryption for the same files is valid. The client first compresses the file data and then encrypts it, so that there is no loss in compression effectiveness due to the encryption, and encryption is faster if there is less data to encrypt. For example, to exclude objects that are already compressed or encrypted, enter the following statements:
exclude.compression ?:\...\*.gif
exclude.compression ?:\...\*.jpg
exclude.compression ?:\...\*.zip
exclude.compression ?:\...\*.mp3
exclude.compression ?:\...\*.cab
COMPRESSION
The COMPRESSION client option specifies whether compression is enabled on the Tivoli Storage Manager client. For optimal backup and restore performance with a large number of clients, consider using client compression. Compressing the data on the client reduces demand on the network and the Tivoli Storage Manager server. The reduced amount of data on the server continues to provide performance benefits whenever this data is moved, such as for storage pool migration and storage pool backup. However, client compression significantly reduces the performance of each client, and the reduction is more pronounced on the slowest client systems. For optimal backup and restore performance when using fast clients and a heavily loaded network or server, use client compression. For optimal backup and restore performance when using a slow client, or a lightly loaded network or server, do not use compression. However, be sure to consider the trade-off of greater storage requirements on the server when not using client compression. The default for the COMPRESSION option is NO. For maximum performance with a single fast client, fast network, and fast server, turn compression off. Two alternatives exist to using client compression:
v If you are backing up to tape, and the tape drive supports its own compression, use the tape drive compression. See Tuning tape drive performance on page 21 for more information.
v Do not use compression if a client has built-in file compression support. Compression on these clients does reduce the amount of data backed up to the server. NetWare and Windows have optional built-in file compression.
Compression can cause severe performance degradation when there are many retries due to failed compression attempts. Compression fails when the compressed file is larger than the original. The client detects this, stops the compression, fails the transaction, and resends the entire transaction uncompressed. This occurs
because the file type is not suitable for compression or the file is already compressed (zip files, tar files, and so on). Short of turning off compression, there are two options you can use to reduce or eliminate retries due to compression:
v Use the COMPRESSALWAYS option. This option eliminates retries due to compression.
v Use the EXCLUDE.COMPRESSION option in the client options file. This option disables compression for specific files or sets of files (for example, zip files or jpg files). Look in the client output (dsmsched.log) for files that are causing compression retries, and then filter those file types.
These are the recommended settings:
v For a single fast client, fast network, and fast server
COMPRESSION NO
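The retry behavior described above, where a file grows during compression, can be demonstrated with a short sketch using zlib. This mimics the client's decision; it is not the actual Tivoli Storage Manager implementation, and the helper name is invented.

```python
# Demonstrate why already-compressed or random data triggers
# compression retries: the "compressed" result is larger than the input.
import os
import zlib

def grows_when_compressed(data: bytes) -> bool:
    """True if compressing the data makes it larger (a retry trigger)."""
    return len(zlib.compress(data)) > len(data)

text_like = b"the quick brown fox jumps over the lazy dog\n" * 500
random_like = os.urandom(50_000)  # models an already-compressed file

print(grows_when_compressed(text_like))    # False: compresses well
print(grows_when_compressed(random_like))  # True: would cause a retry
```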
DISKBUFFSIZE
The DISKBUFFSIZE client option specifies the maximum disk I/O buffer size (in kilobytes) that the client can use when reading files. Optimal backup, archive, or HSM migration client performance can be achieved if the value for this option is equal to or smaller than the amount of file read-ahead provided by the client file system. A larger buffer requires more memory and might not improve performance. The default value is 32 for all clients except AIX. For AIX, the default value is 256 except when ENABLELANFREE YES is specified; in that case, the default value is 32. API client applications have a default value of 1023, except for Windows API client applications (version 5.3.7 and later), which have a default value of 32. The recommended setting is to use the default value for the client platform.
MEMORYEFFICIENTBACKUP
The MEMORYEFFICIENTBACKUP client option specifies the method Tivoli Storage Manager uses during incremental backups to determine which objects are new or changed and must be backed up, and which objects are deleted and must be expired. The memory required by the client depends on the method used and the number of objects in the client file systems. An object is a file or a directory. To choose a value for the MEMORYEFFICIENTBACKUP option, begin by determining the number of objects in the client file systems and rounding that number up to the nearest million. For example, if your client file systems have 4,200,000 objects, round up to 5,000,000 and use 5 as the value of numobjs. Follow the steps below in sequence, and use the option parameter (YES, NO, or DISKCACHEMETHOD) for the first step that applies. For example, if a 64-bit backup-archive client has 2 GB of real memory available for use by the client process and the value of numobjs is 5, you would specify NO.
Chapter 3. IBM Tivoli Storage Manager client performance tuning
1. If the client system is using the 32-bit backup-archive client, and numobjs is less than or equal to 5, and at least numobjs x 300 MB of real memory is available for use by the client process, specify NO, which is the default.
2. If the client system is using the 64-bit backup-archive client, and at least numobjs x 300 MB of real memory is available for use by the client process, specify NO, which is the default.
3. If the client system has at least the following amount of fast temporary disk storage available for use by the client process, specify DISKCACHEMETHOD:
v UNIX or Linux: numobjs x 300 MB
v Windows: numobjs x 600 MB
v Mac OS X: numobjs x 1200 MB
By default, this disk storage space must be available on the volumes being backed up, or you must specify the DISKCACHELOCATION option with the path to the available space.
4. If none of the above apply, specify YES.
Note: Using MEMORYEFFICIENTBACKUP YES can increase the work required on the Tivoli Storage Manager server. The result can be a significant increase in the incremental backup elapsed time, particularly if this option is used for a large number of clients, each with a large number of directories.
Here are some alternatives to using the MEMORYEFFICIENTBACKUP option to reduce client memory consumption:
v Use the client include/exclude options to back up only what is necessary.
v Use journal-based incremental backup (Windows NTFS and AIX JFS2 clients only).
v Use the VIRTUALMOUNTPOINT option (UNIX only) to define multiple virtual mount points within a single file system, and back up these mount points sequentially.
v Spread the data across multiple file systems and back up these file systems sequentially.
v Use the image backup function to back up the entire volume. This might require less time and resources than using incremental backup on some file systems with a large number of small files.
The recommended setting is to use the default:
MEMORYEFFICIENTBACKUP NO
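The selection steps above can be sketched as a function. This is one illustrative reading of the rules, not IBM-supplied logic; the function and parameter names are invented.

```python
# Illustrative MEMORYEFFICIENTBACKUP selection, following the numbered
# steps in this section. All names here are hypothetical.
import math

def memoryefficientbackup_setting(objects, is_64bit, free_memory_mb,
                                  free_disk_mb, platform):
    """platform is one of 'unix', 'windows', 'macos'."""
    numobjs = math.ceil(objects / 1_000_000)  # round up to nearest million
    needed_mem_mb = numobjs * 300
    # Steps 1 and 2: enough real memory -> NO (the default)
    if is_64bit and free_memory_mb >= needed_mem_mb:
        return "NO"
    if not is_64bit and numobjs <= 5 and free_memory_mb >= needed_mem_mb:
        return "NO"
    # Step 3: enough fast temporary disk -> DISKCACHEMETHOD
    disk_mb_per_million = {"unix": 300, "windows": 600, "macos": 1200}
    if free_disk_mb >= numobjs * disk_mb_per_million[platform]:
        return "DISKCACHEMETHOD"
    # Step 4: otherwise
    return "YES"

# The example from the text: 64-bit client, 2 GB free, 4,200,000 objects.
print(memoryefficientbackup_setting(4_200_000, True, 2048, 0, "unix"))  # NO
```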
PROCESSORUTILIZATION
The PROCESSORUTILIZATION option (Novell client only) specifies, in hundredths of a second, the length of time that Tivoli Storage Manager controls the CPU. Because this option can affect other applications on your client node, use it only when speed is a high priority. The default is 1. The recommended values are from 1 to 20. If set to less than 1, this parameter could have a negative impact on performance. Increasing this value increases the priority of Tivoli Storage Manager on the CPU, lessening the priority of other processes. Setting PROCESSORUTILIZATION greater than 20 might prevent other scheduled processes or NetWare requestors from accessing the file server.
QUIET
The QUIET client option prevents messages from being displayed during Tivoli Storage Manager backups. The default is VERBOSE, which causes Tivoli Storage Manager to display information about each file it backs up. To prevent this, use the QUIET option. Messages and summary information are still written to the log files. There are two main benefits to using the QUIET option:
v For tape backup, the first transaction group of data is always resent. To avoid this, use the QUIET option to reduce retransmissions at the client.
v If you are using the client scheduler to schedule backups, using the QUIET option dramatically reduces disk I/O overhead to the schedule log and improves throughput.
RESOURCEUTILIZATION
The RESOURCEUTILIZATION client option regulates the number of concurrent sessions that the Tivoli Storage Manager client and server can use during processing. Multiple sessions can be initiated automatically through a Tivoli Storage Manager backup, restore, archive, or retrieve command. Although the multiple session function is transparent to the user, there are parameters that enable the user to customize it. The RESOURCEUTILIZATION option increases or decreases the ability of the client to create multiple sessions. For backup or archive, the value of RESOURCEUTILIZATION does not directly specify the number of sessions created by the client. However, the setting does specify the level of resources the server and client can use during backup or archive processing. The higher the value, the more sessions the client can start. The range for the parameter is from 1 to 10. If the option is not set, by default only two sessions can be started: one for querying the server and one for sending file data. A setting of 5 permits up to four sessions: two for queries and two for sending data. A setting of 10 permits up to eight sessions: four for queries and four for sending data. The relationship between RESOURCEUTILIZATION and the maximum number of sessions created is part of an internal algorithm and, as such, is subject to change. The following table lists the relationships between RESOURCEUTILIZATION values and the maximum number of sessions created. Producer sessions scan the client system for eligible files; the remaining sessions are consumer sessions and are used for data transfer. The threshold value affects how quickly new sessions are created.
RESOURCEUTILIZATION value   Maximum number of sessions   Unique number of producer sessions   Threshold (seconds)
1                           1                            0                                    45
2                           2                            1                                    45
3                           3                            1                                    45
4                           3                            1                                    30
5                           4                            2                                    30
6                           4                            2                                    20
7                           5                            2                                    20
8                           6                            2                                    20
9                           7                            3                                    20
10                          8                            4                                    10
0 (default)                 2                            1                                    30
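For quick reference, the same relationships can be expressed as a lookup. The values are transcribed from the table in this section; the underlying algorithm is internal to Tivoli Storage Manager and subject to change.

```python
# RESOURCEUTILIZATION value -> (max sessions, producer sessions,
# threshold in seconds), transcribed from the table in this section.
RU_TABLE = {
    1: (1, 0, 45), 2: (2, 1, 45), 3: (3, 1, 45), 4: (3, 1, 30),
    5: (4, 2, 30), 6: (4, 2, 20), 7: (5, 2, 20), 8: (6, 2, 20),
    9: (7, 3, 20), 10: (8, 4, 10), 0: (2, 1, 30),  # 0 is the default
}

max_sessions, producers, threshold = RU_TABLE[5]
consumers = max_sessions - producers  # sessions left for data transfer
print(max_sessions, producers, consumers)  # 4 2 2
```

This matches the text above: a setting of 5 permits up to four sessions, two for queries (producers) and two for sending data.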
Backup throughput improvements that can be achieved by increasing the RESOURCEUTILIZATION level vary from client node to client node. Factors that affect the throughput of multiple sessions include the configuration of the client storage subsystem (the layout of file systems on physical disks), the client's ability to drive multiple sessions (sufficient CPU and memory), the server's ability to handle multiple client sessions (CPU, memory, number of storage pool volumes), and sufficient bandwidth in the network to handle the increased traffic. The MAXSESSIONS parameter controls the maximum number of simultaneous client sessions with the Tivoli Storage Manager server. The total number of parallel sessions for a client is counted against the maximum number of sessions allowed with the server. You need to decide whether to increase the value of the MAXSESSIONS parameter in the server option file. When using the RESOURCEUTILIZATION option to enable multiple client/server sessions for backup direct to tape, the client node maximum mount points allowed parameter, MAXNUMMP, must also be updated at the server (using the UPDATE NODE command). If the client file system is spread across multiple disks (RAID 0 or RAID 5), or multiple large file systems, the recommended RESOURCEUTILIZATION setting is a value of 5 or 10. This enables multiple sessions with the server during backup or archive and can result in substantial throughput improvements in some cases. It is not likely to improve incremental backup of a single large file system with a small percentage of changed data. RESOURCEUTILIZATION can be set to a value other than the default if a client backup involves many files and they span or reside on multiple physical disks. A setting of 5 or greater is recommended.
However, for optimal utilization of the Tivoli Storage Manager environment, you need to evaluate the load on the server, the network bandwidth, and the client CPU and I/O configuration, and take that into consideration before changing the option. When a restore is requested, the default is to use a maximum of two sessions, based on how many tapes the requested data is stored on, how many tape drives are available, and the maximum number of mount points allowed for the node. The default value for the RESOURCEUTILIZATION option is 1, and the maximum value is 10. For example, if the data to be restored is on five different tape volumes, and the maximum number of mount points for the node requesting the restore is five, and RESOURCEUTILIZATION is set to 3, then three sessions are
used for the restore. If the RESOURCEUTILIZATION setting is increased to 5, then five sessions are used for the restore. There is a one-to-one relationship between the number of restore sessions allowed and the RESOURCEUTILIZATION setting. Here are the recommended settings:
For workstations
RESOURCEUTILIZATION 1
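The restore-session behavior described above can be approximated as a minimum over three limits. This is a simplification of the actual client logic, offered only as a mental model.

```python
# Approximate number of no-query-restore sessions: bounded by the number
# of sequential volumes holding the data, the node's MAXNUMMP value,
# and the RESOURCEUTILIZATION setting (simplified model).

def restore_sessions(volumes: int, maxnummp: int,
                     resourceutilization: int) -> int:
    return min(volumes, maxnummp, resourceutilization)

# Example from the text: data on five tape volumes, MAXNUMMP=5.
print(restore_sessions(5, 5, 3))  # 3
print(restore_sessions(5, 5, 5))  # 5
```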
TAPEPROMPT
The TAPEPROMPT client option specifies whether Tivoli Storage Manager waits for a tape to be mounted for a backup, archive, restore, or retrieve operation, or prompts you for your choice. The recommended setting is:
TAPEPROMPT NO
TCPBUFFSIZE
The TCPBUFFSIZE option specifies the size of the internal TCP communication buffer that is used to transfer data between the client node and the server. A large buffer can improve communication performance but requires more memory. The default is 32 KB, and the maximum is 512 KB. The recommended setting is:
TCPBUFFSIZE 32
TCPNODELAY
Use the TCPNODELAY option to disable the TCP/IP Nagle algorithm, which allows data packets of less than the Maximum Transmission Unit (MTU) size to be sent out immediately. The default is YES. This generally results in better performance for Tivoli Storage Manager client/server communications. The recommended setting is:
TCPNODELAY YES
TCPWINDOWSIZE
The TCPWINDOWSIZE client option specifies, in kilobytes, the amount of receive data that can be buffered at one time on a TCP/IP connection. The sending host cannot send more data until it receives an acknowledgment and a TCP receive window update. Each TCP packet contains the advertised TCP receive window on the connection. A larger window lets the sender continue sending data and can improve communication performance, especially on fast networks with high latency. The TCPWINDOWSIZE option is valid for all Tivoli Storage Manager clients and servers. The TCPWINDOWSIZE option overrides the operating system's default TCP/IP session send and receive window sizes. For AIX, the defaults are set with the no command options tcp_sendspace and tcp_recvspace. For Solaris, the defaults are set with the tcp_xmit_hiwat and tcp_recv_hiwat tunable parameters. Specifying TCPWINDOWSIZE 0 causes Tivoli Storage Manager to use the operating system default. This is not recommended because the optimal setting for Tivoli Storage Manager might not be the same as the optimal setting for other applications. The default is 63 KB, and the maximum is 2048 KB. The recommended setting is:
TCPWINDOWSIZE 63
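A useful rule of thumb when choosing a window size is the bandwidth-delay product: the window must cover the data in flight on the link. The sketch below computes it; the link speed and round-trip time are hypothetical figures, not recommendations.

```python
# Bandwidth-delay product: the TCP window needed to keep a link full.

def window_needed_kb(link_mbits_per_s: float, rtt_ms: float) -> float:
    bytes_in_flight = (link_mbits_per_s * 1_000_000 / 8) * (rtt_ms / 1000)
    return bytes_in_flight / 1024

# 1 Gbit/s link with a 5 ms round-trip time:
kb = window_needed_kb(1000, 5)
print(round(kb, 1))  # 610.4 KB -- far more than the 63 KB default
```

On such a link, the 63 KB default would cap throughput well below line rate, while the computed value is still under the 2048 KB maximum.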
TXNBYTELIMIT
The TXNBYTELIMIT client option specifies the maximum transaction size, in kilobytes, for data transferred between the client and server. The range of values is 300 KB through 2097152 KB (2 GB); the default is 25600. A transaction is the unit of work exchanged between the client and server. Because the client program can transfer more than one file or directory between the client and server before it commits the data to server storage, a transaction can contain more than one file or directory. This is called a transaction group. This option permits you to control the amount of data sent between the client and server before the server commits the data and changes to the server database, thus affecting the speed with which the client performs work. The amount of data sent applies when files are batched together during backup or when receiving files from the server during a restore procedure. The server administrator can limit the number of files or directories contained within a group transaction using the TXNGROUPMAX option, so the actual size of a transaction can be less than your limit. Once the TXNGROUPMAX number is reached, the client sends the files to the server even if the transaction byte limit is not reached. There are several items to consider when setting this parameter:
v Increasing the amount of data per transaction increases recovery log requirements on the server. Check log and log pool space to ensure that there is enough space. Also note that a larger log might result in longer server start-up times.
v Increasing the amount of data per transaction might result in more data being retransmitted if a retry occurs. This might negatively affect performance.
v The benefits of changing this parameter depend on configuration and workload characteristics. In particular, this parameter benefits tape storage pool backup more than disk storage pool backup, especially if many small files are in the workload. When setting the transaction size, consider a smaller size if you encounter many resends caused by files changing during backup under the static, shared static, or shared dynamic serialization modes. This applies to static as well as to shared modes because, even when the client detects that a file has changed during backup and decides not to send it, the other files in that transaction must still be re-sent. To enhance performance, set TXNBYTELIMIT to the maximum of 2097152 and, on the server, raise TXNGROUPMAX to 256. Additionally, for small-file workloads, first stage the backups to a disk storage pool and then migrate them to tape. The recommended settings are:
TXNBYTELIMIT 25600
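The interaction between TXNBYTELIMIT and the server's TXNGROUPMAX described above can be illustrated with a small batching sketch. The function is hypothetical and simplified from the actual client logic: a transaction closes when adding one more file would exceed the byte limit, or when it already holds TXNGROUPMAX files.

```python
def count_transactions(file_sizes_kb, txnbytelimit_kb=25600, txngroupmax=256):
    """Group files into transactions: a transaction closes when adding
    one more file would exceed the byte limit, or when it already
    holds txngroupmax files."""
    transactions, batch_bytes, batch_files = 1, 0, 0
    for size in file_sizes_kb:
        if batch_files >= txngroupmax or (
            batch_files and batch_bytes + size > txnbytelimit_kb
        ):
            transactions += 1
            batch_bytes, batch_files = 0, 0
        batch_bytes += size
        batch_files += 1
    return transactions

# 1000 files of 10 KB each: the 256-file group limit closes batches
# long before the 25600 KB byte limit does.
print(count_transactions([10] * 1000))  # prints 4
```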
If all the files are on random-access disk, only one session is used; there is no multi-session restore for a restore from a random-access disk-only storage pool. However, if you are performing a restore in which the files reside on four tapes or four sequential disk volumes, and some reside on random-access disk, you can use up to five sessions during the restore. You can use the MAXNUMMP parameter to set the maximum number of mount points a node can use on the server. If the RESOURCEUTILIZATION option value exceeds the value of MAXNUMMP on the server for a node, you are limited to the number of sessions specified by MAXNUMMP. For example, if the data you want to restore is on five different tape volumes, the maximum number of mount points for your node is 5, and RESOURCEUTILIZATION is set to 3, then three sessions are used for the restore. If you increase the RESOURCEUTILIZATION setting to 5, then five sessions are used. There is a one-to-one relationship between the RESOURCEUTILIZATION setting and the number of restore sessions allowed. Multiple restore sessions are allowed only for no-query-restore operations. The server sends the MAXNUMMP value to the client during sign-on. During a no-query restore, if the client receives notification from the server that another volume has been found and another session can be started to restore the data, the client checks the MAXNUMMP value. If another session would exceed that value, the client does not start the session. Some backup considerations:
v Only one session per file system compares attributes for incremental backup. Incremental backup throughput does not improve for a single file system with a small amount of changed data.
v Data transfer sessions do not have file system affinity; each session could send files from multiple file systems. This is good for workload balancing, but not so good if you are backing up directly to a tape storage pool collocated by filespace.
Do not use multiple sessions to back up directly to a storage pool collocated by filespace. Use multiple commands, one per filespace.
v Multiple sessions might not start if there are not enough entries on the transaction queue.
v For backup operations directly to tape, you can prevent multiple sessions, so that data is not spread across multiple volumes, by setting RESOURCEUTILIZATION to 2.
Some restore considerations:
v Only one session is used when restoring from random-access disk storage pools.
v Only one file system can be restored at a time with the command line; multiple sessions may still be used for a single file system.
v Even small clients can gain throughput for restores requiring many tape mounts or locates.
v Tape cartridge contention might occur, especially when restoring from a collocated node.
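The session limits described above for a no-query restore reduce, to a first approximation, to a minimum over three values. The following is a hypothetical sketch only; the real client applies additional checks:

```python
def restore_sessions(volumes: int, resourceutilization: int, maxnummp: int) -> int:
    """Approximate sessions used for a no-query restore from sequential
    media: capped by the client RESOURCEUTILIZATION setting, the node's
    MAXNUMMP value, and the number of volumes holding the data."""
    return min(volumes, resourceutilization, maxnummp)

# The example from the text: data on 5 tapes, MAXNUMMP=5,
# RESOURCEUTILIZATION=3 gives 3 sessions; raising it to 5 gives 5.
print(restore_sessions(5, 3, 5))  # prints 3
print(restore_sessions(5, 5, 5))  # prints 5
```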
Macintosh client
Limit the use of Extended Attributes. When Extended Attributes are used, limit their length. Antivirus software can negatively affect backup and restore performance.
Windows client
Performance recommendations for Windows clients include the shared memory communication method and the use of antivirus products.
v For optimal backup and restore performance when using a local client on a Windows system, use the shared memory communication method. Specify COMMMETHOD SHAREDMEM in both the server options file and the client options file.
v Antivirus products and backup and restore products can use significant amounts of system resources and therefore impact application and file system performance. They may also interact with each other to seriously degrade the performance of either product. For optimal performance of backup and restore:
v Schedule antivirus file system scans and incremental backups for non-overlapping times.
v If the antivirus program allows, change the antivirus program properties so that files are not scanned when opened by the client processes. Some antivirus products can automatically recognize file reads by backup products and do not need to be configured. Check the IBM support site for additional details.
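The shared memory recommendation above requires the option in both option files; a minimal sketch (the file names follow the usual defaults, but paths vary by installation):

```
* dsmserv.opt (server options file)
COMMMETHOD SHAREDMEM

* dsm.opt (client options file)
COMMMETHOD SHAREDMEM
```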
Processing capacity
A dual-processor system with a speed of 3 GHz or faster can meet the needs of an Administration Center environment with high performance requirements. The Administration Center server can use additional processors, but they might not have a noticeable effect on application performance. Estimate the Administration Center processor utilization by using the following equation:
CpuUtilization (%) = 0.15 + TasksCompleted (per Hour) * 0.006
The tasks completed per hour rate is the highest total number of tasks per hour expected to be run using the Administration Center server. It includes tasks run by all administrators logged in at the time. A task is best thought of as the minimum amount of interaction within the administration interface that produces some usable information or completes a desired operation. The number of tasks run per
hour by a single administrator could be between 20 and 100, but should not exceed 2850 tasks per hour. Adjust the CPU utilization further by multiplying by the ratio of 3.4 GHz to the planned processor speed, and by the ratio of two processors to the planned number of processors.
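The capacity equation and the adjustment ratios above can be combined into a single estimator. This is a sketch of the arithmetic only; the function name is hypothetical, and the 3.4 GHz dual-processor baseline is taken from the text:

```python
def admin_center_cpu_pct(tasks_per_hour: float,
                         cpu_ghz: float = 3.4,
                         processors: int = 2) -> float:
    """Estimated Administration Center CPU utilization (%), adjusted
    for processor speed and count relative to the 3.4 GHz
    dual-processor baseline."""
    base = 0.15 + tasks_per_hour * 0.006
    return base * (3.4 / cpu_ghz) * (2 / processors)

# Five administrators averaging 60 tasks/hour on the baseline system:
print(round(admin_center_cpu_pct(300), 2))  # prints 1.95

# The same workload on a single 1.7 GHz processor:
print(round(admin_center_cpu_pct(300, cpu_ghz=1.7, processors=1), 2))  # prints 7.8
```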
I/O throughput
Administration Center disk and network I/O requirements are not particularly demanding, and there is no need for sustained high I/O throughput. However, application response time suffers if network delays or disk I/O delays occur. A low latency network provides the best administrator response time. Networks that are poorly tuned, networks that are already saturated by other applications, or networks that have significantly higher latency (WANs) could significantly affect Administration Center performance.
Processing memory
The most important resource requirement for the Administration Center is memory. The maximum Java heap size is the value specified for the Administration Center server; the default is 512 MB. The largest value that can be configured for the maximum Java heap size is 1536 MB for all platforms except AIX, which allows up to 2048 MB. Thus, the Administration Center process working-set memory requirement is determined by the amount of Java heap memory specified. Add the additional memory required by the operating system and by any other applications on the Administration Center server, and configure the server with at least this much real memory. It is important that the required real memory be available: without adequate real memory, significant response-time degradation can occur as the result of system memory paging.
ActiveAdmins is the maximum number of administrators logged in at a given time. Additional administrators can be defined in the Integrated Solutions Console, but as long as they are not logged in, no additional memory is required. The number of Tivoli Storage Manager server connections defined by an administrator in the Administration Center is not an important variable in determining the Java heap size requirements, except in the sense that more servers imply that more actual work may be required. A larger maximum Java heap size provides additional memory in the case of unexpected administration activity or workload growth. However, more real memory would be required. Using a maximum Java heap size that is too small for the amount of work being run in the Administration Center causes the Java Virtual Machine (JVM) to
perform garbage collection more frequently. This, in turn, causes higher processor utilization and slower application response time. In extreme conditions, memory allocation failures can result in the application being unable to perform the requested action until memory is freed by closing work pages or logging out sessions. Tips for reducing administrator session memory requirements:
v Close work pages as soon as you are finished with them.
v Log out if you are not using any administrative functions for more than 30 minutes.
v Do not configure the session timeout period for more than 30 minutes.
Configure both the Administration Center session timeout period and the maximum memory size (Java heap size) by using the Administration Center Support Utility. This utility is located in the AC/products/tsm/bin subdirectory below the Tivoli Storage Manager installation directory. Here is an example of its usage:
/opt/tivoli/tsm/AC/products/tsm/bin # ./supportUtil.sh
Administration Center Support Utility - Main Menu
==================================================
 1. Manage Administration Center tracing
 2. Manage the maximum memory size the Administration Center can use
 3. Manage the Administration Center session timeout setting
 4. Collect trace files, logs and system information to send to support
 5. Generate a heap dump of the Java virtual machine
 6. Generate a Java core dump of the Java virtual machine
 7. View the log file for this utility
 9. Exit
Enter Selection: 3
Administration Center Support Utility - Manage the Session
===========================================================
 1. Update the Administration Center session timeout setting
 2. View the Administration Center session timeout setting
99. Return to main menu
Enter Selection: 1
The session timeout setting determines how long a session can be idle
before it times out. After a timeout occurs the user must log in again.
The default timeout setting is 30 minutes. The minimum timeout setting
is 10 minutes. To cancel this operation enter an empty value.
Enter the new session timeout (minutes): 30
Updating the session timeout to 30 minutes........
Session timeout successfully updated. Restart ISC for changes to take effect.
Administration Center Support Utility - Main Menu
==================================================
 1. Manage Administration Center tracing
 2. Manage the maximum memory size the Administration Center can use
 3. Manage the Administration Center session timeout setting
 4. Collect trace files, logs and system information to send to support
 5. Generate a heap dump of the Java virtual machine
 6. Generate a Java core dump of the Java virtual machine
 7. View the log file for this utility
 9. Exit
Enter Selection: 2
Administration Center Support Utility - Manage the JVM
=======================================================
 1. Update the maximum memory size the Administration Center can use
 2. View the maximum memory size the Administration Center can use
99. Return to main menu
Enter Selection: 1
The maximum memory size determines the largest amount of memory that can
be used by the Administration Center. A minimum heap size of 512 MB is
recommended. When used by 10 or more users, the recommendation is at
least 1024 MB. To cancel this operation, enter an empty value.
Enter the new JVM max memory size (MB): 1536
Updating the maximum memory size to 1536 MB......
Maximum memory size successfully updated.
Remember: Do not configure the maximum memory size (Java heap size) to be greater than the available real system memory, or significant performance degradation may occur. The Tivoli Storage Manager Administration Center Capacity Planner tool can simplify using these equations and can provide recommendations for Administration Center hardware sizing. See your IBM representative to obtain this tool.
Installation requirements
The Administration Center installation must meet the minimum hardware requirements. You can find these requirements at: http://www.ibm.com/support/docview.wss?uid=swg21328445. If the Administration Center server is installed in an environment in which the workload is light (that is, a single administrator), then the memory requirements calculated using the information in Administration Center capacity planning on page 39 would indicate a smaller memory requirement than the minimums provided on the Web. Administration Center server performance might be acceptable with this smaller memory amount if the workload is light. Additional memory and processing power can provide significant performance benefits in the case of unexpected demand or workload growth. If you plan to upgrade an existing Tivoli Storage Manager server, and the existing hardware cannot meet the additional Administration Center requirements, then consider upgrading the hardware or using an additional system for the Administration Center function.
Install the Administration Center close (in network topology) to the administrators, rather than close to the Tivoli Storage Manager servers. For example, if you are in Chicago and administer Tivoli Storage Manager servers in Los Angeles, Paris, and Tokyo, install the Administration Center in Chicago.
system is paging memory, then reduce the amount of memory in use by the active processes, or add more real memory.
v Check the amount of real memory currently in use by the Administration Center server process. Use the ps command on UNIX, check the Windows Task Manager (Processes tab, Mem Usage column), or use the Windows Performance Monitor (Process object, Java instance, Working Set counter). The following commands can be used on AIX to find the Administration Center server process ID and then the resident set size (RSS) for that process:
ps -ef | grep [I]SC_Portal | awk '{ print $2 }'
ps avxw PID
In addition, check the IBM support site for updates to the Integrated Solutions Console and Tivoli Storage Manager Administration Center, and for information that may describe your problem.
Protocol functions
The protocol functions can be categorized as the following:
v Reliable delivery
v Packet assembly and disassembly
v Connection control
v Flow control
v Error control
Reliable Delivery
Reliable delivery services guarantee to deliver a stream of data sent from one machine to another without duplication or loss of data. The reliable protocols use a technique called acknowledgment with retransmission, which requires the recipient to communicate with the source, sending back an acknowledgment after it receives data.
Sliding window
The sliding window allows TCP/IP to use communication channels efficiently, in terms of both flow control and error control. The sliding window is controlled in Tivoli Storage Manager through the TCPWINDOWSIZE option. In the simplest scheme for achieving reliable communication, the sender transmits a packet and waits until it receives an acknowledgment before transmitting another. The sliding window protocol enables the sender to transmit multiple packets before waiting for an acknowledgment. The advantages are:
v Simultaneous communication in both directions.
v Better utilization of network bandwidth, especially if there are large transmission delays.
v Traffic flow with reverse traffic data, known as piggybacking. This reverse traffic might or might not have anything to do with the acknowledgment that is riding on it.
v Variable window size over time. Each acknowledgment specifies how many octets have been received and contains a window advertisement that specifies how many additional octets of data the receiver is prepared to accept, that is, the receiver's current buffer size. In response to a decreasing window size, the sender decreases the size of its window. The advantages of variable window sizes are flow control and reliable transfer.
Tip: A client continually shrinking its window size is an indication that the client cannot handle the load. In that case, increasing the window size does not improve performance.
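The benefit of the sliding window over a simple wait-for-acknowledgment exchange can be quantified by the ceiling it removes: with at most one window of data in flight per round trip, throughput is bounded by window size divided by round-trip time. A hypothetical sketch:

```python
def max_throughput_kb_s(window_kb: float, rtt_ms: float) -> float:
    """Upper bound on TCP throughput: at most one full window of data
    can be acknowledged per round trip."""
    return window_kb / (rtt_ms / 1000.0)

# With the 63 KB default window and a 5 ms round trip, TCP cannot
# exceed about 12600 KB/s regardless of link bandwidth.
print(max_throughput_kb_s(63, 5))
```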
Networks
Tuning your networks can provide significant performance improvements. There is a variety of actions you can take to tune your networks.
v Use dedicated networks for backup (LAN or SAN).
v Keep device drivers updated.
v Using Ethernet adapter auto-detect to set the speed and duplex generally works well with newer adapters and switches. If your network hardware is more than three years old and backup and restore network performance is not as expected, set the speed and duplex to explicit values (for example, 100 Mbps full-duplex or 100 Mbps half-duplex). Make sure that all connections to the same switch are set to the same values.
v Gigabit Ethernet jumbo frames (9000 bytes) can give improved throughput and lower host CPU usage, especially for larger files. Jumbo frames are available only if they are supported on the client, the server, and the switch. Not all Gigabit Ethernet hardware supports jumbo frames.
v In networks with mixed frame-size capabilities (for example, standard Ethernet frames of 1500 bytes and jumbo Ethernet frames of 9000 bytes), it can be advantageous to enable path maximum transmission unit (PMTU) discovery on the systems. Doing so means that each system segments the data sent into frames appropriate to the session partners. Those that are fully capable of jumbo frames use jumbo frames. Those that have lower capabilities automatically use the largest frames that do not cause frame fragmentation and re-assembly somewhere in the network path. Avoiding fragmentation is important in optimizing the network.
If the number is greater than 0, overflows have occurred. At the device driver layer, the mbuf chain containing the data is put on the transmit queue, and the adapter is signaled to start the transmission operation. On the receive side, packets are received by the adapter and then are queued on the driver-managed receive queue. The adapter transmit and receive queue sizes can be configured using the System Management Interface Tool (SMIT). At the device driver layer, both the transmit and receive queues are configurable, and it is possible to overrun them. To determine whether this has happened, use the netstat -v command, which shows Max Transmits Queued and Max Receives Queued.
Note: Jumbo frames can be enabled on Gigabit Ethernet and 10 Gigabit Ethernet adapters. Doing so raises the MTU to 9000 bytes. Because there is less overhead per packet, jumbo frames typically provide better performance, lower CPU consumption, or both. Consider jumbo frames especially if you have a network dedicated to backup tasks. Jumbo frames should be considered only if all equipment between most of your Tivoli Storage Manager clients and the server supports jumbo frames, including routers and switches. You can override the default MSS in the following three ways:
1. Specify a static route to a specific remote network and use the -mtu option of the route command to specify the MTU to that network. Disadvantages of this approach are:
v It does not work with dynamic routing.
v It is impractical when the number of remote networks increases.
v Routes must be set at both ends to negotiate a value larger than a default MSS.
2. Use the tcp_mssdflt option of the no command to change the default value of MSS. This is a system-wide change. In a multi-network environment with multiple MTUs, the value specified to override the MSS default should be the minimum MTU value (of all specified MTUs) less 40. In an environment with a large default MTU, this approach has the advantage that MSS does not need to be set on a per-network basis. The disadvantages are:
v Increasing the default can lead to IP router fragmentation if the destination is on a remote network and the MTUs of the intervening networks are not known.
v The tcp_mssdflt parameter must be set to the same value on the destination host.
3. Subnet, and set the subnetsarelocal option of the no command. Several physical networks can be made to share the same network number by subnetting. The subnetsarelocal option specifies, on a system-wide basis, whether subnets are to be considered local or remote networks. With subnetsarelocal=1 (the default), Host A on subnet 1 considers Host B on subnet 2 to be on the same physical network. The consequence is that when Host A and Host B establish a connection, they negotiate the MSS assuming they are on the same network. This approach has the following advantages:
v It does not require any static bindings; MSS is automatically negotiated.
v It does not disable or override the TCP MSS negotiation, so that small differences in the MTU between adjacent subnets can be handled appropriately.
The disadvantages are:
v Potential IP router fragmentation when two high-MTU networks are linked through a lower-MTU network.
v Source and destination networks must both consider subnets to be local.
In an SP2 environment with a high-speed switch, use an MTU of 64 KB.
AIX - no (network options)
You can configure the network option parameters by using the no command.
v Use no -a to view current settings.
v When using TCP window sizes larger than 64 KB, set rfc1323 to 1.
v If you see non-zero "No mbuf errors" in entstat, fddistat, or atmstat, raise thewall.
v Set thewall to at least 131072 and sb_max to at least 1310720. Newer versions of AIX have larger defaults.
v Because the settings for the no command do not survive a reboot, use the -p option to make them persistent.
v Recommended change: no -o rfc1323=1
Here are the recommended values for the parameters described in this section:
lowclust = 200
lowmbuf = 400
thewall = 131072
mb_cl_hiwat = 1200
sb_max = 1310720
rfc1323 = 1
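The tcp_mssdflt guidance above (minimum MTU less 40) reflects the 20-byte IP header plus the 20-byte TCP header. As a sketch of the arithmetic (the function name is hypothetical):

```python
def mss_for(mtu_bytes: int) -> int:
    """MSS that avoids fragmentation for a given MTU: subtract 20 bytes
    of IP header and 20 bytes of TCP header."""
    return mtu_bytes - 40

# Standard Ethernet and jumbo frames:
print(mss_for(1500))  # prints 1460
print(mss_for(9000))  # prints 8960
```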
TcpMSSinternetlimit
When data travels to a remote network or a different subnet, the TCPIP.NLM sets the MTU size to the default maximum segment size (MSS) value of 536 bytes. The TcpMSSinternetlimit parameter can be used to override the default MSS value and to set a larger MTU. For NetWare v4.x with TCP/IP v3.0, setting TcpMSSinternetlimit off in SYS:\ETC\TCPIP.CFG causes the TCPIP.NLM to use the MTU value specified in the STARTUP.NCF file (maximum physical receive packet size). Important: The TcpMSSinternetlimit parameter is case sensitive. If this parameter is not specified correctly, it is dropped automatically from the tcpip.cfg file by NetWare.
For NetWare v3.x, Novell patch TCP31A.EXE (for TCP/IP v2.75) can provide the same option.
TCPIP.DATA
TCPIP.DATA contains hostname, domainorigin, nsinteraddr, and so on. The content of TCPIP.DATA is the same as for previous releases of TCP/IP for z/OS. For a sample TCPIP.DATA, see the IP Configuration manual or see the sample provided with the product. One important recommendation is to keep the statement TRACE RESOLVER commented out to avoid complete tracing of all name queries. This trace should be used for debugging purposes only.
PROFILE.TCPIP
During initialization of the TCPIP stack, configuration parameters for the stack are read from the PROFILE.TCPIP configuration data set. Reference the z/OS IP Configuration manual for additional information on the parameters that are used in this file. The PROFILE.TCPIP contains TCP buffer sizes, LAN controller definitions, ports, home IP addresses, gateway statements, VTAM LUs for Telnet use, and so on. The TCPWINDOWSIZE client option allows you to set the TCP/IP send and receive buffers independently from TCP/IP. The default size is 63 KB. Therefore, you only need to set the TCP/IP profile TCPMAXRCVBUFRSIZE parameter to a value equal to or larger than the value you want for the client TCPWINDOWSIZE option. You can set the TCPSENDBFRSIZE and TCPRCVBUFRSIZE parameters to values appropriate for the non-Tivoli Storage Manager network workloads on the system, because these parameters are overridden by the client TCPWINDOWSIZE
option. When send/recv buffer sizes are not specified in the PROFILE, a default size of 16 KB is used for send/recv buffers.
IPCONFIG PATHMTUDISCOVERY
TCPCONFIG TCPMAXRCVBUFRSIZE 524288 TCPSENDBFRSIZE 65535 TCPRCVBUFRSIZE 65535
Note: The FTP server and client application override the default settings and use 64 KB - 1 as the TCP window size and 180 KB for the send/recv buffers. Therefore, no change is required in the TCPCONFIG statement for the FTP server and client.
Accessibility features
The following list includes the major accessibility features in Tivoli Storage Manager:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
v User documentation provided in HTML and PDF format. Descriptive text is provided for all documentation images.
The Tivoli Storage Manager Information Center, and its related publications, are accessibility-enabled.
Keyboard navigation
Windows
The Tivoli Storage Manager for Windows Console follows Microsoft conventions for all keyboard navigation and access. Drag-and-drop support is managed using the Microsoft Windows Accessibility option known as MouseKeys. For more information about MouseKeys and other Windows accessibility options, refer to the Windows Online Help (keyword: MouseKeys).
AIX
Tivoli Storage Manager follows AIX operating-system conventions for keyboard navigation and access.
HP-UX
Tivoli Storage Manager follows HP-UX operating-system conventions for keyboard navigation and access.
Linux
Tivoli Storage Manager follows Linux operating-system conventions for keyboard navigation and access.
Mac OS X
Tivoli Storage Manager follows Macintosh operating-system conventions for keyboard navigation and access.
Solaris
Tivoli Storage Manager follows Sun Solaris operating-system conventions for keyboard navigation and access.
Vendor software
Tivoli Storage Manager includes certain vendor software that is not covered under the IBM license agreement. IBM makes no representation about the accessibility features of these products. Contact the vendor for the accessibility information about its products.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785, U.S.A. For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: IBM World Trade Asia Corporation Licensing, 2-31 Roppongi 3-chome, Minato-ku, Tokyo 106-0032, Japan. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation 2Z4A/101 11400 Burnet Road Austin, TX 78758 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this information and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. 
To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. If you are viewing this information in softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at Copyright and trademark information at http://www.ibm.com/legal/copytrade.shtml. Adobe is either a registered trademark or trademark of Adobe Systems Incorporated in the United States, other countries, or both. Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, or service names may be trademarks or service marks of others.
Glossary
A glossary is available with terms and definitions for the IBM Tivoli Storage Manager server and related products. The glossary is located in the Tivoli Storage Manager Version 6.1 information center: http://publib.boulder.ibm.com/infocenter/tsminfo/v6
Index

Numerics
64-bit ACSLS xii

A
accessibility features 57
ACSLS xvi
active log mirroring 24
adapters per fiber HBA 24
Administration Center
    estimating requirements 39
    I/O throughput 40
    installing 42
    Java heap memory size 40
    location 42
    memory usage 43
    number of administrators 39
    optimizing Windows Server 2003 memory 43
    performance tuning 39
    processing capacity 39
    processing memory 40
    setup 42
    tuning 44
        memory performance 44
        network performance 44
        processor performance 44
AIX
    ioo command 17
    performance recommendations 17
    server and client TCP/IP tuning 50
    Virtual Address space 17
    vmo command 17
antivirus software 35

B
backup
    LAN-free 12
    operations 11
    performance 11
    throughput 29
BACKUP DB server command 9
BEGINROUTES/ENDROUTES block 55
busses
    multiple PCI 24

C
cache size 4
cached disk storage pools 13
client commands
    DSMMIGRATE 37
client options 35
    command line only
        IFNEWER 35
        INCRBYDATE 35
    COMMMETHOD SHAREDMEM 20, 35
    COMMRESTARTDURATION 25
    COMMRESTARTINTERVAL 25
    COMPRESSALWAYS 25, 26
    COMPRESSION 26
    DISKBUFFSIZE 27
    ENABLELANFREE 27
    MEMORYEFFICIENTBACKUP 27
    PROCESSORUTILIZATION 28
    QUIET 29
    RESOURCEUTILIZATION 29, 33, 36
    TAPEPROMPT 31
    TCPBUFFSIZE 31
    TCPNODELAY 31
    TCPWINDOWSIZE 17, 20, 32, 54
    TXNBYTELIMIT 17, 23, 32
    VIRTUALMOUNTPOINT 27
    VIRTUALNODENAME 36
    Windows 35
client tuning options 25
collocation 22
COMMMETHOD SHAREDMEM client option 20, 35
COMMMETHOD SHAREDMEM server option 20, 35
COMMRESTARTDURATION client option 25
COMMRESTARTINTERVAL client option 25
COMPRESSALWAYS client option 25, 26
compression
    enabling on tape drives 21
COMPRESSION client option 26
configuration parameters
    self-tuning memory 10
customer support
    contact ix

D
Data Protection for Domino for z/OS 37
database
    performance 9
database manager 10
database performance 10
DBMEMPERCENT server option 3
deduplication in FILE storage pools 14
DEFINE COPYGROUP server command 11
DEFINE DEVCLASS server command 21
DEFINE STGPOOL server command 13, 14, 22
device drivers xii, 49
direct I/O
    AIX 19
    Sun Solaris 19
disaster recovery 12
disk
    performance considerations 24
    write cache 24
disk I/O
    CPU 10
DISKBUFFSIZE client option 27
DISKSTGPOOLMEMSIZE server option 4
dsm.opt file 25
dsm.sys file 25
DSMMIGRATE client command 37
DSMSERV INSERTDB preview xii

E
education
    see Tivoli technical training vii
ENABLELANFREE client option 27
environment file
    modifying reporting performance 15
Ethernet adapters 49
EXPINTERVAL server option 4, 11
EXPIRE INVENTORY server command 11
export 12

F
fixes, obtaining viii
FROMNODE option 36

G
Gb Ethernet jumbo frames 49

H
hardware
    server 8
Hierarchical Storage Manager migration 37

I
IBM Software Support
    submitting a problem x
IBM Support Assistant viii
import 12
INCLUDE/EXCLUDE lists 36
installing Administration Center 42
Internet, searching for problem resolution vii, viii
inventory expiration 11
ioo command 17

J
Journal File System 17
journal-based backup
    Windows 35

K
knowledge bases, searching vii

L
LAN-free backup 12
licensing xi
Linux servers
    performance recommendations 19
log performance 9
logical storage pools 13

M
Macintosh client
    antivirus software 35
    Extended Attributes 35
manage operations 4
maximum segment size (MSS) 51
maximum transmission unit (MTU) 51
    NetWare 53
Maximum Transmission Unit (MTU) 31
MAXNUMMP server option 13, 29, 33
MAXSESSIONS server option 5, 29, 33
MEMORYEFFICIENTBACKUP client option 27
migration
    Hierarchical Storage Manager 37
    processes 14
    thresholds 14
mount points, virtual 36
MOVE DATA command 5
MOVEBATCHSIZE server option 5, 23
MOVESIZETHRESH server option 5, 23
multi-client backups and restores 36
multiple client sessions 29
multiple session backup and restore 33

N
NetWare client cache tuning 53
networks
    dedicated 49
    for backup 49
    protocol tuning 47
    settings
        AIX 50
        Sun Solaris 54
        z/OS 55
    traffic 50
NTFS file compression 20
NTFS file system 20

P
problem determination
    describing problem for IBM Software Support x
    determining business impact for IBM Software Support ix
    submitting a problem to IBM Software x
PROCESSORUTILIZATION client option 28
PROFILE.TCPIP configuration data set 55
publications
    download v
    order v
    search v
    Tivoli Storage Manager v

Q
QUIET client option 29

R
RAID arrays 9, 24
raw logical volumes 17
    advantages and disadvantages 19
raw partitions 17, 19
recommended values by platform 35
REGISTER NODE server command 33
RESOURCEUTILIZATION client option 29, 33, 36
RESTOREINTERVAL server option 6

S
SAN discovery
    zSeries xiii
SANDISCOVERYTIMEOUT server option xiii
scheduling
    processes 12
    sessions 12
server
    hardware 8
server activity log
    searching 12
server commands 13
    BACKUP DB 9
    database manager 3
    DEFINE COPYGROUP 11
    DEFINE DEVCLASS 21
    DEFINE STGPOOL 13, 14, 22
    EXPIRE INVENTORY 11
    REGISTER NODE 33
    SET MAXCMDRETRIES 50
    SET QUERYSCHEDPERIOD 50
    SET RETRYPERIOD 50
    storage pools 4
    UPDATE COPYGROUP 11
    UPDATE NODE 29, 33
    UPDATE STGPOOL 14, 22
server options 3
    best performance settings by platform 16
    COMMMETHOD SHAREDMEM 20, 35
    DBMEMPERCENT 3
    DISKSTGPOOLMEMSIZE 4
    EXPINTERVAL 4, 11
    MAXNUMMP 13, 29, 33
    MAXSESSIONS 5, 29, 33
    MOVEBATCHSIZE 5, 23
    MOVESIZETHRESH 5, 23
    RESTOREINTERVAL 6
    SANDISCOVERYTIMEOUT xiii
    TCPNODELAY 6, 15
    TCPWINDOWSIZE 6, 17, 20
    TXNBYTELIMIT 7, 23
    TXNGROUPMAX 7, 17, 23, 32
server tuning overview 1
SET MAXCMDRETRIES server command 50
SET QUERYSCHEDPERIOD server command 50
SET RETRYPERIOD server command 50
sliding window 32
Software Support
    contact ix
    describing problem for IBM Software Support x
    determining business impact for IBM Software Support ix
Storage Agent 15
storage pool
    backup and restore 12
    migrating files 14
    migration 14
storage pools
    cached disk 13
storage volumes 13
Sun Solaris
    server and client TCP/IP tuning 54
    server performance recommendations 19
    TCPWINDOWSIZE client option 54
support information vii
system memory 3

T
tape drives
    cleaning 21
    compression 21
    on a SCSI bus 24
    required number 21
    streaming rate 23
    transfer rate 23
TAPEPROMPT client option 31
TCP communication buffer 31
TCP/IP
    AIX server and client tuning 50
    concepts 47
    connection availability 47
    data transfer block size 47
    error control 47
    flow control 47
    functional groups
        application layer 47
        internetwork layer 47
        network layer 47
        transport layer 47
    HP-UX server and client tuning 19
    maximum segment size (MSS) 51
    maximum transmission unit (MTU) 51
    NetWare
        client cache tuning 53
        maximum transmission unit (MTU) 53
    packet assembly and disassembly 47
    sliding window 32, 49
    Sun Solaris server and client tuning 54
    tuning 47
    window values 47
    z/OS tuning 55
TCP/IP and z/OS UNIX system services performance tuning 56
TCPBUFFSIZE client option 31
TCPIP.DATA 55
TCPNODELAY client option 31
TCPNODELAY server option 6, 15
TCPWINDOWSIZE client option 17, 20, 32, 54
TCPWINDOWSIZE server option 6, 17, 20
thresholds
    migration 14
throughput
    estimating in untested environments 21
Tivoli technical training vii
training, Tivoli technical vii
transaction size 32
tsmdlst utility xii
tuning 3
TXNBYTELIMIT client option 17, 23, 32
TXNBYTELIMIT server option 7, 23
TXNGROUPMAX server option 7, 17, 23, 32

U
UFS file system volumes 19
UNIX file systems
    advantages and disadvantages 19
UPDATE COPYGROUP server command 11
UPDATE NODE server command 29, 33
UPDATE STGPOOL server command 14, 22

V
Virtual Memory Manager 17, 50
VIRTUALMOUNTPOINT client option 27
VIRTUALNODENAME client option 36
vmo command 17
VxFS file system 19

W
Windows
    journal-based backup 35
    performance recommendations 20
    workload requirements 10

Z
z/OS client TCP/IP tuning 55
Printed in USA
GC23-9788-01