General Parallel File System
Version 4 Release 1.0.5

Documentation Update: GPFS Version 4 Release 1.0.5 for Linux on System z
(Applied to GPFS Version 4 Release 1.0.4 Information Units)
Contents
Figures  v

Tables  vii

December 2014  1

Planning for GPFS  3
  Hardware requirements  3
  Software requirements  4
  GPFS product structure  4
  Recoverability considerations  5
    Node failure  5
    Network Shared Disk server and disk failure  8
    Reduced recovery time using Persistent Reserve  10
  GPFS cluster creation considerations  11
    GPFS node adapter interface names  11
    Nodes in your GPFS cluster  12
    GPFS cluster configuration servers  13
    Remote shell command  13
    Remote file copy command  14
    Cluster name  14
    User ID domain for the cluster  14
    Starting GPFS automatically  14
    Cluster configuration file  15
  GPFS license designation  15
  Disk considerations  15
    Network Shared Disk (NSD) creation considerations  17
    NSD server considerations  18
    File system descriptor quorum  19
    Preparing direct access storage devices (DASD) for NSDs  19
  File system creation considerations  20
    Device name of the file system  24
    NFS V4 deny-write open lock  24
    Disks for your file system  24
    Deciding how the file system is mounted  24
    Block size  25
    atime values  26
    mtime values  26
    Block allocation map  26
    File system authorization  27
    Strict replication  27
    Internal log file  27
    File system replication parameters  27
    Number of nodes mounting the file system  28
    Windows drive letter  28
    Mountpoint directory  29
    Assign mount command options  29
    Enabling quotas  29
    Enable DMAPI  30
    Verifying disk usage  30
    Changing the file system format to the latest level  31
    Enabling file system features  31
    Specifying whether the df command will report numbers based on quotas for the fileset  31
    Specifying the maximum number of files that can be created  31
    Controlling the order in which file systems are mounted  32
    A sample file system creation  32

Installing GPFS on Linux nodes  35
  Preparing the environment on Linux nodes  35
  Installing the GPFS software on Linux nodes  36
    Accepting the electronic license agreement on Linux nodes  36
    Extracting the GPFS software on Linux nodes  36
    Extracting GPFS patches (update SLES or Red Hat Enterprise Linux RPMs or Debian Linux packages)  38
    Installing the GPFS man pages on Linux nodes  38
    Installing the GPFS software packages on Linux nodes  38
    Verifying the GPFS installation on SLES and Red Hat Enterprise Linux nodes  39
    Verifying the GPFS installation on Debian Linux nodes  40
  Building the GPFS portability layer on Linux nodes  40
    Using the automatic configuration tool to build the GPFS portability layer on Linux nodes  40
  For Linux on System z: Changing the kernel settings  41

Accessibility features for GPFS  43
  Accessibility features  43
  Keyboard navigation  43
  IBM and accessibility  43

Notices  45
  Trademarks  46

Glossary  49

Index  55
Within this documentation update, a vertical line (|) to the left of the text indicates technical changes or
additions made to the previous edition of the information.
Specifically, this Documentation Update provides new and changed information for the following chapters
of the previously-published GPFS Version 4 Release 1.0.4 Concepts, Planning, and Installation Guide:
v “Planning for GPFS” on page 3
v “Installing GPFS on Linux nodes” on page 35
During configuration, GPFS requires you to specify several operational parameters that reflect your
hardware resources and operating environment. During file system creation, you can specify parameters
that are based on the expected size of the files or you can let the default values take effect.
Hardware requirements
You can validate that your hardware meets GPFS requirements by taking the steps outlined in this topic.
1. Consult the GPFS FAQ in IBM Knowledge Center (www.ibm.com/support/knowledgecenter/
SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html) for the latest list of:
v Supported server hardware
v Tested disk configurations
v Maximum cluster size
2. Provide enough disks to contain the file system. Disks can be:
v SAN-attached to each node in the cluster
v Attached to one or more NSD servers
v A mixture of directly-attached disks and disks that are attached to NSD servers
Refer to “Network Shared Disk (NSD) creation considerations” on page 17 for additional information.
3. When doing network-based NSD I/O, GPFS passes a large amount of data between its daemons. For
NSD server-to-client traffic, it is suggested that you configure a dedicated high-speed network solely
for GPFS communications when the following are true:
v There are NSD disks configured with servers providing remote disk capability.
v Multiple GPFS clusters are sharing data using NSD network I/O.
Refer to the GPFS: Advanced Administration Guide for additional information.
GPFS communications require static IP addresses for each GPFS node. IP address takeover operations that
transfer the address to another computer are not allowed for the GPFS network. Other IP addresses
within the same computer that are not used by GPFS can participate in IP takeover. To provide
availability or additional performance, GPFS can use virtual IP addresses created by aggregating several
network adapters using techniques such as EtherChannel or channel bonding.
Note: All nodes in a cluster must have the same edition installed.
Recoverability considerations
Sound file system planning requires several decisions about recoverability. After you make these
decisions, GPFS parameters enable you to create a highly-available file system with rapid recovery from
failures.
v At the disk level, consider preparing disks for use with your file system by specifying failure groups
that are associated with each disk. With this configuration, information is not vulnerable to a single
point of failure. See “Network Shared Disk (NSD) creation considerations” on page 17.
v At the file system level, consider replication through the metadata and data replication parameters. See
“File system replication parameters” on page 27.
Additionally, GPFS provides several layers of protection against failures of various types:
1. “Node failure”
2. “Network Shared Disk server and disk failure” on page 8
3. “Reduced recovery time using Persistent Reserve” on page 10
Node failure
In the event of a node failure, GPFS:
v Prevents the continuation of I/O from the failing node
v Replays the file system metadata log for the failed node
GPFS prevents the continuation of I/O from a failing node through a GPFS-specific fencing mechanism
called disk leasing. When a node has access to file systems, it obtains disk leases that allow it to submit
I/O. However, when a node fails, that node cannot obtain or renew a disk lease. When GPFS selects
another node to perform recovery for the failing node, it first waits until the disk lease for the failing
node expires. This allows for the completion of previously submitted I/O and provides for a consistent
file system metadata log. Waiting for the disk lease to expire also avoids data corruption in the
subsequent recovery step.
To reduce the amount of time it takes for disk leases to expire, you can use Persistent Reserve (SCSI-3
protocol). If Persistent Reserve (configuration parameter: usePersistentReserve) is enabled, GPFS prevents
the continuation of I/O from a failing node by fencing the failed node using a feature of the disk
subsystem called Persistent Reserve. Persistent Reserve allows the failing node to recover faster because
GPFS does not need to wait for the disk lease on the failing node to expire. For additional information,
refer to “Reduced recovery time using Persistent Reserve” on page 10. For further information about
recovery from node failure, see the GPFS: Problem Determination Guide.
File system recovery from node failure should not be noticeable to applications running on other nodes.
The only noticeable effect may be a delay in accessing objects that were being modified on the failing
node when it failed. Recovery involves rebuilding metadata structures which may have been under
modification at the time of the failure. If the failing node is acting as the file system manager when it
fails, the delay will be longer and proportional to the level of activity on the file system at the time of
failure. In this case, the file system manager role fails over automatically to a surviving node.
Quorum
GPFS uses a cluster mechanism called quorum to maintain data consistency in the event of a node
failure.
Quorum operates on the principle of majority rule. This means that a majority of the nodes in the cluster
must be successfully communicating before any node can mount and access a file system. This keeps any
nodes that are cut off from the cluster (for example, by a network failure) from writing data to the file
system.
GPFS quorum must be maintained within the cluster for GPFS to remain active. If the quorum semantics
are broken, GPFS performs recovery in an attempt to achieve quorum again. GPFS can use one of two
methods for determining quorum:
v Node quorum
v Node quorum with tiebreaker disks
Node quorum: Node quorum is the default quorum algorithm for GPFS. With node quorum:
v Quorum is defined as one plus half of the explicitly defined quorum nodes in the GPFS cluster.
v There are no default quorum nodes; you must specify which nodes have this role.
For example, in Figure 1, there are three quorum nodes. In this configuration, GPFS remains active as
long as there are two quorum nodes available.
[Figure 1: GPFS cluster with three quorum nodes and the file system /gpfs1]
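For example, assuming three nodes named node1, node2, and node3 (hypothetical names), the quorum role could be assigned with the mmchnode command:
mmchnode --quorum -N node1,node2,node3
The mmlscluster command can then be used to confirm the designation of each node.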
Node quorum with tiebreaker disks: When running on small GPFS clusters, you might want to have
the cluster remain online with only one surviving node. To achieve this, you need to add a tiebreaker
disk to the quorum configuration. Node quorum with tiebreaker disks allows you to run with as little as
one quorum node available as long as you have access to a majority of the quorum disks (refer to
Figure 2 on page 8). Enabling node quorum with tiebreaker disks starts by designating one or more nodes
as quorum nodes. Then one to three disks are defined as tiebreaker disks using the tiebreakerDisks
parameter on the mmchconfig command. You can designate any disk to be a tiebreaker.
When utilizing node quorum with tiebreaker disks, there are specific rules for cluster nodes and for
tiebreaker disks.
5. If a network connection fails, which causes the loss of quorum, and quorum is maintained by
tiebreaker disks, the following rationale is used to re-establish quorum. If a group has the cluster
manager, it is the “survivor”. The cluster manager can give up its role if it communicates with fewer
than the minimum number of quorum nodes as defined by the minQuorumNodes configuration
parameter. In this case, other groups with the minimum number of quorum nodes (if they exist) can
choose a new cluster manager.
When using the cluster configuration repository (CCR) to store configuration files, the total number of
quorum nodes is limited to eight, regardless of quorum semantics, but the use of tiebreaker disks can be
enabled or disabled at any time by issuing an mmchconfig tiebreakerDisks command. The change will
take effect immediately, and it is not necessary to shut down GPFS when making this change.
When using the traditional server-based (non-CCR) configuration repository, it is possible to define more
than eight quorum nodes, but only when no tiebreaker disks are defined:
1. To configure more than eight quorum nodes under the server-based (non-CCR) configuration
repository, you must disable node quorum with tiebreaker disks and restart the GPFS daemon. To
disable node quorum with tiebreaker disks:
a. Issue the mmshutdown -a command to shut down GPFS on all nodes.
b. Change quorum semantics by issuing mmchconfig tiebreakerDisks=no.
c. Add additional quorum nodes.
d. Issue the mmstartup -a command to restart GPFS on all nodes.
2. If you remove quorum nodes and the new configuration has fewer than eight quorum nodes, you can
change the configuration to node quorum with tiebreaker disks. To enable quorum with tiebreaker
disks:
a. Issue the mmshutdown -a command to shut down GPFS on all nodes.
b. Delete the appropriate quorum nodes or run mmchnode --nonquorum to drop them to a client.
c. Change quorum semantics by issuing the mmchconfig tiebreakerDisks="diskList" command (see the example following this procedure).
v The diskList contains the names of the tiebreaker disks.
v The list contains the NSD names of the disks, preferably one or three disks, separated by a
semicolon (;) and enclosed by quotes.
d. Issue the mmstartup -a command to restart GPFS on all nodes.
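As an illustration of step 2c, assuming three existing NSDs named nsd1, nsd2, and nsd3 (hypothetical names), the tiebreaker disks could be defined as follows:
mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"
To return to regular node quorum later, issue mmchconfig tiebreakerDisks=no. Under the server-based (non-CCR) repository, both changes require GPFS to be shut down first, as described above; with CCR they take effect immediately.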
In Figure 2 on page 8 GPFS remains active with the minimum of a single available quorum node and two
available tiebreaker disks.
[Figure 2: GPFS cluster with three NSDs defined as tiebreaker disks and the file system /gpfs1]
When a quorum node detects loss of network connectivity, but before GPFS runs the algorithm that
decides if the node will remain in the cluster, the tiebreakerCheck event is triggered. This event is
generated only in configurations that use quorum nodes with tiebreaker disks. It is also triggered on the
cluster manager periodically by a challenge-response thread to verify that the node can still continue as
cluster manager.
In the event of a disk failure in which GPFS can no longer read or write to the disk, GPFS will
discontinue use of the disk until it returns to an available state. You can guard against loss of data
availability from disk failure by:
v Utilizing hardware data protection as provided by a Redundant Array of Independent Disks (RAID)
device (see Figure 3)
Figure 3. An example of a highly available SAN configuration for a GPFS file system (storage with dual RAID controllers serving /gpfs1)
v Utilizing the GPFS data and metadata replication features (see Increased data availability) along with
the designation of failure groups (see “Network Shared Disk (NSD) creation considerations” on page
17 and Figure 4)
It is suggested that you consider RAID as the first level of redundancy for your data and add GPFS
replication if you desire additional protection.
In the event of an NSD server failure in which a GPFS client can no longer contact the node that provides
remote access to a disk, GPFS discontinues the use of the disk. You can guard against loss of NSD server availability by using common disk connectivity on multiple NSD server nodes and specifying multiple Network Shared Disk servers for each common disk.
Note: In the event that a path to a disk fails, GPFS reports a disk failure and marks the disk down. To
bring the disk back online, first follow the directions supplied by your storage vendor to determine and
repair the failure.
You can guard against loss of data availability from failure of a path to a disk by doing the following:
v Creating multiple NSD servers for each disk
As GPFS determines the available connections to disks in the file system, it is recommended that you
always define more than one NSD server for each disk. GPFS allows you to define up to eight NSD
servers for each NSD. In a SAN configuration where NSD servers have also been defined, if the
physical connection is broken, GPFS dynamically switches to the next available NSD server (as defined
on the server list) and continues to provide data. When GPFS discovers that the path has been
repaired, it moves back to local disk access. This is the default behavior, which can be changed by
designating file system mount options. For example, if you never want a node to use the NSD server
path to a disk, even if the local path fails, you can set the -o useNSDserver mount option to never.
You can set the mount option using the mmchfs, mmmount, mmremotefs, and mount commands.
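For example, to mount a file system without ever falling back to the NSD server path when the local SAN path fails, a command along the following lines could be used (gpfs1 is a hypothetical device name):
mmmount gpfs1 -o useNSDserver=never
As noted above, the same option can also be recorded persistently with the mmchfs or mmremotefs command; other values of useNSDserver allow or limit switching between local and NSD server access.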
Important: In Linux on System z®, it is mandatory to have multiple paths to one SCSI disk (LUN) to avoid a single point of failure. The coalescing of the paths to one disk is done by the kernel (via the
device-mapper component). As soon as the paths are coalesced, a new logical, multipathed device is
created, which is used for any further (administration) task. (The single paths can no longer be used.)
The multipath device interface name depends on the distribution and is configurable:
SUSE /dev/mapper/Unique_WW_Identifier
For example: /dev/mapper/36005076303ffc56200000000000010cc
Red Hat
/dev/mapper/mpath*
To obtain information about a multipathed device, use the multipath tool as shown in the following
example:
# multipath -ll
You must explicitly enable PR using the usePersistentReserve option of the mmchconfig command. If
you set usePersistentReserve=yes, GPFS attempts to set up PR on all of the PR capable disks. All
subsequent NSDs are created with PR enabled if they are PR capable. However, PR is only supported in
the home cluster. Therefore, access to PR-enabled disks from another cluster must be through an NSD
server that is in the home cluster and not directly to the disk (for example, through a SAN).
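A minimal sketch of enabling Persistent Reserve, assuming GPFS can be stopped cluster-wide while the setting is changed:
mmshutdown -a
mmchconfig usePersistentReserve=yes
mmstartup -a
mmlsnsd -X
The mmlsnsd -X output should indicate, in its Remarks column, which disks have PR enabled.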
Table 1 details the cluster creation options, how to change the options, and the default values for each
option.
Table 1. GPFS cluster creation options

“Nodes in your GPFS cluster” on page 12
    Command to change the option: mmaddnode, mmdelnode. Default value: None.
Node designation: manager or client (see “Nodes in your GPFS cluster” on page 12)
    Command to change the option: mmchnode. Default value: client.
Node designation: quorum or nonquorum (see “Nodes in your GPFS cluster” on page 12)
    Command to change the option: mmchnode. Default value: nonquorum.
Primary cluster configuration server (see “GPFS cluster configuration servers” on page 13)
    Command to change the option: mmchcluster. Default value: None.
Secondary cluster configuration server (see “GPFS cluster configuration servers” on page 13)
    Command to change the option: mmchcluster. Default value: None.
“Remote shell command” on page 13
    Command to change the option: mmchcluster. Default value: /usr/bin/rsh.
“Remote file copy command” on page 14
    Command to change the option: mmchcluster. Default value: /usr/bin/rcp.
“Cluster name” on page 14
    Command to change the option: mmchcluster. Default value: the node name of the primary GPFS cluster configuration server.
GPFS administration adapter port name (see “GPFS node adapter interface names” on page 11)
    Command to change the option: mmchnode. Default value: same as the GPFS communications adapter port name.
GPFS communications adapter port name (see “GPFS node adapter interface names” on page 11)
    Command to change the option: mmchnode. Default value: None.
“User ID domain for the cluster” on page 14
    Command to change the option: mmchconfig. Default value: the name of the GPFS cluster.
“Starting GPFS automatically” on page 14
    Command to change the option: mmchconfig. Default value: no.
“Cluster configuration file” on page 15
    Command to change the option: not applicable. Default value: None.
These names can be specified by means of the node descriptors passed to the mmaddnode or
mmcrcluster command and can later be changed with the mmchnode command.
If multiple adapters are available on a node, this information can be communicated to GPFS by means of
the subnets parameter on the mmchconfig command.
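For example, if the nodes also have interfaces on a faster private network, say 192.168.2.0 (a hypothetical subnet), GPFS could be directed to prefer it for daemon-to-daemon traffic:
mmchconfig subnets="192.168.2.0"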
Follow these rules when adding nodes to your GPFS cluster:
v While a node may mount file systems from multiple clusters, the node itself may only reside in a
single cluster. Nodes are added to a cluster using the mmcrcluster or mmaddnode command.
v The nodes must be available when they are added to a cluster. If any of the nodes listed are not
available when the command is issued, a message listing those nodes is displayed. You must correct
the problem on each node and then issue the mmaddnode command to add those nodes.
v Designate at least one but not more than seven nodes as quorum nodes. When not using tiebreaker
disks, you can designate more quorum nodes, but it is recommended to use fewer than eight if
possible. It is recommended that you designate the cluster configuration servers as quorum nodes.
How many quorum nodes altogether you will have depends on whether you intend to use the node
quorum with tiebreaker algorithm or the regular node based quorum algorithm. For more details, see
“Quorum” on page 5.
The default remote shell command is rsh. You can designate the use of a different remote shell command
by specifying its fully-qualified path name on the mmcrcluster command or the mmchcluster command.
The remote shell command must adhere to the same syntax as rsh, but it can implement an alternate
authentication mechanism.
Clusters that include both UNIX and Windows nodes must use ssh for the remote shell command. For
more information, see Installing and configuring OpenSSH.
Clusters that only include Windows nodes may use the mmwinrsh utility that comes with GPFS. The
fully-qualified path name is /usr/lpp/mmfs/bin/mmwinrsh. For more information about configuring
Windows GPFS clusters, see the topic that discusses the mmwinservctl command in the GPFS:
Administration and Programming Reference.
By default, you can issue GPFS administration commands from any node in the cluster. Optionally, you
can choose a subset of the nodes that are capable of running administrative commands. In either case, the
nodes that you plan to use for administering GPFS must be able to run remote shell commands on any
other node in the cluster as user root without the use of a password and without producing any
extraneous messages.
For additional information, see the topic that discusses requirements for administering a GPFS file system
in the GPFS: Administration and Programming Reference.
The default remote file copy program is rcp. You can designate the use of a different remote file copy
command by specifying its fully-qualified path name on the mmcrcluster command or the mmchcluster
command. The remote file copy command must adhere to the same syntax as rcp, but it can implement
an alternate authentication mechanism. Many clusters use scp instead of rcp, as rcp cannot be used in a
cluster that contains Windows Server nodes.
Clusters that include both UNIX and Windows nodes must use scp for the remote copy command. For
more information, see Installing and configuring OpenSSH.
Clusters that only include Windows nodes may use the mmwinrcp utility that comes with GPFS. The
fully-qualified path name is /usr/lpp/mmfs/bin/mmwinrcp. For more information about configuring
Windows GPFS clusters, see the topic that discusses the mmwinservctl command in the GPFS:
Administration and Programming Reference.
The nodes that you plan to use for administering GPFS must be able to copy files using the remote file
copy command to and from any other node in the cluster without the use of a password and without
producing any extraneous messages.
For additional information, see “Requirements for administering a GPFS file system” in the GPFS:
Administration and Programming Reference.
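For example, a cluster could be switched from the rsh and rcp defaults to ssh and scp as follows:
mmchcluster -r /usr/bin/ssh -R /usr/bin/scp
The same values can be supplied at cluster creation time through the corresponding mmcrcluster options.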
Cluster name
Provide a name for the cluster by issuing the -C option on the mmcrcluster command. If the
user-provided name contains dots, it is assumed to be a fully qualified domain name. Otherwise, to make
the cluster name unique in a multiple cluster environment, GPFS appends the domain name of the
primary cluster configuration server. If the -C option is not specified, the cluster name defaults to the
hostname of the primary cluster configuration server. The name of the cluster may be changed at a later
time by issuing the -C option on the mmchcluster command.
The cluster name is applicable when GPFS file systems are mounted by nodes belonging to other GPFS
clusters. See the mmauth and the mmremotecluster commands.
Whether or not GPFS automatically starts is determined using the autoload parameter of the
mmchconfig command. The default is not to automatically start GPFS on all nodes. You may change this
by specifying autoload=yes using the mmchconfig command. This eliminates the need to start GPFS by
issuing the mmstartup command when a node is booted.
The autoload parameter can be set the same or differently for each node in the cluster. For example, it
may be useful to set autoload=no on a node that is undergoing maintenance since operating system
upgrades and other software can often require multiple reboots to be completed.
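For example, to start GPFS automatically on all nodes but leave one node that is undergoing maintenance alone (node7 is a hypothetical name):
mmchconfig autoload=yes
mmchconfig autoload=no -N node7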
Cluster configuration file
GPFS provides default configuration values, so a cluster configuration file is not required to create a
cluster.
This optional file can be useful if you already know the correct parameter values based on previous
testing or if you are restoring a cluster and have a backup copy of configuration values that apply to
most systems. Typically, however, this option is not used at cluster creation time, and configuration
parameters are modified after the cluster is created (using the mmchconfig command).
The full text of the Licensing Agreement is provided with the installation media and can be found at the
IBM Software license agreements website (www.ibm.com/software/sla/sladb.nsf).
The type of license that is associated with any one node depends on the functional roles that the node
has been designated to perform.
GPFS Client license
The GPFS Client license permits exchange of data between nodes that locally mount the same
GPFS file system. No other export of the data is permitted. The GPFS Client may not be used for
nodes to share GPFS data directly through any application, service, protocol or method, such as
Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol (FTP), or
Hypertext Transfer Protocol (HTTP). For these functions, a GPFS Server license would be
required.
GPFS FPO license
The GPFS FPO license permits the licensed node to perform NSD server functions for sharing
GPFS data with other nodes that have a GPFS FPO or GPFS server license. This license cannot be
used to share data with nodes that have a GPFS client license or non-GPFS nodes.
GPFS server license
The GPFS server license permits the licensed node to perform GPFS management functions such
as cluster configuration manager, quorum node, manager node, and Network Shared Disk (NSD)
server. In addition, the GPFS Server license permits the licensed node to share GPFS data directly
through any application, service protocol or method such as NFS, CIFS, FTP, or HTTP.
These licenses are all valid for use in the GPFS Express Edition, GPFS Standard Edition, and GPFS
Advanced Edition.
The GPFS license designation is achieved by issuing the appropriate mmchlicense command. The
number and type of licenses currently in effect for the cluster can be viewed using the mmlslicense
command. See the GPFS: Administration and Programming Reference for more information about these
commands.
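As a sketch, assuming nodes ns1 and ns2 act as NSD servers and node c1 only mounts the file system locally (hypothetical node names), the licenses could be designated and then verified as follows:
mmchlicense server --accept -N ns1,ns2
mmchlicense client --accept -N c1
mmlslicense -L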
Disk considerations
Designing a proper storage infrastructure for your GPFS file systems is key to achieving performance and
reliability goals. When deciding what disk configuration to use, you should consider three key areas:
infrastructure, performance, and disk access method.
Infrastructure
v Ensure that you have sufficient disks to meet the expected I/O load. In GPFS terminology, a
disk may be a physical disk or a RAID device.
– Placing files in a specific storage pool when the files are created
– Migrating files from one storage pool to another
– File deletion based on file characteristics
See the GPFS: Advanced Administration Guide for more information.
On Windows, GPFS will only create NSDs from empty disk drives. mmcrnsd accepts Windows Basic disks or Unknown/Not Initialized disks. It always re-initializes these disks so that they become Basic GPT Disks
with a single GPFS partition. NSD data is stored in GPFS partitions. This allows other operating system
components to recognize that the disks are used. mmdelnsd deletes the partition tables created by
mmcrnsd.
A new NSD format was introduced with GPFS 4.1. The new format is referred to as NSD v2, and the old
format is referred to as NSD v1. The NSD v1 format is compatible with GPFS releases prior to 4.1. The
latest GPFS release recognizes both NSD v1 and NSD v2 formatted disks.
Administrators do not need to select one format or the other when managing NSDs. GPFS will always
create and use the correct format based on the minReleaseLevel for the cluster and the file system
version. When minReleaseLevel (as reported by mmlsconfig) is less than 4.1.0.0, mmcrnsd will only
create NSD v1 formatted disks. When minReleaseLevel is at least 4.1.0.0, mmcrnsd will only create NSD
v2 formatted disks. In this second case, however, the NSD format may change dynamically when the
NSD is added to a file system so that the NSD is compatible with the file system version.
On Linux, NSD v2 formatted disks include a GUID Partition Table (GPT) with a single partition. The GPT
allows other operating system utilities to recognize when a disk is owned by GPFS, which helps prevent
inadvertent data corruption. After running mmcrnsd, Linux utilities like parted can show the partition
table. When an NSD v2 formatted disk is added to a 3.5 or older file system, its format is changed to
NSD v1 and the partition table is converted to an MBR (MS-DOS compatible) type.
The mmcrnsd command expects as input a stanza file. For details, see the following GPFS: Administration
and Programming Reference topics:
v “Stanza files”
v “mmchdisk command”
v “mmchnsd command”
v “mmcrfs command”
v “mmcrnsd command”
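As a sketch only, a stanza file describing a single NSD might resemble the following; the device path, NSD name, and server names are hypothetical:
%nsd:
  device=/dev/dm-3
  nsd=nsd1
  servers=server1,server2
  usage=dataAndMetadata
  failureGroup=1
  pool=system
The file would then be passed to the command, for example as mmcrnsd -F /tmp/nsd.stanza.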
You should consider what you want as the default behavior for switching between local access and NSD
server access in the event of a failure. To set this configuration, use the -o useNSDserver file system
mount option of the mmmount, mount, mmchfs, and mmremotefs commands to:
v Specify the disk discovery behavior
v Limit or eliminate switching from either:
– Local access to NSD server access
– NSD server access to local access
You should consider specifying how long to wait for an NSD server to come online before allowing a file
system mount to fail because the server is not available. The mmchconfig command has these options:
nsdServerWaitTimeForMount
When a node is trying to mount a file system whose disks depend on NSD servers, this option
specifies the number of seconds to wait for those servers to come up. If a server recovery is
taking place, the wait time you are specifying with this option starts after recovery completes.
Notes:
1. When a node rejoins a cluster, it resets all the failure times it knew about within that cluster.
2. Because a node that rejoins a cluster resets its failure times within that cluster, the NSD server
failure times are also reset.
3. When a node attempts to mount a file system, GPFS checks the cluster formation criteria first.
If that check falls outside the window, it will then check for NSD server fail times being in the
window.
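For example, to wait up to five minutes for NSD servers before a mount is allowed to fail (an illustrative value, not a recommendation):
mmchconfig nsdServerWaitTimeForMount=300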
File system descriptor quorum
A GPFS structure called the file system descriptor is initially written to every disk in the file system and is
replicated on a subset of the disks as changes to the file system occur, such as the adding or deleting of
disks. Based on the number of failure groups and disks, GPFS creates one to five replicas of the
descriptor:
v If there are at least five different failure groups, five replicas are created.
v If there are at least three different disks, three replicas are created.
v If there are only one or two disks, a replica is created on each disk.
Once it decides how many replicas to create, GPFS picks disks to hold the replicas, so that all replicas are
in different failure groups, if possible, to reduce the risk of multiple failures. In picking replica locations,
the current state of the disks is taken into account. Stopped or suspended disks are avoided. Similarly,
when a failed disk is brought back online, GPFS might rebalance the file system descriptors in order to
assure reliability across the failure groups. The disks used to hold the file system descriptor replicas can
be seen by running the mmlsdisk fsname -L command and looking for the string desc in the Remarks
column.
GPFS requires that a majority of the replicas on the subset of disks remain available to sustain file system
operations:
v If there are at least five different replicas, GPFS can tolerate a loss of two of the five replicas.
v If there are at least three replicas, GPFS can tolerate a loss of one of the three replicas.
v If there are fewer than three replicas, a loss of one replica might make the descriptor inaccessible.
The loss of all disks in a disk failure group might cause a majority of file system descriptors to become unavailable and inhibit further file system operations. For example, if your file system is backed by
three or more disks that are assigned to two separate disk failure groups, one of the failure groups will
be assigned two of the file system descriptor replicas, while the other failure group will be assigned only
one replica. If all of the disks in the disk failure group that contains the two replicas were to become
unavailable, the file system would also become unavailable. To avoid this particular scenario, you might
want to introduce a third disk failure group consisting of a single disk that is designated as a descOnly
disk. This disk would exist solely to contain a replica of the file system descriptor (that is, it would not
contain any file system metadata or data). This disk should be at least 128MB in size.
For more information on this topic, see “Network Shared Disk (NSD) creation considerations” on page 17
and the topic "Establishing disaster recovery for your GPFS cluster" in the GPFS: Advanced Administration
Guide.
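As an illustration of that layout, the third failure group could consist of a single small disk whose NSD stanza specifies usage=descOnly; all names below are hypothetical:
%nsd:
  device=/dev/sdq
  nsd=descnsd1
  servers=server3
  usage=descOnly
  failureGroup=3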
Preparing your environment for use of extended count key data (ECKD) devices
If your GPFS cluster includes Linux on System z instances, do not use virtual reserve/release. Instead,
follow the process described in Sharing DASD without Using Virtual Reserve/Release
(www.ibm.com/support/knowledgecenter/SSB27U_6.3.0/com.ibm.zvm.v630.hcpa5/hcpa5259.htm). Data
integrity is handled by GPFS itself.
To prepare an ECKD device for GPFS, complete these steps on a single node:
1. Ensure that the ECKD device is online. To set it online, issue the following command:
chccwdev -e device_bus_id
2. Format the DASD by using the dasdfmt command.
Note: GPFS supports ECKD disks in either compatible disk layout (CDL) format or Linux disk layout (LDL) format. The DASD must be formatted with a block size of 4096.
v To specify CDL format, issue the following command:
dasdfmt -d cdl device
There is no need to specify a block size value, as the default value is 4096.
v To specify LDL format, issue the following command:
dasdfmt -d ldl device
There is no need to specify a block size value, as the default value is 4096.
In both of these commands, device is the device node. For example:
dasdfmt -d cdl /dev/dasda
| 3. This step is for CDL disks only. It is an optional step because partitioning is optional for CDL disks.
| If you wish to partition the ECKD device and create a single partition that spans the entire device, use
| the following command:
| fdasd -a device
| Notes:
| v This step is not required for LDL disks because the dasdfmt -d ldl command issued in the previous
| step automatically creates a single Linux partition on the disk.
| v If a CDL disk is partitioned, the partition name should be specified in the stanza input file for
| mmcrnsd. If a CDL disk is not partitioned, the disk name should be specified in the stanza input
| file.
| For more information about all of these commands, see the following:
v “Commands for Linux on System z” topic in Device Drivers, Features, and Commands
(www.ibm.com/support/knowledgecenter/api/content/linuxonibm/liaaf/lnz_r_dd.html) in the Linux
on System z library overview.
| v “Getting started with Elastic Storage for Linux on System z based on GPFS technology” white paper,
| available on the Welcome Page for GPFS in IBM Knowledge Center (www.ibm.com/support/
| knowledgecenter/SSFKCN/gpfs_welcome.html)
After preparing the environment, set the ECKD devices online on the other nodes.
Note: In the case of an ECKD device that is attached to two servers and is online on both, if you format or partition the disk on one of the servers, the change is not automatically visible on the other server. You must refresh the information about the disk on the other server; one way to do this is to set the disk offline and then online again.
Always ensure that the ECKD devices are online before starting GPFS. To automatically set ECKD
devices online at system start, see the documentation for your Linux distribution.
Each of these factors can help you to determine how much disk resource to devote to the file system,
which block size to choose, where to store data and metadata, and how many replicas to maintain. For
the latest supported file system size, see the GPFS FAQ in IBM Knowledge Center (www.ibm.com/
support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html).
Your GPFS file system is created by issuing the mmcrfs command. Table 2 details the file system creation
options specified on the mmcrfs command, which options can be changed later with the mmchfs
command, and what the default values are.
To move an existing file system into a new GPFS cluster, see Exporting file system definitions between
clusters in the GPFS: Advanced Administration Guide.
Table 2. File system creation options

Device name of the file system (see “Device name of the file system” on page 24)
    mmcrfs: X. mmchfs: X. Default value: none.
DiskDesc for each disk in your file system
    Note: The use of disk descriptors is discouraged.
    mmcrfs: X. mmchfs: issue the mmadddisk or mmdeldisk command to add or delete disks from the file system. Default value: none.
Table 2. File system creation options (continued)

-t DriveLetter
    See “Windows drive letter” on page 28.
Note: If your cluster includes Windows nodes, the file system name should be no longer than 31
characters.
This can be changed at a later time by using the -A option on the mmchfs command.
Considerations:
1. GPFS mount traffic may be lessened by using the automount feature to mount the file system when it
is first accessed instead of at GPFS startup. Automatic mounts only produce additional control traffic
at the point that the file system is first used by an application or user. Mounts at GPFS startup on the
other hand produce additional control traffic at every GPFS startup. Thus startup of hundreds of
nodes at once may be better served by using automatic mounts.
2. Automatic mounts will fail if the node does not have the operating system's automount support enabled for the file system.
3. When exporting file systems for NFS mounts, it may be useful to mount the file system when GPFS
starts.
Block size
The size of data blocks in a file system can be specified at file system creation by using the -B option on
the mmcrfs command or allowed to default to 256 KB. This value cannot be changed without re-creating
the file system.
GPFS supports these block sizes for file systems: 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, 4 MB, 8
MB and 16 MB (for GPFS Native RAID only). This value should be specified with the character K or M
as appropriate, for example: 512K or 4M. You should choose the block size based on the application set
that you plan to support and whether you are using RAID hardware.
The --metadata-block-size option on the mmcrfs command allows a different block size to be specified
for the system storage pool, provided its usage is set to metadataOnly. This can be especially beneficial if
the default block size is larger than 1 MB. Valid values are the same as those listed for the -B option.
If you plan to use RAID devices in your file system, a larger block size may be more effective and help
avoid the penalties involved in small block write operations to RAID devices. For example, in a RAID
configuration using 4 data disks and 1 parity disk (a 4+P RAID 5 configuration), which uses a 64 KB
stripe size, the optimal file system block size would be an integral multiple of 256 KB (4 data disks × 64
KB stripe size = 256 KB). A block size of an integral multiple of 256 KB results in a single data write that
encompasses the 4 data disks and a parity-write to the parity disk. If a block size smaller than 256 KB,
such as 64 KB, is used with the same RAID configuration, write performance is degraded by the
read-modify-write behavior. A 64 KB block size results in a single disk writing 64 KB and a subsequent
read from the three remaining disks in order to compute the parity that is then written to the parity disk.
The extra read degrades performance.
The choice of block size also affects the performance of certain metadata operations, in particular, block
allocation performance. The GPFS block allocation map is stored in blocks, similar to regular files. When
the block size is small:
v It takes more blocks to store a given amount of data resulting in additional work to allocate those
blocks
v One block of allocation map data contains less information
Note: The choice of block size is particularly important for large file systems. For file systems larger than
100 TB, you should use a block size of at least 256 KB.
The block size is the largest contiguous amount of disk space allocated to a file and therefore the largest
amount of data that can be accessed in a single I/O operation. The subblock is the smallest unit of disk
space that can be allocated. For a block size of 256 KB, GPFS reads as much as 256 KB of data in a single
I/O operation and small files can occupy as little as 8 KB of disk space.
When -S relatime is specified, the file access time is updated only if the existing access time is older than
the value of the atimeDeferredSeconds configuration attribute or the existing file modification time is
greater than the existing access time.
mtime values
mtime is a standard file attribute that represents the time when the file was last modified. The -E
parameter controls when the mtime is updated. The default is -E yes, which results in standard interfaces
including the stat() and fstat() calls reporting exact mtime values. Specifying -E no results in the stat()
and fstat() calls reporting the mtime value available at the completion of the last sync period. This may
result in the calls not always reporting the exact mtime. Setting -E no can affect backup operations that
rely on last modified time or the operation of policies using the MODIFICATION_TIME file attribute.
For more information, see the topic, Exceptions to the Open Group technical standards in the GPFS:
Administration and Programming Reference.
When allocating blocks for a given file, GPFS first uses a round-robin algorithm to spread the data across
all of the disks in the file system. After a disk is selected, the location of the data block on the disk is
determined by the block allocation map type.
This parameter for a given file system is specified at file system creation by using the -j option on the
mmcrfs command, or allowing it to default. This value cannot be changed after the file system has been
created.
Avoid specifying nfs4 or all unless files will be exported to NFS V4 clients or the file system will be
mounted on Windows.
Strict replication
Strict replication means that data or metadata replication is performed at all times, according to the
replication parameters specified for the file system. If GPFS cannot perform the file system's replication,
an error is returned. These are the choices:
no Strict replication is not enforced. GPFS tries to create the needed number of replicas, but returns
an errno of EOK if it can allocate at least one replica.
whenpossible
Strict replication is enforced if the disk configuration allows it. If the number of failure groups is
insufficient, strict replication is not enforced. This is the default value.
always
Indicates that strict replication is enforced.
The use of strict replication can be specified at file system creation by using the -K option on the mmcrfs
command. The default is whenpossible. This value can be changed using the mmchfs command.
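For example, to require strict replication on an existing file system (gpfs2 is a hypothetical device name):
mmchfs gpfs2 -K always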
Metadata and data replication are specified independently. Each has a default replication factor of 1 (no
replication) and a maximum replication factor with a default of 2. Although replication of metadata is
less costly in terms of disk space than replication of file data, excessive replication of metadata also
affects GPFS efficiency because all metadata replicas must be written. In general, more replication uses
more space.
If you want to change the data replication factor for the entire file system, the disks in each storage pool must span a number of failure groups equal to or greater than the replication factor. For example,
you will get a failure with error messages if you try to change the replication factor for a file system to 2
but the storage pool has only one failure group.
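As a sketch, raising the default data replication of an existing file system to 2 and applying it to files that already exist might look like the following (gpfs2 is a hypothetical device name, and the maximum replication factor is assumed to already be at least 2):
mmchfs gpfs2 -r 2
mmrestripefs gpfs2 -R
The mmrestripefs -R step rewrites existing file data to match the new replication settings and can generate significant I/O.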
When creating a GPFS file system, over-estimate the number of nodes that will mount the file system.
This input is used in the creation of GPFS data structures that are essential for achieving the maximum
degree of parallelism in file system operations (see GPFS architecture). Although a larger estimate
consumes a bit more memory, insufficient allocation of these data structures can limit the ability to
process certain parallel requests efficiently, such as the allotment of disk space to a file. If you cannot
predict the number of nodes, allow the default value to be applied. Specify a larger number if you expect
to add nodes, but avoid wildly overestimating as this can affect buffer operations.
You can change the number of nodes using the mmchfs command. Changing this value affects storage
pools created after the value was set; so, for example, if you need to increase this value on a storage pool,
you could change the value, create a new storage pool, and migrate the data from one pool to the other.
The number of available drive letters restricts the number of file systems that can be mounted on
Windows.
Note: Certain applications give special meaning to drive letters A:, B:, and C:, which could cause
problems if they are assigned to a GPFS file system.
Mountpoint directory
Every GPFS file system has a default mount point associated with it. This mount point can be specified
and changed with the -T option of the mmcrfs and mmchfs commands. If you do not specify a mount
point when you create the file system, GPFS will set the default mount point to /gpfs/DeviceName.
Enabling quotas
The GPFS quota system can help you control file system usage. Quotas can be defined for individual
users, groups of users, or filesets. Quotas can be set on the total number of files and the total amount of
data space consumed. When setting quota limits for a file system, the system administrator should
consider the replication factors of the file system. Quota management takes replication into account when
reporting on and determining if quota limits have been exceeded for both block and file usage. In a file
system that has either data replication or metadata replication set to a value of two, the values reported by both the mmlsquota and mmrepquota commands are double the value reported by the ls command.
Whether or not to enable quotas when a file system is mounted may be specified at file system creation
by using the -Q option on the mmcrfs command or changed at a later time by using the -Q option on the
mmchfs command. After the file system has been mounted, quota values are established by issuing the
mmedquota command and activated by issuing the mmquotaon command. The default is to not have
quotas activated.
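For example, quota enforcement could be enabled on an existing file system and a user limit then defined and reported as follows (gpfs2 and user jdoe are hypothetical, and mmchfs -Q may require the file system to be unmounted):
mmchfs gpfs2 -Q yes
mmedquota -u jdoe
mmrepquota -u gpfs2
The mmedquota command opens an editor in which the soft and hard limits are entered; mmrepquota then reports usage against those limits.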
GPFS quotas are defined by three limits that you can explicitly set using the mmedquota and
mmdefedquota commands:
Soft limit
Defines levels of disk space and files below which the user, group of users, or fileset can safely
operate.
Specified in units of kilobytes (k or K), megabytes (m or M), or gigabytes (g or G). If no suffix is
provided, the number is assumed to be in bytes.
Hard limit
Defines the maximum amount of disk space and number of files the user, group of users, or
fileset can accumulate.
Specified in units of kilobytes (k or K), megabytes (m or M), or gigabytes (g or G). If no suffix is
provided, the number is assumed to be in bytes.
Grace period
Allows the user, group of users, or fileset to exceed the soft limit for a specified period of time.
The default period is one week. If usage is not reduced to a level below the soft limit during that
time, the quota system interprets the soft limit as the hard limit and no further allocation is
allowed. The user, group of users, or fileset can reset this condition by reducing usage enough to
fall below the soft limit; or the administrator can increase the quota levels by using the mmedquota or mmdefedquota command.
Default quotas
Applying default quotas provides all new users, groups of users, or filesets with established minimum
quota limits. If default quota values are not enabled, new users, new groups, or new filesets have a quota
value of zero, which establishes no limit to the amount of space that can be used.
The .quota files are read from the root directory when mounting a file system with quotas enabled. When
these files are read, one of three possible actions takes place:
v The files contain quota information and the user wants these files to be used.
v The files contain quota information, however, the user wants different files to be used.
To specify the use of different files, the mmcheckquota command must be issued prior to the mount of
the file system.
v The files do not contain quota information. In this case the mount fails and appropriate error messages
are issued. See the GPFS: Problem Determination Guide for further information regarding mount failures.
Enable DMAPI
Whether or not the file system can be monitored and managed by the GPFS Data Management API
(DMAPI) may be specified at file system creation by using the -z option on the mmcrfs command or
changed at a later time by using the -z option on the mmchfs command. The default is not to enable
DMAPI for the file system.
For further information about DMAPI for GPFS, see the GPFS: Data Management API Guide.
Important: Using mmcrfs -v no on a disk that already belongs to a file system will corrupt that file
system.
Changing the file system format to the latest level
You can change the file system format to the latest format supported by the currently-installed level of
GPFS by issuing the mmchfs command with the -V full option or the -V compat option. The full option
enables all new functionality that requires different on-disk data structures. This may cause the file
system to become permanently incompatible with earlier releases of GPFS. The compat option enables
only changes that are backward compatible with the previous GPFS release. If all GPFS nodes that are
accessing a file system (both local and remote) are running the latest level, then it is safe to use the full option. Certain features may require you to run the mmmigratefs command to enable them.
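For example, after all nodes have been upgraded, the file system format could be brought to the latest level, and a feature that requires migration could then be enabled (gpfs2 is a hypothetical device name; --fastea is shown only as an illustration of an option that mmmigratefs accepts):
mmchfs gpfs2 -V full
mmmigratefs gpfs2 --fastea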
You can determine the inode size (-i) and subblock size (value of the -B parameter / 32) of a file system
by running the mmlsfs command. The maximum number of files in a file system may be specified at file
system creation by using the --inode-limit option on the mmcrfs command, or it may be increased at a
later time by using --inode-limit on the mmchfs command. This value defaults to the size of the file
system at creation divided by 1 MB and cannot exceed the architectural limit. When a file system is
created, 4084 inodes are used by default; these inodes are used by GPFS for internal system files.
The --inode-limit option applies only to the root fileset. When there are multiple inode spaces, use the
--inode-space option of the mmchfileset command to alter the inode limits of independent filesets. The
mmchfileset command can also be used to modify the root inode space. The --inode-space option of the
mmlsfs command shows the sum of all inode spaces.
Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never
deallocated. When setting the maximum number of inodes in a file system, there is the option to
preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default,
inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate
more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata
space that cannot be reclaimed.
These options limit the maximum number of files that may actively exist within a file system. However,
the maximum number of files in the file system may be restricted further by GPFS so the control
structures associated with each file do not consume all of the file system space.
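For example, the inode limit of an existing file system could be raised to two million inodes, with half a million of them preallocated (gpfs2 and both values are hypothetical):
mmchfs gpfs2 --inode-limit 2000000:500000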
Enter:
mmcrfs /dev/gpfs2 -F /tmp/gpfs2dsk -A yes -B 256K -n 32 -m 2 -M 2 -r 1 -R 2 -T /gpfs2
To confirm the file system configuration, issue the command:
mmlsfs /dev/gpfs2
The system displays information similar to the following:
-m 2 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j scatter Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will mount file system
-B 262144 Block size
-Q none Quotas accounting enabled
none Quotas enforced
none Default quotas enabled
--perfileset-quota yes Per-fileset quota enforcement
--filesetdf yes Fileset df enabled?
-V 14.10 (4.1.0.4) File system version
--create-time Fri Aug 8 18:39:47 2014 File system creation time
-z no Is DMAPI enabled?
-L 262144 Logfile size
-E yes Exact mtime mount option
-S yes Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea yes Fast external attributes enabled?
--encryption no Encryption enabled?
--inode-limit 2015232 Maximum number of inodes
--log-replicas 0 Number of log replicas (max 2)
--is4KAligned yes is4KAligned?
--rapid-repair yes rapidRepair enabled?
-P system Disk storage pools in file system
-d gpfs1001nsd;gpfs1002nsd Disks in file system
-A yes Automatic mount option
-o none Additional mount options
-T /gpfs2 Default mount point
--mount-priority 0 Mount priority
Before installing GPFS, you should review “Planning for GPFS” on page 3 and the GPFS FAQ in IBM
Knowledge Center (www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/
gpfs_faqs/gpfsclustersfaq.html).
Installing GPFS without ensuring that the prerequisites listed in “Hardware requirements” on page 3 and
“Software requirements” on page 4 are satisfied can lead to undesired results.
Ensure that the PATH environment variable for the root user on each node includes /usr/lpp/mmfs/bin.
(This is not required for the operation of GPFS, but it can simplify administration.)
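For example, root's PATH could be extended in the shell profile on each node (the profile file depends on the shell in use; .bash_profile is shown only as an illustration):
export PATH=$PATH:/usr/lpp/mmfs/bin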
GPFS commands operate on all nodes required to perform tasks. When you are administering a cluster, it
may be useful to have a more general form of running commands on all of the nodes. One suggested
way to do this is to use an OS utility like dsh or pdsh that can execute commands on all nodes in the
cluster. For example, you can use dsh to check the kernel version of each node in your cluster:
# dsh uname -opr
Node01: 2.6.18-128.1.14.el5 x86_64 GNU/Linux
Node02: 2.6.18-128.1.14.el5 x86_64 GNU/Linux
Once you have dsh set up, you can use it to install GPFS on a large cluster. For details about setting up
dsh or a similar utility, review the documentation for the utility.
Before installing GPFS, it is necessary to verify that you have the correct levels of the prerequisite
software installed on each node in the cluster. If the correct level of prerequisite software is not installed,
see the appropriate installation manual before proceeding with your GPFS installation.
For the most up-to-date list of prerequisite software, see the GPFS FAQ in IBM Knowledge Center
(www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/
gpfsclustersfaq.html).
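For example, on a SLES or Red Hat Enterprise Linux node you might confirm that a compiler and the kernel development packages, which are needed later to build the GPFS portability layer, are present; the package names below are typical but distribution-dependent:
rpm -q gcc make kernel-devel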
Before installing GPFS, you need to extract the RPMs from the archive.
1. Copy the self-extracting product image, gpfs_install-4.1*, from the CD-ROM to a local directory
(where * is the correct version of the product for your hardware platform and Linux distribution). For
example:
cp /media/cdrom/gpfs_install-4.1.0-0_x86_64 /tmp/gpfs_install-4.1.0-0_x86_64
2. Verify that the self-extracting program has executable permissions, for example:
# ls -l /tmp/gpfs_install-4.1.0-0_x86_64
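If the execute bit is not set, it can be added before proceeding, for example:
chmod +x /tmp/gpfs_install-4.1.0-0_x86_64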
3. Invoke the self-extracting image that you copied from the CD-ROM and accept the license agreement:
a. By default, the LAP Tool, JRE, and GPFS installation images are extracted to the target directory
/usr/lpp/mmfs/4.1.
b. The license agreement files on the media can be viewed in graphics mode or text-only mode:
v Graphics mode is the default behavior. To view the files in graphics mode, invoke
gpfs_install-4.1*. Using the graphics-mode installation requires a window manager to be
configured.
v To view the files in text-only mode, add the --text-only option. When run in text-only mode, the
output explains how to accept the agreement:
<...Last few lines of output...>
Press Enter to continue viewing the license agreement, or
enter "1" to accept the agreement, "2" to decline it, "3"
to print it, "4" to read non-IBM terms, or "99" to go back
to the previous screen.
c. You can use the --silent option to accept the license agreement automatically.
d. Use the --help option to obtain usage information from the self-extracting archive.
The following is an example of how to extract the software using text mode:
/tmp/gpfs_install-4.1.0-0_x86_64 --text-only
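If the license terms have already been reviewed, the extraction can instead be performed non-interactively with the --silent option described above, for example:
/tmp/gpfs_install-4.1.0-0_x86_64 --silent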
Upon license agreement acceptance, the GPFS product installation images are placed in the extraction
target directory (/usr/lpp/mmfs/4.1). This directory contains the following GPFS SLES and Red Hat
Enterprise Linux RPMs:
gpfs.base-4.1.*.rpm
gpfs.gpl-4.1.*.noarch.rpm
gpfs.docs-4.1.*.noarch.rpm
gpfs.msg.en_US-4.1.*.noarch.rpm
gpfs.gskit-8.0.50-16.*.rpm
gpfs.ext-4.1.*.rpm (GPFS Standard Edition and GPFS Advanced Edition only)
gpfs.crypto-4.1.*.rpm (GPFS Advanced Edition only)
In this directory there is a license subdirectory that contains license agreements in multiple languages. To
view which languages are provided, issue the following command:
# ls /usr/lpp/mmfs/4.1/license
Note: For information about restrictions pertaining to Debian Linux, see the GPFS FAQ in IBM
Knowledge Center (www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/
gpfs_faqs/gpfsclustersfaq.html).
GPFS patches (update SLES and Red Hat Enterprise Linux RPMs and Debian Linux packages) are
available from the IBM Support Portal: Downloads for General Parallel File System (www.ibm.com/
support/entry/portal/Downloads/Software/Cluster_software/General_Parallel_File_System).
The update SLES and Red Hat Enterprise Linux RPMs and Debian Linux packages are distributed in a
different form from that of the base software; they are stored in a tar file. Use the tar command to extract
the update SLES and Red Hat Enterprise Linux RPMs or Debian Linux packages into a local directory.
Recommendation: Because the base SLES or Red Hat Enterprise Linux RPMs or Debian Linux packages
must be installed completely before a patch level is installed, place the update RPMs or packages in a
separate directory from the base RPMs or packages. This simplifies installation: you can use the rpm or
dpkg command with wildcards (rpm -ivh *.rpm or dpkg -i *.deb), first on the directory containing the
base RPMs or packages, and then on the directory containing the update RPMs or packages. (No license
acceptance is required for patches, so once the contents of the tar file are extracted, the update RPMs or
packages are ready to install.)
If you are applying a patch during the initial installation of GPFS on a node, you only need to build the
portability layer once after the base and update SLES or Red Hat Enterprise Linux RPMs or Debian Linux
packages are installed.
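As a sketch of this layout (the update directory and patch tar file name shown here are illustrative only):
mkdir /tmp/gpfs-4.1.0.5-update
tar -xvf GPFS-4.1.0.5-update.tar -C /tmp/gpfs-4.1.0.5-update
The base RPMs or packages are then installed from the extraction target directory, followed by the update RPMs or packages from /tmp/gpfs-4.1.0.5-update, using the wildcard rpm or dpkg commands described above.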
You do not need to install the gpfs.docs RPM on all nodes if man pages are not desired (for example, if
local file system space on the node is minimal).
The following packages are required for SLES and Red Hat Enterprise Linux:
gpfs.base-4.1.*.rpm
gpfs.gpl-4.1.*.noarch.rpm
gpfs.msg.en_US-4.1.*.noarch.rpm
gpfs.gskit-8.0.50-16.*.rpm
gpfs.ext-4.1.*.rpm (GPFS Standard Edition and GPFS Advanced Edition only)
gpfs.crypto-4.1.*.rpm (GPFS Advanced Edition only)
The following packages are required for Debian Linux:
gpfs.base-4.1.*.deb
gpfs.gpl-4.1.*all.deb
gpfs.msg.en_US-4.1.*all.deb
gpfs.gskit-8.0.50-16.*.deb
gpfs.ext-4.1.*.deb (GPFS Standard Edition and GPFS Advanced Edition only)
gpfs.crypto-4.1.*.deb (GPFS Advanced Edition only)
Optional packages
The following package is optional for SLES and Red Hat Enterprise Linux:
gpfs.docs-4.1.*noarch.rpm
The following package is optional for Debian Linux:
gpfs.docs-4.1.*all.deb
The following package is required only if GPFS Native RAID on Power® 775 will be used:
gpfs.gnr-4.1.*.ppc64.rpm
The following packages are required (and provided) only on the IBM System x® GPFS Storage
Server (GSS):
gpfs.gnr-4.1.*.x86_64.rpm
gpfs.platform-4.1.*.x86_64.rpm
gpfs.gss.firmware-4.1.*.x86_64.rpm
To install all of the GPFS SLES or Red Hat Enterprise Linux RPMs, issue the following command:
rpm -ivh /usr/lpp/mmfs/4.1/gpfs*.rpm
To install all of the GPFS Debian Linux packages, issue the following command:
dpkg -i /usr/lpp/mmfs/4.1/gpfs*.deb
Verifying the GPFS installation on SLES and Red Hat Enterprise Linux
nodes
You can verify the installation of the GPFS SLES or Red Hat Enterprise Linux RPMs on each node. To
check that the software has been successfully installed, use the rpm command:
rpm -qa | grep gpfs
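On a node where the base packages have been installed, the output should include an entry for each installed GPFS RPM; for a 4.1.0-0 installation the list would include entries similar to the following (gpfs.docs appears only if that optional RPM was installed):
gpfs.base-4.1.0-0
gpfs.gpl-4.1.0-0
gpfs.msg.en_US-4.1.0-0
gpfs.gskit-8.0.50-16
gpfs.docs-4.1.0-0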
If you have the GPFS Standard Edition or the GPFS Advanced Edition installed, you should also see the
following line in the output:
gpfs.ext-4.1.0-0
If you have the GPFS Advanced Edition installed, you should also see the following line in the output:
gpfs.crypto-4.1.0-0
For installations that include GPFS Native RAID, you should also see the following line in the output:
gpfs.gnr-4.1.0-0
Verifying the GPFS installation on Debian Linux nodes
You can verify the installation of the GPFS Debian Linux packages on each node. To check that the
software has been successfully installed, use the dpkg command (for example, dpkg -l | grep gpfs).
If you have the GPFS Standard Edition or the GPFS Advanced Edition installed, you should also see the
following line in the output:
ii gpfs.ext 4.1.0-0 GPFS Extended Features
If you have the GPFS Advanced Edition installed, you should also see the following line in the output:
ii gpfs.crypto 4.1.0-0 GPFS Cryptographic Subsystem
The GPFS portability layer is a loadable kernel module that allows the GPFS daemon to interact with the
operating system.
Note: The GPFS kernel module should be updated any time the Linux kernel is updated. Updating the
GPFS kernel module after a Linux kernel update requires rebuilding and installing a new version of the
module.
To build the GPFS portability layer using this tool, enter the following command:
/usr/lpp/mmfs/bin/mmbuildgpl
Each kernel module is specific to a Linux version and platform. If you have multiple nodes running
exactly the same operating system level on the same platform, and only some of these nodes have a
compiler available, you can build the kernel module on one node, then create an RPM that contains the
binary module for ease of distribution.
If you choose to generate an RPM package for portability layer binaries, perform the following additional
step:
/usr/lpp/mmfs/bin/mmbuildgpl --buildrpm
When the command finishes, it displays the location of the generated RPM:
<...Last line of output...>
Wrote: /usr/src/redhat/RPMS/x86_64/gpfs.gplbin-2.6.18-128.1.14.el5-3.3.0-1.x86_64.rpm
You can then copy the generated RPM package to other machines for deployment. The generated RPM
can only be deployed to machines with identical architecture, distribution level, Linux kernel, and GPFS
maintenance level.
Note: During the package generation, temporary files are written to the /tmp/rpm directory, so be sure
there is sufficient space available. By default, the generated RPM goes to /usr/src/packages/RPMS/<arch>
for SUSE Linux Enterprise Server and /usr/src/redhat/RPMS/<arch> for Red Hat Enterprise Linux.
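For example, the generated package could be copied to and installed on another node that has the identical architecture, distribution level, Linux kernel, and GPFS maintenance level (the host name is illustrative; the RPM path is the one reported by mmbuildgpl):
scp /usr/src/redhat/RPMS/x86_64/gpfs.gplbin-2.6.18-128.1.14.el5-3.3.0-1.x86_64.rpm node02:/tmp
ssh node02 rpm -ivh /tmp/gpfs.gplbin-2.6.18-128.1.14.el5-3.3.0-1.x86_64.rpm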
Before starting GPFS, perform the following steps on each Linux on System z node:
1. In the /etc/zipl.conf file, add vmalloc=4096G user_mode=home as shown in the following example:
(10:25:41) dvtc1a:~ # cat /etc/zipl.conf
# Modified by YaST2. Last modification on Mon May 19 09:39:04 EDT 2014
[defaultboot]
defaultmenu = menu
Note: For SUSE Linux Enterprise Server (SLES) 11 and Red Hat Enterprise Linux 6, user_mode=home is
optional.
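The added options belong on the kernel parameters line of the boot section that is actually used; for example (all values other than vmalloc=4096G user_mode=home are purely illustrative):
parameters = "root=/dev/dasda1 vmalloc=4096G user_mode=home"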
2. Run the zipl command.
Note: For information about the zipl command, see the “Initial program loader for System z - zipl”
topic in Device Drivers, Features, and Commands (www.ibm.com/support/knowledgecenter/api/
content/linuxonibm/liaaf/lnz_r_dd.html) in the Linux on System z library overview.
3. Reboot the node.
Note: For more detailed information about installation and startup of GPFS on System z, see the “Getting
started with Elastic Storage for Linux on System z based on GPFS technology” white paper, available on
the Welcome Page for GPFS in IBM Knowledge Center (www.ibm.com/support/knowledgecenter/
SSFKCN/gpfs_welcome.html).
Accessibility features
The following list includes the major accessibility features in GPFS:
v Keyboard-only operation
v Interfaces that are commonly used by screen readers
v Keys that are discernible by touch but do not activate just by touching them
v Industry-standard devices for ports and connectors
v The attachment of alternative input and output devices
IBM Knowledge Center, and its related publications, are accessibility-enabled. The accessibility features
are described in IBM Knowledge Center (www.ibm.com/support/knowledgecenter).
Keyboard navigation
This product uses standard Microsoft Windows navigation keys.
Notices
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program,
or service that does not infringe any IBM intellectual property right may be used instead. However, it is
the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or
service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can send
license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
IBM Corporation
Dept. 30ZA/Building 707
Mail Station P300
2455 South Road,
Poughkeepsie, NY 12601-5400
U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment or a fee.
The licensed program described in this document and all licensed material available for it are provided
by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or
any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be the
same on generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform for
which the sample programs are written. These examples have not been thoroughly tested under all
conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Intel is a trademark of Intel Corporation or its subsidiaries in the United States and other countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Glossary

This glossary provides terms and definitions for the GPFS product.
The following cross-references are used in this glossary:
v See refers you from a nonpreferred term to the preferred term or from an abbreviation to the spelled-out form.
v See also refers you to a related or contrasting term.
For other terms and definitions, see the IBM Terminology website (www.ibm.com/software/globalization/terminology) (opens in new window).

B
block utilization
    The measurement of the percentage of used subblocks per allocated blocks.

C
cluster
    A loosely-coupled collection of independent systems (nodes) organized into a network for the purpose of sharing resources and communicating with each other. See also GPFS cluster.
cluster configuration data
    The configuration data that is stored on the cluster configuration servers.
cluster manager
    The node that monitors node status using disk leases, detects failures, drives recovery, and selects file system managers. The cluster manager is the node with the lowest node number among the quorum nodes that are operating at a particular time.
control data structures
    Data structures needed to manage file data and metadata cached in memory. Control data structures include hash tables and link pointers for finding cached data; lock states and tokens to implement distributed locking; and various flags and sequence numbers to keep track of updates to the cached data.

D
Data Management Application Program Interface (DMAPI)
    The interface defined by the Open Group's XDSM standard as described in the publication System Management: Data Storage Management (XDSM) API Common Application Environment (CAE) Specification C429, The Open Group ISBN 1-85912-190-X.
deadman switch timer
    A kernel timer that works on a node that has lost its disk lease and has outstanding I/O requests. This timer ensures that the node cannot complete the outstanding I/O requests (which would risk causing file system corruption), by causing a panic in the kernel.
dependent fileset
    A fileset that shares the inode space of an existing independent fileset.
disk descriptor
    A definition of the type of data that the disk contains and the failure group to which this disk belongs. See also failure group.
disk leasing
    A method for controlling access to storage devices from multiple host systems. Any host that wants to access a storage device configured to use disk leasing registers for a lease; in the event of a perceived failure, a host system can deny access, preventing I/O operations with the storage device until the preempted system has reregistered.
disposition
    The session to which a data management event is delivered. An individual disposition is set for each type of event from each file system.
domain
    A logical grouping of resources in a network for the purpose of common management and administration.
GPFS cluster
    A cluster of nodes defined as being available for use by GPFS file systems.
GPFS portability layer
    The interface module that each installation must build for its specific hardware platform and Linux distribution.
GPFS recovery log
    A file that contains a record of metadata activity, and exists for each node of a cluster. In the event of a node failure, the recovery log for the failed node is replayed, restoring the file system to a consistent state and allowing other nodes to continue working.

I
ill-placed file
    A file assigned to one storage pool, but having some or all of its data in a different storage pool.
ill-replicated file
    A file with contents that are not correctly replicated according to the desired setting for that file. This situation occurs in the interval between a change in the file's replication settings or suspending one of its disks, and the restripe of the file.
independent fileset
    A fileset that has its own inode space.
indirect block
    A block containing pointers to other blocks.
inode
    The internal structure that describes the individual files in the file system. There is one inode for each file.
inode space
    A collection of inode number ranges reserved for an independent fileset, which enables more efficient per-fileset functions.
ISKLM
    IBM Security Key Lifecycle Manager. For GPFS encryption, the ISKLM is used as an RKM server to store MEKs.

J
journaled file system (JFS)
    A technology designed for high-throughput server environments, which are important for running intranet and other high-performance e-business file servers.
junction
    A special directory entry that connects a name in a directory of one fileset to the root directory of another fileset.

K
kernel
    The part of an operating system that contains programs for such tasks as input/output, management and control of hardware, and the scheduling of user tasks.

M
master encryption key (MEK)
    A key used to encrypt other keys. See also encryption key.
MEK
    See master encryption key.
metadata
    Data structures that contain access information about file data. These include: inodes, indirect blocks, and directories. These data structures are not accessible to user applications.
metanode
    The one node per open file that is responsible for maintaining file metadata integrity. In most cases, the node that has had the file open for the longest period of continuous time is the metanode.
mirroring
    The process of writing the same data to multiple disks at the same time. The mirroring of data protects it against data loss within the database or within the recovery log.
multi-tailed
    A disk connected to multiple nodes.

N
namespace
    Space reserved by a file system to contain the names of its objects.
Network File System (NFS)
    A protocol, developed by Sun Microsystems, Incorporated, that allows any host in a network to gain access to another host or netgroup and their file directories.
Network Shared Disk (NSD)
    A component for cluster-wide disk naming and access.
NSD volume ID
    A unique 16 digit hex number that is used to identify and access all NSDs.
node
    An individual operating-system image within a cluster. Depending on the way in which the computer system is partitioned, it may contain one or more nodes.
node descriptor
    A definition that indicates how GPFS uses a node. Possible functions include: manager node, client node, quorum node, and nonquorum node.
node number
    A number that is generated and maintained by GPFS as the cluster is created, and as nodes are added to or deleted from the cluster.
node quorum
    The minimum number of nodes that must be running in order for the daemon to start.
node quorum with tiebreaker disks
    A form of quorum that allows GPFS to run with as little as one quorum node available, as long as there is access to a majority of the quorum disks.
non-quorum node
    A node in a cluster that is not counted for the purposes of quorum determination.

P
policy
    A list of file-placement, service-class, and encryption rules that define characteristics and placement of files. Several policies can be defined within the configuration, but only one policy set is active at one time.
policy rule
    A programming statement within a policy that defines a specific action to be performed.
pool
    A group of resources with similar characteristics and attributes.
portability
    The ability of a programming language to compile successfully on different operating systems without requiring changes to the source code.
primary GPFS cluster configuration server
    In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data.
private IP address
    An IP address used to communicate on a private network.
public IP address
    An IP address used to communicate on a public network.

Q
quorum node
    A node in the cluster that is counted to determine whether a quorum exists.
quota
    The amount of disk space and number of inodes assigned as upper limits for a specified user, group of users, or fileset.
quota management
    The allocation of disk blocks to the other nodes writing to the file system, and comparison of the allocated space to quota limits at regular intervals.

R
Redundant Array of Independent Disks (RAID)
    A collection of two or more disk physical drives that present to the host an image of one or more logical disk drives. In the event of a single physical device failure, the data can be read or regenerated from the other disk drives in the array due to data redundancy.
recovery
    The process of restoring access to file system data when a failure has occurred. Recovery can involve reconstructing data or providing alternative routing through a different server.
remote key management server (RKM server)
    A server that is used to store master encryption keys.
replication
    The process of maintaining a defined set of data in more than one location. Replication involves copying designated changes for one location (a source) to another (a target), and synchronizing the data in both locations.
RGD
    Recovery group data.
RKM server
    See remote key management server.
rule
    A list of conditions and actions that are triggered when certain conditions are met. Conditions include attributes about an object (file name, type or extension, dates, owner, and groups), the requesting client, and the container name associated with the object.

S
SAN-attached
    Disks that are physically attached to all nodes in the cluster using Serial Storage Architecture (SSA) connections or using Fibre Channel switches.
Scale Out Backup and Restore (SOBAR)
    A specialized mechanism for data protection against disaster only for GPFS file systems that are managed by Tivoli® Storage Manager (TSM) Hierarchical Storage Management (HSM).
secondary GPFS cluster configuration server
    In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data in the event that the primary GPFS cluster configuration server fails or becomes unavailable.
Secure Hash Algorithm digest (SHA digest)
    A character string used to identify a GPFS security key.
session failure
    The loss of all resources of a data management session due to the failure of the daemon on the session node.
session node
    The node on which a data management session was created.
Small Computer System Interface (SCSI)
    An ANSI-standard electronic interface that allows personal computers to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners faster and more flexibly than previous interfaces.
snapshot
    An exact copy of changed data in the active files and directories of a file system or fileset at a single point in time. See also fileset snapshot, global snapshot.
source node
    The node on which a data management event is generated.
stand-alone client
    The node in a one-node cluster.
storage area network (SAN)
    A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services.
storage pool
    A grouping of storage space consisting of volumes, logical unit numbers (LUNs), or addresses that share a common set of administrative characteristics.
stripe group
    The set of disks comprising the storage assigned to a file system.
striping
    A storage process in which information is split into blocks (a fixed amount of data) and the blocks are written to (or read from) a series of disks in parallel.
subblock
    The smallest unit of data accessible in an I/O operation, equal to one thirty-second of a data block.
system storage pool
    A storage pool containing file system control structures, reserved files, directories, symbolic links, special devices, as well as the metadata associated with regular files, including indirect blocks and extended attributes. The system storage pool can also contain user data.

T
token management
    A system for controlling file access in which each application performing a read or write operation is granted some form of access to a specific block of file data. Token management provides data consistency and controls conflicts. Token management has two components: the token management server, and the token management function.
token management function
    A component of token management that requests tokens from the token management server. The token management function is located on each cluster node.
token management server
    A component of token management that controls tokens relating to the operation of the file system. The token management server is located at the file system manager node.
twin-tailed
    A disk connected to two nodes.

U
user storage pool
    A storage pool containing the blocks of data that make up user files.

V
VCD
    See vdisk configuration data.
VFS
    See virtual file system.
vdisk configuration data (VCD)
    Configuration data associated with a disk.
virtual file system (VFS)
    A remote file system that has been mounted so that it is accessible to the local user.
virtual node (vnode)
    The structure that contains information about a file system object in a virtual file system (VFS).
Index

A
access control lists (ACLs)
    file system authorization 27
accessibility features for the GPFS product 43
adapter
    invariant address requirement 3
atime value 26
autoload attribute 14

B
bin directory 35
block
    size 25
block allocation map 26

C
commands
    failure of 13
    mmchcluster 13, 14
    mmcheckquota 30
    mmchfs 21, 27, 28, 29
    mmcrcluster 11, 13, 14
    mmcrfs 21, 27, 28, 29
    mmcrnsd 17
    mmdefedquota 29
    mmdefquotaon 29
    mmdelnsd 17
    mmedquota 29, 30
    mmlsdisk 19
    mmlsquota 29, 30
    mmrepquota 29, 30
    mmstartup 14
    mmwinservctl 13, 14
    remote file copy
        rcp 14
        scp 14
    remote shell
        rsh 13
        ssh 13
communication
    GPFS daemon to daemon 12
    invariant address requirement 3
configuration
    of a GPFS cluster 11
configuration and tuning settings
    configuration file 15
    default values 15
controlling the order in which file systems are mounted 32
created files (maximum number) 31
creating GPFS directory
    /tmp/gpfslpp on Linux nodes 36

D
daemon
    communication 12
    starting 14
DASD for NSDs, preparing 19
data
    guarding against failure of a path to a disk 9
    recoverability 5
    replication 27
Data Management API (DMAPI)
    enabling 30
Debian Linux packages (update), extracting 38
default quotas
    description 29
df command (specifying whether it will report numbers based on quotas for the fileset) 31
direct access storage devices (DASD) for NSDs, preparing 19
directory, bin 35
disk descriptor replica 19
disk usage
    verifying 30
disks
    considerations 15
    failure 8
    mmcrfs command 24
    stanza files 17
    storage area network 15
documentation
    installing man pages on Linux nodes 38

E
ECKD devices, preparing environment for 19
electronic license agreement
    Linux nodes 36
enabling file system features 31
environment for ECKD devices, preparing 19
environment, preparing 35
estimated node count 28
extracting GPFS patches 38
extracting the GPFS software 36
extracting update Debian Linux packages 38
extracting update SUSE Linux Enterprise Server and Red Hat Enterprise Linux RPMs 38

F
failure
    disk 8
    Network Shared Disk server 8
    node 5
failure groups 19
    loss of 19
    preventing loss of data access 8
    use of 19
file system descriptor 19
    failure groups 19
    inaccessible 19
    quorum 19
file system features
    enabling 31
file system manager
    internal log file 27
    NSD creation considerations 18

I
installing GPFS (Debian)
    verifying the GPFS installation 40
installing GPFS on Linux nodes
    building your GPFS portability layer 40

N
Network File System (NFS)
    access control lists 27
    deny-write open lock 24
Network Shared Disk (NSD)
    creation of 17
    server disk considerations 15
    server failure 8
    server node considerations 18
node quorum
    definition of 6
    selecting nodes 8
node quorum with tiebreaker disks
    definition of 6
    selecting nodes 8
nodes
    descriptor form 12
    designation as manager or client 12
    estimating the number of 28
    failure 5
    in a GPFS cluster 11
    in the GPFS cluster 12
    quorum 12
nofilesetdf option 31
notices 45
number of files that can be created, maximum 31

O
order in which file systems are mounted, controlling the 32

P
patches (GPFS), extracting 38
patches (GPFS), extracting SUSE Linux Enterprise Server and Red Hat Enterprise Linux 38
patent information 45
Persistent Reserve
    reduced recovery time 10
planning considerations 3
    cluster creation 11
    disks 15
    file system creation 21
    GPFS license designation 15
    GPFS product structure 4
    hardware requirements 3
    recoverability 5
    software requirements 4
portability layer
    building 40
preparing direct access storage devices (DASD) for NSDs 19
preparing environment for ECKD devices 19
preparing the environment 35
programming specifications
    Linux prerequisite software 35
    verifying prerequisite software 35

Q
quorum
    definition of 6
    during node failure 5
    file system descriptor 19
    selecting nodes 8
quotas
    default quotas 29
    description 29
    in a replicated system 29
    mounting a file system with quotas enabled 30
    system files 30
    values reported in a replicated file system 29

R
rcp command 14
recoverability
    disk failure 8
    node failure 5
    parameters 5
recovery time
    reducing with Persistent Reserve 10
reduced recovery time using Persistent Reserve 10
Redundant Array of Independent Disks (RAID)
    block size considerations 25
    preventing loss of data access 8
remote command environment
    rcp 14
    rsh 13
    scp 14
    ssh 13
replication
    affect on quotas 29
    preventing loss of data access 8
reporting numbers based on quotas for the fileset 31
requirements
    hardware 3
    software 4
RPMs (update), extracting SUSE Linux Enterprise Server and Linux 38
rsh command 13
running GPFS for Linux on System z 41

S
scp command 14
shell PATH 35
sizing file systems 21
soft limit, quotas 29
softcopy documentation 38
software requirements 4
Specifying whether the df command will report numbers based on quotas for the fileset 31
ssh command 13
starting GPFS 14
Storage Area Network (SAN)
    disk considerations 15
strict replication 27
subblocks, use of 25
SUSE Linux Enterprise Server and Linux RPMs (update), extracting 38
System z, DASD tested with Linux on 19
System z, running GPFS for Linux on 41

T
tiebreaker disks 6
trademarks 46

U
update Debian Linux packages, extracting 38
update SUSE Linux Enterprise Server and Red Hat Enterprise Linux RPMs, extracting 38

V
verifying
    GPFS for Linux installation 39
    GPFS installation (Debian) 40
    prerequisite software for Linux nodes 35
verifying disk usage 30

W
windows drive letter 28