Solstice DiskSuite 4.2.1 Reference Guide

Sun Microsystems, Inc.
901 San Antonio Road
Palo Alto, CA 94303-4900
U.S.A.

Part Number 806-3204-10
February 2000
Copyright 2000 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, California 94303-4900 U.S.A. All rights reserved. This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd. Sun, Sun Microsystems, the Sun logo, SunDocs, SunExpress, OpenWindows, Solstice, Solstice AdminSuite, Solstice Backup, SPARCstorage, SunNet Manager, Online:DiskSuite, AutoClient, NFS, Solstice DiskSuite, Solaris Web Start, and Solaris are trademarks, registered trademarks, or service marks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements.

RESTRICTED RIGHTS: Use, duplication, or disclosure by the U.S. Government is subject to restrictions of FAR 52.227-14(g)(2)(6/87) and FAR 52.227-19(6/87), or DFAR 252.227-7015(b)(6/95) and DFAR 227.7202-3(a).

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Please Recycle
Contents
Command Line Interface 19
Overview of DiskSuite Objects 20
Metadevices 21
How Are Metadevices Used? 22
Metadevice Conventions 23
Example Metadevice Consisting of Two Slices 24
Metadevice State Database and State Database Replicas 24
How Does DiskSuite Use State Database Replicas? 25
Metadevice State Database Conventions 26
Hot Spare Pools 27
How Do Hot Spare Pools Work? 27
Metadevice and Disk Space Expansion 27
The growfs(1M) Command 28
System and Startup Files 29
Disksets 30

2. Metadevices 31
Simple Metadevices 31
Concatenated Metadevice (Concatenation) 32
Concatenated Metadevice Conventions 33
Example Concatenated Metadevice 33
Striped Metadevice (Stripe) 34
Example Concatenated Stripe 36
Simple Metadevices and Starting Blocks 38
Mirrors 38
Submirrors 39
Mirror Conventions 40
Mirror Read and Write Policies 42
Mirror Robustness 43
RAID5 Metadevices 44
RAID5 Metadevice Conventions 45
Example RAID5 Metadevice 45
Example Concatenated (Expanded) RAID5 Metadevice 46
UFS Logging or Trans Metadevices 48
UFS Logging 48
UFS Logging Conventions 48
Trans Metadevices 49

DiskSuite Tool and the Command Line Interface 60
Using the Mouse in DiskSuite Tool 60
Screen Descriptions for DiskSuite Tool 61
Metadevice Editor Window 61
Disk View Window 64
Statistics Graphs Window (Grapher Window) 68
Information Windows 69
Browsers 93
Dialog Boxes 98
Accessing and Using Help 101
Tool Registry 102
Event Notification 102

5. Disksets 105
Example Two Shared Disksets 107
Administering Disksets 108
Reserving a Diskset 109
Releasing a Diskset 109

Overview of the md.tab File 111
Creating Initial State Database Replicas in the md.tab File 112
Creating a Striped Metadevice in the md.tab File 112
Creating a Concatenated Metadevice in the md.tab File 113
Creating a Concatenated Stripe in the md.tab File 113
Creating a Mirror in the md.tab File 114
Creating a Trans Metadevice in the md.tab File 114
Creating a RAID5 Metadevice in the md.tab File 115
Creating a Hot Spare Pool in the md.tab File 115
Overview of the md.cf File 116

7. Configuration Guidelines 117
Introduction 117
Configuration Planning Overview 117
Configuration Planning Guidelines 118
Concatenation Guidelines 118
Striping Guidelines 118
Mirroring Guidelines 119
RAID5 Guidelines 120
State Database Replica Guidelines for Performance 121
File System Guidelines 121
General Performance Guidelines 122
RAID5 Metadevices and Striped Metadevices 122
Optimizing for Random I/O and Sequential I/O 123
Random I/O 123
Summary of State Database Replicas 127

A. DiskSuite Error Messages 129
Introduction 129
Dialog Box Warning Messages 140
Dialog Box Information Messages 146
Metadevice Editor Window Messages 147
Disk View Window Messages 152
Log Messages 153
DiskSuite Command Line Messages 157
Error Messages 158
Log Messages 172

B. Upgrading Solaris With Solstice DiskSuite 177
How to Upgrade Solaris With Solstice DiskSuite 177

Glossary 181

Index 191
Tables

Typographic Conventions
Shell Prompts
Command Line Interface Commands
Summary of DiskSuite Objects
Types of Metadevices
DiskSuite Tool vs. the Command Line
DiskSuite Tool Mouse Model
Disk Information Screen, SPARCstorage Array Functionality
Slice Information Window Functionality
Device Statistics Window Functionality
Concat Information Window Functionality
Stripe Information Window Functionality
Mirror Information Window Functionality
Trans Information Window Functionality
Metadevice State Database Information Window Functionality
Tray Information Window Functionality
Controller Information Window Functionality
Controller Information Window, SPARCstorage Array Functionality
Slice Browser Device List Information
Metadevice Browser Device List Information
Hot Spare Pool Device List Information
Slice Filter Window Items
Dialog Boxes
Figures

Relationship Among a Metadevice, Physical Disks, and Slices
Concatenation Example
Striped Metadevice Example
Concatenated Stripe Example
Mirror Example
Disk View Window
Color Drop Sites
Disk View Objects
Disk View Panner
Disk Information Window
Slice Information Window
Device Statistics Window
Concat Information Window
Stripe Information Window
Mirror Information Window
Trans Information Window
Hot Spare Information Window
RAID Information Window
Metadevice State Database Information Window
Tray Information Window
Controller Information Window
Slice Browser Window
Slice Filter Window
Finder Window
Example Dialog Box
Configuration Log Window
Problem List Window
DiskSuite Tool Help Utility
Disksets Example
Preface
Solstice™ DiskSuite™ 4.2.1 is a software product that manages data and disk drives. DiskSuite runs on all SPARC™ systems running Solaris™ 8 and on all x86 systems running Solaris 8. DiskSuite's diskset feature is supported only on the SPARC platform edition of Solaris; it is not supported on x86 systems.
Caution - If you do not use DiskSuite correctly, you can destroy data. As a minimum safety precaution, make sure you have a current backup of your data before using DiskSuite.
Chapter 3 describes DiskSuite hot spares and hot spare pools.

Chapter 4 describes the DiskSuite graphical user interface.

Chapter 5 describes shared disksets.

Chapter 6 describes how to use various DiskSuite files to perform specific functions.

Chapter 7 provides configuration and planning information for using DiskSuite.

Appendix A describes DiskSuite Tool's error, status, and log messages, and the command line error and log messages.

Appendix B describes how to upgrade to later versions of Solaris while using DiskSuite metadevices.

The Glossary provides definitions of DiskSuite terminology.
Related Books
Sun documentation related to DiskSuite and disk maintenance and configuration includes:
- Solstice DiskSuite 4.2.1 User's Guide
- Solstice DiskSuite 4.2.1 Installation and Product Notes
- System Administration Guide, Volume I
TABLE P-1 Typographic Conventions

AaBbCc123: The names of commands, files, and directories; on-screen computer output.
  Example: Edit your .login file. Use ls -a to list all files. machine_name% You have mail.

AaBbCc123: What you type, contrasted with on-screen computer output.
  Example: machine_name% su

AaBbCc123: Command-line placeholder: replace with a real name or value.
  Example: To delete a file, type rm filename.

AaBbCc123: Book titles, new words or terms, or words to be emphasized.
  Example: Read Chapter 6 in User's Guide. These are called class options. You must be root to do this.
TABLE P-2 Shell Prompts

C shell prompt: machine_name%
C shell superuser prompt: machine_name#
Bourne shell and Korn shell prompt: $
Bourne shell and Korn shell superuser prompt: #
CHAPTER 1
Introduction to DiskSuite
This chapter explains the overall structure of DiskSuite. Use the following table to proceed directly to the section that provides the information you need.
- "What Does DiskSuite Do?" on page 17
- "How Does DiskSuite Manage Disks?" on page 18
- "DiskSuite Tool" on page 18
- "Command Line Interface" on page 19
- "Overview of DiskSuite Objects" on page 20
- "Metadevices" on page 21
- "Metadevice State Database and State Database Replicas" on page 24
- "Hot Spare Pools" on page 27
- "Metadevice and Disk Space Expansion" on page 27
- "System and Startup Files" on page 29
- "Disksets" on page 30
DiskSuite Tool
DiskSuite Tool is a graphical user interface for setting up and administering a DiskSuite configuration. The command to start DiskSuite Tool is:
# metatool &
DiskSuite Tool provides a graphical view of DiskSuite objects: metadevices, hot spare pools, and the MetaDB object for the metadevice state database. DiskSuite Tool uses drag and drop manipulation of DiskSuite objects, enabling you to quickly configure your disks or change an existing configuration.
DiskSuite Tool provides graphical views of both physical devices and metadevices, helping simplify storage administration. You can also perform tasks specific to administering SPARCstorage™ Arrays using DiskSuite Tool. However, DiskSuite Tool cannot perform all DiskSuite administration tasks. You must use the command line interface for some operations (for example, creating and administering disksets). To learn more about using DiskSuite Tool, refer to Chapter 4.
TABLE 1-1 Command Line Interface Commands (continued)

metareplace(1M): Replaces slices of submirrors and RAID5 metadevices.
metaroot(1M): Sets up system files for mirroring root (/).
metaset(1M): Administers disksets.
metastat(1M): Displays status for metadevices or hot spare pools.
metasync(1M): Resyncs metadevices during reboot.
metatool(1M): Runs the DiskSuite Tool graphical user interface.
metattach(1M): Attaches a metadevice to a mirror, or a logging device to a trans metadevice.
TABLE 1-2 Summary of DiskSuite Objects

Metadevice: A group of physical slices that appear to the system as a single, logical device. Used to increase storage capacity and increase data availability.

Metadevice state database: A database that stores information on disk about the state of your DiskSuite configuration. DiskSuite cannot operate until you have created the metadevice state database replicas.

Hot spare pool: A collection of slices (hot spares) reserved to be automatically substituted in case of slice failure in either a submirror or RAID5 metadevice.
Note - DiskSuite Tool, DiskSuite's graphical user interface, also refers to the graphical representations of metadevices, the metadevice state database, and hot spare pools as objects.
Metadevices
A metadevice is a name for a group of physical slices that appear to the system as a single, logical device. Metadevices are actually pseudo, or virtual, devices in standard UNIX terms. You create a metadevice by using concatenation, striping, mirroring, RAID level 5, or UFS logging. Thus, the types of metadevices you can create are concatenations, stripes, concatenated stripes, mirrors, RAID5 metadevices, and trans metadevices. DiskSuite uses a special driver, called the metadisk driver, to coordinate I/O to and from physical devices and metadevices, enabling applications to treat a metadevice like a physical device. This type of driver is also called a logical, or pseudo, driver. You can use either the DiskSuite Tool graphical user interface or the command line utilities to create and administer metadevices. Table 1-3 summarizes the types of metadevices:
TABLE 1-3 Types of Metadevices

Simple: Can be used directly, or as the basic building blocks for mirrors and trans devices. There are three types of simple metadevices: stripes, concatenations, and concatenated stripes. Simple metadevices consist only of physical slices. By themselves, simple metadevices do not provide data redundancy.

Mirror: Replicates data by maintaining multiple copies. A mirror is composed of one or more simple metadevices called submirrors.

RAID5: Replicates data by using parity information. In the case of missing data, the missing data can be regenerated using available data and the parity information. A RAID5 metadevice is composed of slices. One slice's worth of space is allocated to parity information, but it is distributed across all slices in the RAID5 metadevice.

Trans: Used to log a UFS file system. A trans metadevice is composed of a master device and a logging device. Both of these devices can be a slice, simple metadevice, mirror, or RAID5 metadevice. The master device contains the UFS file system.
- SPARC: IPI and SCSI devices, and SPARCstorage Array drives
- x86: SCSI and IDE devices
Metadevice Conventions
- How are metadevices named?

Metadevice names begin with the letter d followed by a number (for example, d0, as shown in Table 1-4).
The metarename(1M) command with the -x option can switch metadevices that have a parent-child relationship. Refer to Solstice DiskSuite 4.2.1 User's Guide for procedures to rename and switch metadevices.
Figure 1-1 Relationship Among a Metadevice, Physical Disks, and Slices (slices c0t0d0s2 on Disk A and c1t0d0s2 on Disk B combined into metadevice d0)
The metadevice state database is actually a collection of multiple, replicated database copies. Each copy, referred to as a state database replica, ensures that the data in the database is always valid. Having copies of the metadevice state database protects against data loss from single points-of-failure. The metadevice state database tracks the location and status of all known state database replicas. DiskSuite cannot operate until you have created the metadevice state database and its state database replicas. It is necessary that a DiskSuite configuration have an operating metadevice state database. When you set up your configuration, you have two choices for the location of state database replicas. You can place the state database replicas on dedicated slices. Or you can place the state database replicas on slices that will later become part of metadevices. DiskSuite recognizes when a slice contains a state database replica, and automatically skips over the portion of the slice reserved for the replica if the slice is used in a metadevice. The part of a slice reserved for the state database replica should not be used for any other purpose. You can keep more than one copy of a metadevice state database on one slice, though you may make the system more vulnerable to a single point-of-failure by doing so.
- The system will stay running with exactly half or more state database replicas.
- The system will panic if more than half the state database replicas are not available.
- The system will not reboot without one more than half the total state database replicas.
Note - When the number of state database replicas is odd, DiskSuite computes the majority by dividing the number in half, rounding down to the nearest integer, then adding 1 (one). For example, on a system with seven replicas, the majority would be four (seven divided by two is three and one-half, rounded down is three, plus one is four).

During booting, DiskSuite ignores corrupted state database replicas. In some cases, DiskSuite tries to rewrite state database replicas that are bad. Otherwise, they are ignored until you repair them. If a state database replica becomes bad because its underlying slice encountered an error, you will need to repair or replace the slice and then enable the replica. If all state database replicas are lost, you could, in theory, lose all data that is stored on your disks. For this reason, it is good practice to create enough state database replicas on separate drives and across controllers to prevent catastrophic failure. It is also wise to save your initial DiskSuite configuration information, as well as your disk partition information. Refer to Solstice DiskSuite 4.2.1 User's Guide for information on adding additional state database replicas to the system, and on recovering when state database replicas are lost.
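The majority rule described in the note above can be sketched as shell arithmetic (a hypothetical helper for illustration only; DiskSuite performs this computation internally):

```shell
# replica_majority: illustrates how the replica majority is computed
# for an odd replica count: floor(n / 2) + 1.
replica_majority() {
  echo $(( $1 / 2 + 1 ))
}

replica_majority 7   # seven replicas: majority is 4
replica_majority 3   # three replicas: majority is 2
```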
- Can I create a state database replica on a slice that will be part of a metadevice?
Yes, but you must create it before adding the slice to the metadevice. You can also create a state database replica on a logging device. DiskSuite reserves the starting part of the slice for the state database replica.
- Can I place more than one state database replica on a single disk drive?
In general, it is best to distribute state database replicas across slices, drives, and controllers, to avoid single points-of-failure. If you have two disks, create two state database replicas on each disk.
- What happens if a slice that contains a state database replica becomes errored?
The rest of your configuration should remain in operation. DiskSuite finds a good state database (as long as there are at least half + 1 valid state database replicas).
your data is always a good idea.) After the metadevice is expanded, you grow the file system with the growfs(1M) command. After a file system is expanded, it cannot be decreased. Decreasing the size of a file system is a UFS limitation. Applications and databases using the raw metadevice must have their own method to grow the added space so that the application or database can recognize it. DiskSuite does not provide this capability.

You can expand the disk space in metadevices in the following ways:

1. Adding a slice to a stripe or concatenation.
2. Adding multiple slices to a stripe or concatenation.
3. Adding a slice or multiple slices to all submirrors of a mirror.
4. Adding one or more slices to a RAID5 device.

You can use either DiskSuite Tool or the command line interface to add a slice to an existing metadevice.
Note - When using DiskSuite Tool to expand a metadevice that contains a UFS file system, the growfs(1M) command is run automatically. If you use the command line to expand the metadevice, you must manually run the growfs(1M) command.
- /etc/lvm/mddb.cf

A file that records the locations of state database replicas. When state database replica locations change, DiskSuite makes an entry in the mddb.cf file that records the locations of all state databases. Similar information is entered into the /etc/system file.
- /etc/lvm/md.tab

An input file that you can use along with the command line utilities metainit(1M), metadb(1M), and metahs(1M) to create metadevices, state database replicas, or hot spares. A metadevice, group of state database replicas, or hot spare may have an entry in this file.
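As an illustration of the kind of entry this file holds, a two-slice stripe might be described by a line like the following (the metadevice name, slice names, and interlace value are hypothetical; Chapter 6 covers the actual syntax):

```
# d0: a stripe of two slices with a 32-Kbyte interlace (hypothetical slices)
d0 1 2 c0t1d0s2 c0t2d0s2 -i 32k
```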
- /etc/lvm/md.cf

A backup file of a local diskset's configuration. DiskSuite provides the md.cf file for recovery. When you change the DiskSuite configuration, DiskSuite automatically updates the md.cf file (except for hot sparing).
Caution - You should not directly edit either the mddb.cf or md.cf files.
- /kernel/drv/md.conf

DiskSuite uses this configuration file at startup. You can edit two fields in this file: nmd, which sets the number of metadevices that the configuration can support, and md_nsets, which is the number of disksets. The default value for nmd is 128, which can be increased to 1024. The default value for md_nsets is 4, which can be increased to 32. The total number of disksets is always one less than the md_nsets value, because the local set is included in md_nsets.
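For illustration only, an md.conf entry with both limits raised might look like the line below (the exact property layout can differ between releases; verify against your installed file before editing):

```
# Hypothetical md.conf line supporting 256 metadevices and 8 disksets
name="md" parent="pseudo" nmd=256 md_nsets=8;
```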
- /etc/lvm/mdlogd.cf

DiskSuite uses this file to control the behavior of mdlogd, the DiskSuite SNMP trap generating daemon. It is an editable ASCII file that specifies where the SNMP trap data should be sent when the DiskSuite driver detects a specified condition.
- /etc/rcS.d/S35lvm.init
- /etc/rc2.d/S95lvm.sync

For automatic resyncing of metadevices.

For more information on DiskSuite system files, refer to the man pages.
Disksets
A shared diskset, or simply diskset, is a set of shared disk drives containing metadevices and hot spares that can be shared exclusively, but not at the same time, by two hosts. Currently, disksets are only supported on SPARCstorage Array disks. A diskset provides for data redundancy and availability. If one host fails, the other host can take over the failed host's diskset. (This type of configuration is known as a failover configuration.) For more information, see Chapter 5.
CHAPTER 2
Metadevices
This chapter covers the different types of metadevices available in DiskSuite. Use the following table to proceed directly to the section that provides the information you need.
- "Simple Metadevices" on page 31
- "Concatenated Metadevice (Concatenation)" on page 32
- "Striped Metadevice (Stripe)" on page 34
- "Concatenated Stripe" on page 36
- "Simple Metadevices and Starting Blocks" on page 38
- "Mirrors" on page 38
- "RAID5 Metadevices" on page 44
- "UFS Logging or Trans Metadevices" on page 48
Simple Metadevices
A simple metadevice is a metadevice built only from slices, and is either used directly or as the basic building block for mirrors and trans metadevices. There are three kinds of simple metadevices: concatenated metadevices, striped metadevices, and concatenated striped metadevices. In practice, people tend to think of two basic simple metadevices: concatenated metadevices and striped metadevices. (A concatenated stripe is simply a striped metadevice that has been grown from its original configuration by concatenating slices.) Simple metadevices enable you to quickly and simply expand disk storage capacity. The drawback to a simple metadevice is that it does not provide any data redundancy; if a single slice fails on a simple metadevice, data is lost. A mirror or RAID5 metadevice can provide data redundancy. You can use a simple metadevice containing multiple slices for any file system except the following:
- Root (/)
- /usr
- swap
- /var
- /opt
- Any file system accessed during an operating system upgrade or installation
Note - When you mirror root (/), /usr, swap, /var, or /opt, you put the file system into a one-way concatenation (a concatenation of a single slice) that acts as a submirror. This is mirrored by another submirror, which is also a concatenation.
Note - To increase the capacity of a striped metadevice, you would have to build a concatenated stripe (see "Concatenated Stripe" on page 36).

A concatenated metadevice can also expand any active and mounted UFS file system without having to bring down the system. In general, the total capacity of a concatenated metadevice is equal to the total size of all the slices in the concatenated metadevice. If a concatenation contains a slice with a state database replica, the total capacity of the concatenation would be the sum of the slices less the space reserved for the replica. You can also create a concatenated metadevice from a single slice. You could, for example, create a single-slice concatenated metadevice. Later, when you need more storage, you can add more slices to the concatenated metadevice. Concatenations have names like other metadevices (d0, d1, and so forth). For more information on metadevice naming, see Table 1-4.
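The capacity arithmetic above can be sketched as follows (all sizes are hypothetical block counts, chosen only for illustration):

```shell
# Hypothetical slice sizes, in disk blocks
slice_a=204800
slice_b=204800
slice_c=409600

# Hypothetical space reserved for a state database replica on slice_a
replica=1034

# Capacity of the concatenation: the sum of the slices,
# less the space reserved for the replica
capacity=$(( slice_a + slice_b + slice_c - replica ))
echo "$capacity"   # 818166
```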
Figure 2-1 Concatenation Example (chunks on Physical Disks A, B, and C joined by the DiskSuite software into metadevice d1)
Note - RAID5 metadevices also use an interlace value. See "RAID5 Metadevices" on page 44 for more information.
Figure 2-2 Striped Metadevice Example (chunks 1 through 6 interleaved across Physical Disks A, B, and C by the DiskSuite software to form metadevice d2)
Concatenated Stripe
A concatenated stripe is a striped metadevice that has been expanded by concatenating additional slices (stripes).
Note - If you use DiskSuite Tool to drag multiple slices into an existing striped metadevice, you are given the option of making the slices into a concatenation or a stripe. If you use the metattach(1M) command to add multiple slices to an existing striped metadevice, they must be added as a stripe.
The first stripe consists of three slices, Disks A through C, with an interlace of 16 Kbytes. The second stripe consists of two slices, Disks D and E, and uses an interlace of 32 Kbytes. The last stripe consists of two slices, Disks F and G. Because no interlace is specified for the third stripe, it inherits the value from the stripe before it, which in this case is 32 Kbytes. Sequential data chunks are addressed to the first stripe until that stripe has no more space. Chunks are then addressed to the second stripe. When this stripe has no more space, chunks are addressed to the third stripe. Within each stripe, the data chunks are interleaved according to the specified interlace value.
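The addressing order described above can be sketched as a small shell function (the chunk counts per stripe are hypothetical; DiskSuite's metadisk driver performs this mapping internally):

```shell
# Hypothetical number of chunks each stripe can hold
stripe1=12
stripe2=8
stripe3=8

# find_stripe: report which stripe a zero-based logical chunk lands in.
# Chunks fill stripe 1 first, then stripe 2, then stripe 3.
find_stripe() {
  chunk=$1
  if [ "$chunk" -lt "$stripe1" ]; then
    echo 1
  elif [ "$chunk" -lt $(( stripe1 + stripe2 )) ]; then
    echo 2
  else
    echo 3
  fi
}

find_stripe 0    # first chunk: stripe 1
find_stripe 12   # stripe 1 is full: stripe 2
find_stripe 25   # stripes 1 and 2 are full: stripe 3
```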
Figure 2-3 Concatenated Stripe Example (metadevice d10 built from Stripe 1 on Physical Disks A, B, and C; Stripe 2 on Disks D and E; and Stripe 3 on Disks F and G)
In this example, stripe d0 shows a start block of 1520 for each slice except the first. This is to preserve the disk label in the first disk sector in all of the slices except the first. The metadisk driver must skip at least the first sector of those disks when mapping accesses across the stripe boundaries. Because skipping only the first sector would create an irregular disk geometry, the entire first cylinder of these disks is skipped. This enables higher level file system software (UFS) to optimize block allocations correctly. Thus, DiskSuite protects the disk label from being overwritten, and purposefully skips the first cylinder. The reason for not skipping the first cylinder on all slices in the concatenation or stripe has to do with UFS. If you create a concatenated metadevice from an existing file system, and add more space to it, you would lose data because the first cylinder is where the data is expected to begin.
Mirrors
A mirror is a metadevice that can copy the data in simple metadevices (stripes or concatenations) called submirrors, to other metadevices. This process is called mirroring data. (Mirroring is also known as RAID level 1.) A mirror provides redundant copies of your data. These copies should be located on separate physical devices to guard against device failures. Mirrors require an investment in disks. You need at least twice as much disk space as the amount of data you have to mirror. Because DiskSuite must write to all
submirrors, mirrors can also increase the amount of time it takes for write requests to be written to disk. After you configure a mirror, it can be used just as if it were a physical slice. You can also use a mirror for online backups. Because the submirrors contain identical copies of data, you can take a submirror offline and back up the data to another medium, all without stopping normal activity on the mirror metadevice. You might want to do online backups with a three-way mirror so that the mirror continues to copy data to two submirrors. Also, when the submirror is brought back online, it will take a while for it to sync its data with the other two submirrors. You can mirror any file system, including existing file systems. You can also use a mirror for any application, such as a database. You can create a one-way mirror and attach another submirror to it later.
Note - You can use DiskSuite's hot spare feature with mirrors to keep data safe and available. For information on hot spares, see Chapter 3.

Mirrors have names like other metadevices (d0, d1, and so forth). For more information on metadevice naming, see Table 1-4. Each submirror (which is also a metadevice) has a unique device name.
Submirrors
A mirror is made of one or more stripes or concatenations. The stripes or concatenations within a mirror are called submirrors. (A mirror cannot be made of RAID5 metadevices.) A mirror can consist of up to three (3) submirrors. (Practically, a two-way mirror is usually sufficient. A third submirror enables you to make online backups without losing data redundancy while one submirror is offline for the backup.) Submirrors are distinguished from simple metadevices in that, once attached to a mirror, they normally can be accessed only through the mirror. If you take a submirror offline, the mirror stops reading and writing to the submirror. At this point, you could access the submirror itself, for example, to perform a backup. However, the submirror is in a read-only state. While a submirror is offline, DiskSuite keeps track of all writes to the mirror. When the submirror is brought back online, only the portions of the mirror that were written while it was offline (the resync regions) are resynced. Submirrors can also be taken offline to troubleshoot or repair physical devices which have errors. Submirrors have names like other metadevices (d0, d1, and so forth). For more information on metadevice naming, see Table 1-4.
Submirrors can be attached to or detached from a mirror at any time, though at least one submirror must remain attached at all times. You can force a submirror to be detached using the -f option to the metadetach(1M) command. DiskSuite Tool always forces the detach, so there is no extra option. Normally, you create a mirror with only a single submirror. Then you attach a second submirror after creating the mirror.
Mirror Conventions
- Why would I use a mirror?
For maximum data availability. The trade-off is that a mirror requires twice the number of slices (disks) as the amount of data to be mirrored.
- Why should I always create a one-way mirror, then attach additional submirrors?
This ensures that a mirror resync is performed, so that data is consistent in all submirrors.
Figure 2-4 Mirror Example (each submirror of metadevice d20 holds an identical copy of Chunks 1-4)
Mirror Options
The following options are available to optimize mirror performance:
- Mirror read policy
- Mirror write policy
- The order in which mirrors are resynced (pass number)
You can define mirror options when you initially create the mirror, or after a mirror has been set up. For tasks related to changing these options, refer to Solstice DiskSuite 4.2.1 User's Guide.
Mirror Resync
Mirror resynchronization is the process of copying data from one submirror to another after submirror failures, system crashes, when a submirror has been taken ofine and brought back online, or after the addition of a new submirror. While the resync takes place, the mirror remains readable and writable by users. A mirror resync ensures proper mirror operation by maintaining all submirrors with identical data, with the exception of writes in progress.
Note - A mirror resync is mandatory, and cannot be omitted. You do not need to
manually initiate a mirror resync; it occurs automatically.
Pass Number
The pass number, a number in the range 0-9, determines the order in which a particular mirror is resynced during a system reboot. The default pass number is one (1). Smaller pass numbers are resynced first. If 0 is used, the mirror resync is skipped. A 0 should be used only for mirrors mounted as read-only. Mirrors with the same pass number are resynced at the same time.
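The ordering rule above can be sketched as a small model. This is illustrative only (DiskSuite performs this internally at reboot); the mirror names and pass numbers are made up.

```python
# Sketch of reboot-time resync ordering: pass number 0 skips the resync,
# lower pass numbers go first, and equal pass numbers resync together.

def resync_batches(mirrors):
    """mirrors: dict of mirror name -> pass number (0-9).
    Returns batches of mirror names in the order they would be resynced."""
    by_pass = {}
    for name, pass_number in mirrors.items():
        if pass_number == 0:        # 0 means skip (read-only mirrors only)
            continue
        by_pass.setdefault(pass_number, []).append(name)
    return [sorted(by_pass[p]) for p in sorted(by_pass)]

print(resync_batches({"d0": 1, "d1": 2, "d2": 1, "d3": 0}))
# -> [['d0', 'd2'], ['d1']]
```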
Table 2-1 Mirror Read Policies (Round Robin, Geometric, First)

Table 2-2 Mirror Write Policies (Parallel, Serial)
Mirror Robustness
DiskSuite cannot guarantee that a mirror will be able to tolerate multiple slice failures and continue operating. However, depending on the mirror's configuration, in many instances DiskSuite can handle a multiple-slice failure scenario. As long as multiple slice failures within a mirror do not contain the same logical blocks, the mirror continues to operate. (The submirrors must also be identically constructed.) Consider this example:
Figure 2-5 Mirror d1, made of two submirrors, each a stripe of three physical disks (Disks A-C and Disks D-F)
Mirror d1 consists of two stripes (submirrors), each of which consists of three identical physical disks with the same interlace value. A failure of three disks, A, B, and F, can be tolerated, because the entire logical block range of the mirror is still contained on at least one good disk. If, however, disks A and D fail, a portion of the mirror's data is no longer available on any disk, and access to these logical blocks will fail. When a portion of a mirror's data is unavailable due to multiple slice errors, access to portions of the mirror where data is still available will succeed. Under this situation, the mirror acts like a single disk that has developed bad blocks; the damaged portions are unavailable, but the rest is available.
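The example's survival rule can be sketched as a small model. This is illustrative logic, not DiskSuite code, and it assumes identically constructed three-disk stripes in which logical chunk c round-robins to disk position c % 3.

```python
# Model of the example above: mirror d1 has two submirrors, each a stripe of
# three disks, so chunk c lives at position c % 3 in each submirror. A chunk
# is readable if at least one submirror still has a healthy disk holding it.

def mirror_survives(submirrors, failed):
    """submirrors: list of disk-name lists (identically constructed stripes).
    failed: set of failed disk names. True if every chunk position is covered."""
    width = len(submirrors[0])
    for position in range(width):   # chunk positions round-robin across a stripe
        if all(sub[position] in failed for sub in submirrors):
            return False            # this block range is lost on every submirror
    return True

d1 = [["A", "B", "C"], ["D", "E", "F"]]
print(mirror_survives(d1, {"A", "B", "F"}))  # -> True  (every position covered)
print(mirror_survives(d1, {"A", "D"}))       # -> False (position 0 lost on both)
```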
RAID5 Metadevices
RAID is an acronym for Redundant Array of Inexpensive Disks (or Redundant Array of Independent Disks). There are seven RAID levels, 0-6, each referring to a method of distributing data while ensuring data redundancy. (RAID level 0 does not provide data redundancy, but is usually included as a RAID classification because it is the basis for the majority of RAID configurations in use.) DiskSuite supports:

- RAID level 0 (concatenations and stripes)
- RAID level 1 (mirror)
- RAID level 5 (striped metadevice with parity information)

RAID level 5 is striping with parity, with data and parity distributed across all disks. If a disk fails, the data on the failed disk can be rebuilt from the distributed data and parity information on the other disks.
Solstice DiskSuite 4.2.1 Reference Guide February 2000
Within DiskSuite, a RAID5 metadevice is a metadevice that supports RAID Level 5. DiskSuite automatically initializes a RAID5 metadevice when you add a new slice, or resyncs a RAID5 metadevice when you replace an existing slice. DiskSuite also resyncs RAID5 metadevices during rebooting if a system failure or panic took place. RAID5 metadevices have names like other metadevices (d0, d1, and so forth). For more information on metadevice naming, see Table 1-4.
- What is the minimum number of slices that a RAID5 metadevice must have?
Three (3).
- When I expand a RAID5 metadevice, are the new slices included in parity calculations?
Yes.
- Is there a way to recreate a RAID5 metadevice without having to zero out the data blocks?
Yes. You can use the metainit(1M) command with the -k option. (There is no equivalent within DiskSuite Tool.) The -k option recreates the RAID5 metadevice without initializing it, and sets the disk blocks to the OK state. If any errors exist on disk blocks within the metadevice, DiskSuite may begin fabricating data. Instead of using this option, you may want to initialize the device and restore data from tape. See the metainit(1M) man page for more information.
The first three data chunks are written to Disks A through C. The next chunk that is written is a parity chunk, written to Drive D, which consists of an exclusive OR of the first three chunks of data. This pattern of writing data and parity chunks results in both data and parity being spread across all disks in the RAID5 metadevice. Each drive can be read independently. The parity protects against a single disk failure. If each disk in this example were 2 Gbytes, the total capacity of d40 would be 6 Gbytes. (One drive's worth of space is allocated to parity.)
Figure 2-6 RAID5 Metadevice Example (DiskSuite software presents Chunks 1-12 on four physical disks as the single metadevice d40)

    Physical Disk A    Physical Disk B    Physical Disk C    Physical Disk D
    Chunk 1            Chunk 2            Chunk 3            P(1-3)
    Chunk 4            Chunk 5            P(4-6)             Chunk 6
    Chunk 7            P(7-9)             Chunk 8            Chunk 9
    P(10-12)           Chunk 10           Chunk 11           Chunk 12
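The exclusive-OR relationship between data and parity can be demonstrated with a few made-up bytes. This sketch is illustrative only and is not how DiskSuite stores chunks; it simply shows that a failed disk's chunk is recoverable by XORing the survivors.

```python
# Sketch of RAID5 parity as described above: the parity chunk is the
# exclusive OR of the data chunks in its row, and a failed disk's chunk can
# be rebuilt by XORing the surviving chunks. Chunk contents are made up.

from functools import reduce

def xor_chunks(chunks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

row = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # Chunks 1-3 on Disks A-C
parity = xor_chunks(row)                         # P(1-3), written to Drive D

# Disk B fails: rebuild Chunk 2 from the other data chunks plus parity.
rebuilt = xor_chunks([row[0], row[2], parity])
print(rebuilt == row[1])  # -> True
```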
Figure 2-7 Expanded RAID5 Metadevice Example (Physical Disk E has been concatenated to d40; Chunks 13-16 hold data only, but each is included in an existing parity chunk)

    Physical Disk A    Physical Disk B    Physical Disk C    Physical Disk D    Physical Disk E
    Chunk 1            Chunk 2            Chunk 3            P(1-3, 13)         Chunk 13
    Chunk 4            Chunk 5            P(4-6, 14)         Chunk 6            Chunk 14
    Chunk 7            P(7-9, 15)         Chunk 8            Chunk 9            Chunk 15
    P(10-12, 16)       Chunk 10           Chunk 11           Chunk 12           Chunk 16
The parity areas are allocated when the initial RAID5 metadevice is created. One column's (slice's) worth of space is allocated to parity, although the actual parity blocks are distributed across all of the original columns to avoid hot spots. When you concatenate additional slices to the RAID5 metadevice, the additional space is devoted entirely to data; no new parity blocks are allocated. The data on the concatenated slices is, however, included in the parity calculations, so it is protected against single device failures. Concatenated RAID5 metadevices are not suited for long-term use: use a concatenated RAID5 metadevice only until it is possible to configure a larger RAID5 metadevice and copy the data to the larger metadevice.
Note - When you add a new slice to a RAID5 metadevice, DiskSuite zeros all the
blocks in that slice. This ensures that the parity will protect the new data. As data is written to the additional space, DiskSuite includes it in the parity calculations.
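Capacity under these rules reduces to simple arithmetic. The function below is an illustrative sketch; the 2-Gbyte slice size comes from the d40 example above, and the fifth-slice case matches the expanded metadevice in Figure 2-7.

```python
# Capacity sketch for the RAID5 examples above: one column's worth of space
# goes to parity, while slices concatenated later add pure data capacity
# (no new parity space is allocated for them). Sizes are illustrative.

def raid5_capacity_gb(original_slices, slice_gb, concatenated_slices=0):
    data_columns = original_slices - 1      # one column's worth is parity
    return (data_columns + concatenated_slices) * slice_gb

print(raid5_capacity_gb(4, 2))      # 4 x 2-Gbyte slices -> 6 Gbytes usable
print(raid5_capacity_gb(4, 2, 1))   # after concatenating a 5th slice -> 8
```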
Trans Metadevices
A trans metadevice is a metadevice that manages UFS logging. A trans metadevice consists of two devices: a master device and a logging device. A master device is a slice or metadevice that contains the file system that is being logged. Logging begins automatically when the trans metadevice is mounted, provided the trans metadevice has a logging device. The master device can contain an existing UFS file system (because creating a trans metadevice does not alter the master device), or you can create a file system on the trans metadevice later. Likewise, clearing a trans metadevice leaves the UFS file system on the master device intact. A logging device is a slice or metadevice that contains the log. A logging device can be shared by several trans metadevices. The log is a sequence of records, each of which describes a change to a file system. A trans metadevice has the same naming conventions as other metadevices: /dev/md/dsk/d0, d1, d2, and so forth. (For more information on metadevice naming conventions, see Table 1-4.)
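The idea of a log as a replayable sequence of change records can be sketched with a toy model. This is purely illustrative and bears no relation to the UFS log's actual on-disk format; the paths and contents are made up.

```python
# Toy illustration of the logging idea above: the log is an append-only
# sequence of records, each describing a change, and replaying the log
# against an empty state reproduces the final state (e.g. after a crash).

log = []   # stands in for the logging device

def write(path, contents):
    log.append((path, contents))   # record the change in the log

def replay(records):
    state = {}
    for path, contents in records:
        state[path] = contents     # apply each change in order
    return state

write("/etc/motd", "hello")
write("/etc/motd", "hello, world")
recovered = replay(log)
print(recovered)  # -> {'/etc/motd': 'hello, world'}
```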
Caution - You must disable logging for /usr, /var, /opt, or any other file systems used by the system when installing or upgrading Solaris software.
Figure 2-9 Trans Metadevices d1 and d2 Sharing a Logging Device
CHAPTER 3

Hot Spare Pools
This chapter explains hot spare pools. Use the following table to proceed directly to the section that provides the information you need.
- Overview of Hot Spare Pools and Hot Spares on page 53
- Hot Spares on page 54
- Hot Spare Pools on page 54
- Administering Hot Spare Pools on page 57
Hot Spares
A hot spare is a slice (not a metadevice) that is running but not in use. It is reserved, meaning that the hot spare stands ready to substitute for an errored slice in a submirror or RAID5 metadevice. Because slice replacement and the resyncing of failed slices is automatic, hot spares provide protection from hardware failure. The hot spare can be used temporarily until a failed submirror or RAID5 metadevice slice is either fixed or replaced. Hot spares remain idle most of the time, and so do not contribute to normal system operation. In addition, slices designated as hot spares cannot be used in any other metadevice, nor can they be used to hold data while idle. You create hot spares within hot spare pools. Individual hot spares can be included in one or more hot spare pools. For example, you may have two submirrors and two hot spares. The hot spares can be arranged as two hot spare pools, with each pool having the two hot spares in a different order of preference. This enables you to specify which hot spare is used first. It also improves availability by having more hot spares available. You cannot use hot spares within other metadevices, for example within a submirror. They must remain ready for immediate use in the event of a slice failure. A hot spare must be a physical slice; it cannot be a metadevice. In addition, hot spares cannot be used to hold state database replicas. A submirror or RAID5 metadevice can use only a hot spare whose size is equal to or greater than the size of the failed slice in the submirror or RAID5 metadevice. If, for example, you have a submirror made of 1 Gbyte drives, a hot spare for the submirror must be 1 Gbyte or greater.
Note - A prerequisite for hot spares is that the metadevices with which they are
associated have replicated data. When a hot spare takes over, any data on the failed slice must be recreated. For this reason, only mirrors and RAID5 metadevices use hot spares.
Note - You can assign a single hot spare pool to multiple submirrors or RAID5 metadevices. On the other hand, a submirror or a RAID5 metadevice can be associated with only one hot spare pool.

When errors occur, DiskSuite checks the hot spare pool for the first available hot spare whose size is equal to or greater than the size of the slice being replaced. If one is found, DiskSuite changes the hot spare's status to In-Use and automatically resyncs the data. In the case of a mirror, the hot spare is resynced with data from a good submirror. In the case of a RAID5 metadevice, the hot spare is resynced with the other slices in the metadevice. If a slice of adequate size is not found in the list of hot spares, the submirror or RAID5 metadevice that failed goes into an errored state. In the case of the submirror, it no longer replicates the data which that slice represented. In the case of the RAID5 metadevice, data redundancy is no longer available.
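The selection rule just described amounts to a first-fit search in pool order, which is why pool ordering expresses preference. The sketch below is illustrative only; the slice names and sizes are hypothetical.

```python
# Sketch of hot spare selection as described above: take the first available
# hot spare in the pool whose size is at least the failed slice's size.

def pick_hot_spare(pool, failed_slice_blocks):
    """pool: ordered list of (slice_name, size_in_blocks, available)."""
    for name, size, available in pool:
        if available and size >= failed_slice_blocks:
            return name          # its status would become In-Use, then resync
    return None                  # no fit: submirror/RAID5 goes into an errored state

hsp001 = [("c2t1d0s1", 1000, True), ("c3t1d0s1", 4000, True)]
print(pick_hot_spare(hsp001, 2000))  # -> 'c3t1d0s1'
print(pick_hot_spare(hsp001, 8000))  # -> None
```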
Figure 3-1 Hot Spare Example: mirror d1, with submirrors d11 and d12
CHAPTER 4
DiskSuite Tool
This chapter provides a high-level overview of DiskSuite's graphical user interface, DiskSuite Tool. For information on the command-line interface, see the man pages. Use the following table to proceed directly to the section that provides the information you need.
- Overview of DiskSuite Tool on page 59
- Screen Descriptions for DiskSuite Tool on page 61
- Tool Registry on page 102
- Event Notification on page 102
Table 4-1
(Most cells of this table were not preserved; two surviving entries note command-line alternatives: "No, but you could use iostat(1M)" and "No, but many functions can be accomplished with the ssaadm(1M) command.")
Table 4-2
Figure 4-1 Metadevice Editor Window (callouts: Menu Bar, Button Panel, Objects List, Templates, Template Object, Message Line, Canvas, Panner)
The Metadevice Editor window is the main window for DiskSuite Tool, enabling access to other parts of DiskSuite Tool. The following describes the areas within the Metadevice Editor window.
Note - DiskSuite Tool grays out menu items and user interface elements when you cannot use them in a specific context.
- Menu Bar Usually contains five menus: File, Object, Edit, Browse, and Help. For more information on these menus, see the online help (the section Accessing and Using Help on page 101 describes how to access help).
Note - You can configure DiskSuite Tool to display a Tools menu (see Solstice DiskSuite 4.2.1 User's Guide, or metatool-toolsmenu(4)). The Tools menu can be used to launch other applications, such as AdminSuite Storage Manager, from DiskSuite Tool.
- Button Panel Contains buttons that display windows, and act on DiskSuite objects.
Note - You must select an object before clicking either the Commit button or the Put Away button.
- Window Title Bar Displays the window title and the name of the system upon which DiskSuite Tool is currently running. Also displays diskset information, either <local>, for a local diskset, or the name of a shared diskset.
- Objects List Contains metadevices, hot spare pools, and the metadevice state database object.
You can select and drag objects in the Objects List to the canvas. Or you can double-click an object in the Objects List to display it on the canvas. Colored objects indicate a problem:
- Red = Critical
- Orange = Urgent
- Yellow = Attention

Gray scale monitors display problem status information in gray scales. On monochrome monitors, you must horizontally scroll the device list to view the status associated with the objects.
- Objects List Filter Button Enables you to filter the information that the Objects List displays. You can filter by:
- Templates Contains template icons, on the left side of the Metadevice Editor window. For descriptions of the template icons, see the online help.
The template icons are sources for empty DiskSuite objects (templates). Once you have a template displayed on the canvas, you can then build metadevices from it by dropping slices or other metadevices into it. To work with a template, you can either single-click it or drag it to the canvas.
- Template Object Acts as a template for a DiskSuite object, such as a concatenation.
- Message Line Area Displays messages about specific elements on the canvas.
When you place the cursor over an area of the Metadevice Editor window, the message line displays a message about that area.
You can drag DiskSuite objects from the Disk View window, the Objects list, and the Templates to the canvas. Clicking an object on the canvas selects the object.
- Panner Shows the current view in the canvas. (See Figure 4-2.)

Figure 4-2 Panner (the black rectangle shows the objects you are currently viewing; to change the view to "hidden" objects, point to the rectangle and click the SELECT button)
Pointing inside the Panner and clicking the SELECT button changes the current view. You can also point to the black rectangle, press and hold down the ADJUST button, and drag the view area to a new location.
Figure 4-3 Disk View Window (callouts: Menu Bar, Controllers List, Drop Site, Message Line, Legend, Set Filters)
- Menu Bar Usually contains four menus: File, Object, View, and Help. For more information on these menus, see the online help (the section Accessing and Using Help on page 101 describes how to access help).
Note - You can configure DiskSuite Tool to display a Tools menu (see Solstice DiskSuite 4.2.1 User's Guide, or metatool-toolsmenu(4)). The Tools menu can be used to launch other applications, such as Solstice Storage Manager, from DiskSuite Tool.
Figure 4-4 Color Drop Sites (device mappings are displayed after the device is dropped in the drop site; colored slices are used by one or more devices listed in the corresponding drop site)
Dropping a metadevice object onto a color drop site assigns a color to that metadevice object. The color, in turn, shows up on the Disk View window canvas, enabling you to see physical-to-logical device relations. Each drop site has a pop-up menu that contains:
- Info Displays the Information window for the object.
- Clear Sets the color drop site to Unassigned.
You can change the colors for each of the eight color drop sites. Edit the X resource file, /usr/lib/lvm/X11/app-defaults/Metatool. It contains a list of all the X resources used by metatool(1M). See Solstice DiskSuite 4.2.1 User's Guide for more information on editing this file. A monochrome monitor will show only one drop site, black.
- Disk View Canvas Displays the physical devices and mappings on the canvas.

To select a disk on the Disk View canvas, click the top of the disk. To select a slice, click inside the slice rectangle. You can drag the object, whether selected or not, to a template on the Metadevice Editor canvas and add or replace slices in that template. The canvas is also a destination for drag and drop. When devices are dropped on the canvas from the Metadevice Editor window, they take on the next available color. If all drop sites are in use, a window is displayed that enables you to select a drop site. Also, if any object is selected on the editor canvas and the Disk View window is invoked, the objects will automatically take on the color of the next available drop site. The graphical representations of objects on the Disk View canvas are shown in Figure 4-5.
Figure 4-5
- Legend On color systems, contains eight color drop sites that provide color cues for mappings. Each color can be hidden or exposed using the toggle button to the left of each color box. On monochrome systems, only one drop site is available, which is black.
The legend region of the Disk View window can be turned on and off by choosing Show Legend from the View menu.
- Disk View Panner Shows the current view in the canvas. (See Figure 4-6.)

Figure 4-6 Disk View Panner (to change the view to hidden objects, point to the rectangle and click the SELECT button)
Pointing inside the Disk View Panner and clicking the SELECT button changes the current view. You can also point to the black rectangle, press and hold down the ADJUST button, and drag the view area to a new location.
Figure 4-7 Grapher Window (callout: Legend)
- Menu Bar Contains two menus titled File and All Graphs. For more information on these menus, see the online help (the section Accessing and Using Help on page 101 describes how to access help).
- Canvas Shows instantaneous statistics, and has toggle buttons for controlling the information displayed.
- Legend Contains a legend for all the graphs.
When you add a device to the Grapher window, a button bar appears. If you continue to add devices on the canvas, they appear in individual rows with a control area and graph. Figure 4-8 shows the Grapher window with a metadevice. An explanation of the buttons follows.
Figure 4-8 Grapher Window with a Metadevice (callouts: Info Button, Pause/Resume Button, Arrows)
- Collapse Toggle Button Collapses a canvas row.
- Info Button Displays the device's Information window.
- Put Away Button Removes the device from the Grapher window.
- Pause/Resume Button Suspends updates to the Grapher window (Pause), or alternately, resumes updates (Continue).
- Arrows Reorder rows.
Information Windows
Several information windows are present in DiskSuite Tool. These information windows include:
- Disk Information Window (see Disk Information Window on page 70)
- Slice Information Window (see Slice Information Window on page 72)
- Device Statistics Sheet (see Device Statistics Window on page 74)
- Concat Information Window (see Concat Information Window on page 75)
- Stripe Information Window (see Stripe Information Window on page 77)
- Mirror Information Window (see Mirror Information Window on page 79)
- Trans Information Window (see Trans Information Window on page 82)
- Hot Spare Information Window (see Hot Spare Information Window on page 84)
- RAID Information Window (see RAID Information Window on page 86)
- Metadevice State Database Information Window (see Metadevice State Database Info Window on page 88)
- Tray Information Window (see Tray Information Window on page 90)
- Controller Information Window (see Controller Information Window on page 91)
Figure 4-9
Table 4-3 Disk Information Window Functionality (fields: Type, In Use, Capacity, Unallocated Size, Start, Stop)
Table 4-3 (continued)

Show Slices A toggle button that expands and collapses the slice view. The number of non-zero size slices on the disk is shown in parentheses on the button.

Slice Information A button that brings up the Slice Information window for each selected slice. Point to the slice area and click the SELECT button to select a slice. To select multiple slices, either press and hold down the Control key while pointing to the slices and clicking the SELECT button, or hold down the SELECT button and drag the cursor over the slices.
Table 4-4 lists additional functionality that appears for SPARCstorage Array disks.

Table 4-4 Disk Information Screen, SPARCstorage Array Functionality
- Displays the vendor name.
- Displays the product identification number.
- Displays the product firmware revision information.
- Radio buttons that enable fast writes or synchronous fast writes, or disable fast writes.
- Select a slice on the Disk Information window by pointing to it and pressing the SELECT button. Then click the Slice Information button.
- Point to a slice of a disk that is displayed on the Disk View window's canvas. Press and hold down the MENU button to display the pop-up menu for the slice, then select the Info option.
- Point to a slice inside any metadevice displayed on the Metadevice Editor's canvas. Press and hold down the MENU button to display the pop-up menu for the slice, then select the Info option.
Figure 4-10
This button enables the slice. The button is available only if the data on the slice is replicated in a mirror or RAID5 metadevice, or if the slice is used as a hot spare that is currently broken.
Table 4-5 (continued)
- Opens the Disk Information window.
- Displays the Physical to Logical Device Mappings window. (The Physical to Logical Device Mappings window is not dynamically updated when new mappings are created.)
- Select a metadevice on the Metadevice Editor window's canvas, or a disk on the Disk View canvas, by pointing to it and pressing the SELECT button. Select Statistics from the Object menu.
- Point to a metadevice displayed on the Metadevice Editor window's canvas, or a disk displayed on the Disk View canvas. Press and hold down the MENU button to display the pop-up menu for the metadevice or disk, then select the Statistics option.
Figure 4-11

Table 4-6
Derived Values
- Double-click the Concat/Stripe object in the Objects list. The Concat/Stripe object is opened on the Metadevice Editor's canvas. Select Info from the Objects menu.
- If the Concat/Stripe object is on the Metadevice Editor's canvas, point inside the template. Press and hold down the MENU button to display the pop-up menu for the concatenation, then select the Info option.
- If the Concat/Stripe object is on the Metadevice Editor's canvas, point inside the top rectangle of the object and double-click.
Figure 4-12

Table 4-7 lists the functionality associated with the regions of the Concat Information window.

Table 4-7
Table 4-7 (continued)

Hot Spare Pool The entry field for specifying the name of a hot spare pool to be associated with the concatenation. To attach a hot spare pool, enter the name in the field and click the Attach button. The Hot Spare Pool Information window is displayed when you enter a hot spare pool name and click the Info button.

Show Stripes This toggle button enables you to turn on and off the stripe manipulation region. The number of stripes in the concatenation is shown in parentheses on the button. The following functionality is available in this region:
- List of stripes Provides the size and status of each stripe included in the concatenation.
- Attach Attaches a new and empty stripe to the concatenation.
- Remove Removes the selected stripe from the concatenation.
- Info Brings up the Stripe Information window for the selected (highlighted) stripes.
- Double-click the Concat/Stripe object in the Objects list. The Concat/Stripe object is opened on the Metadevice Editor's canvas. Point to the stripe rectangle. Select Info from the Objects menu.
- If the Concat/Stripe object is on the Metadevice Editor's canvas, point inside the stripe rectangle of the Concat/Stripe object and double-click.
- If the Concat/Stripe object is on the Metadevice Editor's canvas, point inside the stripe rectangle. Press and hold down the MENU button to display the pop-up menu, then select the Info option.
Figure 4-13

Table 4-8 lists the functionality associated with the regions of the Stripe Information window.

Table 4-8
Table 4-8 (continued)

Interlace The default interlace value is 16 Kbytes. To change the interlace value, click the Custom button and type the value in the field. The menu button to the right of the field enables you to specify the units used. The values on the menu are Gbytes, Mbytes, Kbytes, and Sectors. The default is Kbytes. After the Custom field is complete, the Attach button is used to assign the interlace value to the stripe. After a stripe is committed, the interlace value cannot be changed.

Show Slices This toggle button enables you to turn on and off the slice manipulation region. The number of slices in the stripe is shown in parentheses on the button. The following functionality is available in this region:
- Scrolling List Shows slices included in the stripe. The information in this region includes the name of the slice, size, number of state database replicas on the slice, and the status.
- Enable Enables the selected slices if they are disabled.
- Remove Removes the selected slices.
- Slice Specifies a new slice to be attached to the stripe, or replaces the selected slice. If no slice is selected, the button is unavailable.
- Attach Attaches the slice specified in the Slice field to the stripe. This button is active only when a slice name is entered in the field.
- Replace Replaces the selected slice with the slice entered in the Slice field. This button is active only when a slice name has been entered in the field and a slice is selected on the scrolling list.
- Info Displays the Slice Information window for the selected (highlighted) slice.
- Double-click the mirror object in the Objects list. The mirror is opened on the Metadevice Editor's canvas. Select Info from the Objects menu.
- If the mirror object is on the Metadevice Editor's canvas, point inside the mirror rectangle. Press and hold down the MENU button to display the pop-up menu, then select the Info option.
- If the mirror object is on the Metadevice Editor's canvas, double-click inside the mirror rectangle.
Figure 4-14

The Mirror object must be committed before the policy changes take effect. Table 4-9 lists the functionality associated with the regions of the Mirror Information window.

Table 4-9
Status
Size
Table 4-9 (continued)

Use Shows how the mirror is currently used, for example, file system, swap, or shared log. If the use is shared log, a button labeled Show Trans is displayed. The Show Trans button opens a Sharing Information window that shows the Trans devices that share the Mirror.

Show Submirrors This toggle button enables you to turn on and off the submirror manipulation region. The number of submirrors in the mirror is shown in parentheses on the button.

Pass A pass number in the range 0-9 can be assigned to a mirror using the Pass button menu. The pass (resync) number determines the order in which that mirror is resynced during a system reboot. The default is 1. Smaller pass numbers are resynced first. If 0 is chosen, the resync is skipped. A 0 should only be used for mirrors mounted as read-only. If different mirrors have the same pass number, they are resynced concurrently.

Read Option There are three kinds of read options associated with mirrors: Round Robin, Geometric, and First. The default read option is Round Robin, also called balanced load. When set to Round Robin, all reads are made in round-robin order from all the submirrors in the mirror. That is, the first read comes from the first submirror, the next read comes from the second submirror, and so forth. The Geometric option provides faster performance on sequential reads or when you are using disks with track buffering. Geometric reads allow read operations to be divided among submirrors on the basis of a logical disk block address. For instance, with a three-way mirror the disk space on the mirror is divided into three (equally sized) logical address ranges. Reads from the three regions are then performed by separate submirrors (for example, reads to the first region are performed by the first submirror). The First option specifies reading from only the first submirror. This would be specified only if you have a second submirror that has poor read I/O characteristics.
TABLE 49
(continued)
A button that enables you to set parallel or serial writes to the submirror. Parallel writes are the default action of the metadisk driver, meaning the writes are dispatched to all submirrors simultaneously. Serial writes specify that writes to one submirror must complete before the next submirror write is started.
The following functionality is available in this region:
- Show Submirrors - This toggle button enables showing or hiding the list of submirrors.
- Scrolling List - Shows the submirrors included in the mirror. The information in this region includes the name, type, size, and status. Click a submirror to select it. When submirrors are selected, actions can be performed on them.
- Online - Brings selected submirrors back online. This button is active only when the selected submirror is offline.
- Offline - Takes selected submirrors offline. This button is active only when the selected submirror is online.
- Remove - Detaches the selected submirrors.
- Info - Opens the Concat Information window for the selected submirror.
- Device - Specifies a new submirror in the field to attach or replace. The field is cleared when you click the Attach or Replace button.
- Attach - Adds the specified submirror. This button is active only when a submirror or device is entered in the Device field.
- Replace - Replaces the selected submirror with the submirror entered in the field. This button is active only when a submirror or device is entered in the field and one in the list is selected.
- Double-click the Trans object in the Objects list. The object is opened on the Metadevice Editor's canvas. Select Info from the Objects menu.
- If the Trans Metadevice object is on the Metadevice Editor's canvas, point inside the Trans rectangle. Press and hold down the MENU button to display the pop-up menu, then select the Info choice.
Solstice DiskSuite 4.2.1 Reference Guide February 2000
- If the Trans Metadevice object is on the Metadevice Editor's canvas, point inside the Trans rectangle and double-click.
Figure 4-15
The Trans object must be committed before the changes take effect. Table 4-10 lists the functionality associated with the regions of the Trans Information window.
TABLE 4-10
TABLE 4-10 (continued)
Master Device - A region that contains the device name of the master device. The Attach button toggles between Attach and Remove. Other information in the region includes:
- Type - The type of device used as the master.
- Status - Shows the description of the master's status.
- Size - Displays the size of the master device.
- Info - Displays the information form for the master device.

Logging Device - A region that contains the device name where the log device is located. The Remove button toggles between Attach and Remove. Other information in the region includes:
- Type - The type of device used as the log.
- Status - Shows the description of the log's status.
- Size - Displays the size of the log device.
- Info - Displays the information form for the log device.
- Double-click the Hot Spare Pool in the Objects list. The hot spare pool object is opened on the Metadevice Editor's canvas. Select Info from the Object menu.
- If the Hot Spare Pool object is on the Metadevice Editor's canvas, point inside the top of the Hot Spare Pool rectangle. Press and hold the MENU button to display the pop-up menu, then select the Info option.
- If the Hot Spare Pool object is on the Metadevice Editor's canvas, point inside the top of the Hot Spare Pool rectangle and double-click.
Figure 4-16
The Hot Spare Pool object must be committed before the changes take effect. Table 4-11 lists the functionality associated with the regions of the Hot Spare Pool Information window.
TABLE 4-11
TABLE 4-11 (continued)
Size - The size of the largest slice in the Hot Spare Pool.

Associated With - A scrolling list that displays the device names, types, and status of all metadevices associated with the Hot Spare Pool. To display information about an object, either click the object and then click Info, or point to the object and double-click.

Info - Displays the Concatenation Information window for the selected (highlighted) Concat/Stripe in the Associated With region.

Hot Spares - Contains a list of all the slices included in the Hot Spare Pool. New slices can be added, and existing slices can be manipulated. The functions of the buttons include:
- Show Hot Spare - A toggle button that shows or hides the bottom portion of the window.
- List of slices - A scrolling list of the slices included in the Hot Spare Pool.
- Enable - Enables selected slices that are disabled.
- Remove - Removes the selected slices from the Hot Spare Pool.
- Info - Displays the Slice Information window for the selected (highlighted) slice.
- Slice - Specifies a new slice to attach, or to replace the selected slice.
- Attach - Attaches the slice specified in the Slice field to the Hot Spare Pool. This button is active only when a slice name has been entered in the field.
- Replace - Replaces the selected spare slice with the slice entered in the field. This button is active only when a slice name has been entered in the field and a slice is selected in the list of slices.
- Double-click the RAID5 metadevice in the Objects list. The RAID5 metadevice is opened on the Metadevice Editor's canvas. Select Info from the Object menu.
- If the RAID5 metadevice is on the Metadevice Editor's canvas, point inside the top of the rectangle. Press and hold the MENU button to display the pop-up menu, then select the Info choice.
- If the RAID5 metadevice is on the Metadevice Editor's canvas, point inside the top of the rectangle and double-click.
Figure 4-17
The RAID5 metadevice must be committed before the changes take effect. Table 4-12 lists the functionality associated with the regions of the RAID Information window.
TABLE 4-12
Status Size
TABLE 4-12 (continued)
Use - The use of the RAID5 metadevice, for example, file system or swap. If the use of the RAID5 metadevice is a Trans Log, a Show Trans button is positioned to the right of the field.

Hot Spare Pool - This field enables assigning a Hot Spare Pool to the RAID5 metadevice. It has the following functions:
- Attach/Detach - Attaches or detaches the specified Hot Spare Pool to the RAID5 metadevice.
- Info - Displays the Hot Spare Pool Information window for the specified Hot Spare Pool.

Interlace - The default interlace value is 16 Kbytes. To change the interlace value, click the Custom button and type the value in the field. The menu button to the right of the field enables you to specify the units used. The values on the menu are Gbytes, Mbytes, Kbytes, and Sectors; the default is Kbytes. After the Custom field is complete, the Attach button is used to assign the interlace value to the RAID5 metadevice. After a RAID5 metadevice is committed, the interlace value cannot be changed.

Slices - The following functionality is available in this region:
- Show Slices - A toggle button that shows or hides the scrolling list of components at the bottom of the window.
- Scrolling List - A list of the slices included in the RAID5 metadevice. The information in this region includes the name of the slice, its size, the number of state database replicas on the slice, and its status.
- Enable - Enables the selected slices if they are disabled.
- Remove - Removes the selected slices.
- Slice - Specifies a new slice to attach to the RAID5 metadevice or to replace the selected slice.
- Attach - Attaches the slice specified in the Slice field to the RAID5 metadevice. This button is active only when a slice name is entered in the field.
- Replace - Replaces the selected RAID5 slice with the slice entered in the Slice field. This button is active only when a slice name has been entered in the field and a slice is selected from the scrolling list.
- Info - Displays the Slice Information window for the selected (highlighted) slice.
- Double-click the MetaDB object in the Objects list. The MetaDB object is opened on the Metadevice Editor's canvas. Select Info from the Object menu.
- If the MetaDB object is on the Metadevice Editor's canvas, point inside the top of the rectangle. Press and hold the MENU button to display the pop-up menu, then select the Info choice.
- If the MetaDB object is on the Metadevice Editor's canvas, point inside the top of the rectangle and double-click.
Figure 4-18
The MetaDB object must be committed before the changes take effect. Table 4-13 lists the functionality associated with the regions of the Metadevice State Database Information window.
TABLE 4-13
Figure 4-19
Table 4-14 lists the functionality associated with the Tray Information window.
TABLE 4-14
Selecting a disk in the disk information pane and clicking the Info button displays the Disk Information window for that disk.
- If the controller is on the Disk View canvas, point at the controller. Press and hold the MENU button to display the pop-up menu, then select the Info option.
Figure 4-20
Table 4-15 lists the functionality associated with the Controller Information window.
TABLE 4-15
TABLE 4-16
Fields: Fan Status, Battery Status, Vendor, Product ID, Product Rev, Firmware Rev
Browsers
Three browsers can be accessed from the Browse menu on the Metadevice Editor window: the Slice Browser, the Metadevice Browser, and the Hot Spare Pool Browser.
Figure 4-21
The Slice, Metadevice, and Hot Spare Pool browsers all have the same window title bar and choices on the menu bar. The File menu enables you to exit the browser. The Filters menu enables you to set the filters and turn them on and off. The View menu enables you to change the order in which information is displayed in the device list. However, there are some subtle differences in the dialog boxes used to set the filters. The device list varies in the following ways:
- Slice Browser Device List - To view additional information about the slices listed here, point to a slice and double-click the SELECT button. The Slice Information window displays information about the slice and provides access to the Disk Information and Associations windows. The Slice Browser device list contains the information shown in Table 4-17.
TABLE 4-17
Fields: Device Name
TABLE 4-17 (continued)

Status - Reported as OK, Resyncing, Enabled, Critical, Spared, Urgent, or Attention.
Use - Contains one of the following values: Unassigned, Trans Log, Trans Master, MetaDB Replica, Component, File System currently mounted on slice, Overlap, or Hot Spare.
- Metadevice Browser Device List - To view additional information about the metadevices listed, point to a metadevice and double-click the SELECT button. An information window is displayed. The Metadevice Browser device list contains the information shown in Table 4-18.
TABLE 4-18
Fields: Name, Type
- Hot Spare Pool Device List - To view additional information about the hot spare pools listed, point to a hot spare pool and double-click the SELECT mouse button. The Hot Spare Information window is displayed, showing a list of the metadevices that have an association with the hot spare pool. It also shows information about the disks in the pool. The Hot Spare Pool device list contains the information shown in Table 4-19.
TABLE 4-19
Fields: Name, Status
Figure 4-22
TABLE 4-20 (continued)
Turning on the Disk Type toggle button enables you to select the types of disks you wish to have displayed in the browser. The menu always enables you to select Any, but the other selections depend on the types of disks attached to your system. Searches only for slices that have a broken status.
The Finder
The Finder is used to locate an object in the Metadevice Editor window, or to locate the device associated with a specified mount point. The Finder is accessed from the Browse menu on the Metadevice Editor window.
- To locate an object inside the Metadevice Editor window, select the Find choice and either type the device name, or click the radio button beside Mount Point and type the mount point to find (see Figure 4-23). If the object is anywhere on the canvas, it is placed in the upper left corner and becomes the current selection (any previously selected objects are deselected). If the object is in the Device List, it is opened and placed in the upper left corner of the canvas. The text fields are not case sensitive. Wildcard character support includes both the asterisk (*) and the question mark (?). The asterisk matches zero or more characters and the question mark matches one character.
Figure 4-23 Finder Window
Dialog Boxes
DiskSuite Tool displays feedback via four different types of dialog boxes at various times. You must respond to a dialog box before you can perform any other action in DiskSuite Tool.
Caution - Read and understand the dialog boxes before responding. You can inadvertently lose data. An example of a warning dialog box is shown in Figure 4-24.
Figure 4-24
The types of dialog boxes and the information they display are shown in Table 4-21.
TABLE 4-21 Dialog Boxes

Error - When you attempt to perform an action that will result in an error, an error dialog box appears with a notification of the error.
Warning - When you attempt to perform an action that results in a warning, you are given the opportunity to cancel the action. Appendix A offers a listing of the error messages and the corrective actions.
Confirmation - These provide a way for you to confirm an action that has been selected. They appear when an action you initiated cannot be undone. The message string in each dialog varies according to the operation.
Information - These provide a helpful message. These dialog boxes appear with a large "i" on the left side of the message.
Figure 4-25
Selections on the Configuration Log window's File menu enable you to clear the scrolling list, log the messages to a user-designated file, and close the window. Double-clicking an entry in the list brings up the information dialog window for the device and opens the device on the Metadevice Editor's canvas.
Selections on the Problem List window's File menu enable you to log the messages to a user-designated file and close the window. The text field on the right side of the button displays the date and time of the most recent update. Double-clicking an entry in the list brings up the information window for the device and places the device on the Metadevice Editor's canvas.
Note - When DiskSuite Tool is minimized, its icon flashes when there is a critical problem.
- To access online help, click Help on the menu bar, then select either On Help or On Window from the menu.
- To access online help from within a window, click the Help button.
The DiskSuite Tool help utility is shown in Figure 4-27.
Figure 4-27
The Help titles displayed in the top window pane identify the list of subjects available for each level of help. The text in the bottom window pane describes information about using the current menu or command. Use the scrollbars to the right of each pane to scroll through the help information displayed. On the left side of the Help utility are buttons used to find information and navigate through the help system. The buttons are described in Table 4-22.
TABLE 4-22
Done
Tool Registry
This is an application registry file used by DiskSuite Tool to initialize its Tools menu selection. Refer to the metatool-toolsmenu(4) man page for more information.
Event Notification
Event notification is a feature that keeps you aware of dynamic state changes, such as creation of a metadevice, a change in a metadevice's status, or device errors. Event notification takes care of the following:
- More than one administrator at a time, if necessary, can run DiskSuite Tool on the same host with the assurance that state changes are propagated to each instance of DiskSuite Tool.
- When running multiple instances of DiskSuite Tool on the same host, event notification ensures that proper locking occurs to prevent one instance of DiskSuite Tool from overwriting the changes made by another. When one DiskSuite Tool has an uncommitted action, it holds a lock until a commit occurs or the device is removed.
Note - Though you can run multiple instances of DiskSuite Tool on the same host, it is best to avoid doing so.
- You can run both DiskSuite Tool and the command line utilities together. Event notification is able to pass state changes from the command line to DiskSuite Tool.
Note - DiskSuite Tool provides the same functionality as the ssaadm(1M) command to start and stop a disk. However, do not use DiskSuite Tool and ssaadm(1M) together. Doing so could cause DiskSuite Tool to incorrectly display a disk's status. Always use one or the other to both stop and start a disk.
CHAPTER 5
Disksets
This chapter explains shared disksets. Use the following table to proceed directly to the section that provides the information you need.
- What Do Disksets Do? on page 105
- How Does DiskSuite Manage Disksets? on page 105
- Diskset Conventions on page 106
- Administering Disksets on page 108
Note - Disksets are intended for use with Solstice HA, or another supported third-party HA framework. DiskSuite by itself does not provide all the functionality necessary to implement a failover configuration.

In addition to the shared diskset, each host has a local diskset, which consists of all of the disks on a host that are not in a shared diskset. A local diskset belongs to a specific host and contains the metadevice state database for that host's configuration. Metadevices and hot spare pools in a shared diskset can only consist of drives from within that diskset. Once you have created a metadevice within the diskset, you can use it just as you would a physical slice. However, disksets do not support the mounting of file systems from the /etc/vfstab file. Similarly, metadevices and hot spare pools in the local diskset can only consist of drives from within the local diskset.

When you add disks to a diskset, DiskSuite automatically creates the state database replicas on the diskset. When a drive is accepted into a diskset, DiskSuite repartitions it so that the state database replica for the diskset can be placed on the drive. Drives are repartitioned when they are added to a diskset only if Slice 7 is not set up correctly. A small portion of each drive is reserved in Slice 7 for use by DiskSuite; the remainder of the space on each drive is placed into Slice 0. Any existing data on the disks is lost by repartitioning. After a drive is added to a diskset, it may be repartitioned as necessary, with the exception that Slice 7 is not altered in any way.

Unlike local diskset administration, you do not need to create or delete diskset metadevice state databases by hand. DiskSuite tries to place a reasonable number of state database replicas (on Slice 7) across all drives in the diskset. If necessary, however, you can manually administer the replicas. (See the Solstice DiskSuite 4.2.1 User's Guide.)
Note - Disksets are not intended for local (not dual-connected) use.
Diskset Conventions
- What are the diskset naming conventions?
Disksets use this naming convention:

/dev/md/setname

Metadevices within the shared diskset use these naming conventions:

/dev/md/setname/{dsk | rdsk}/dnumber

where setname is the name of the diskset, and number is the metadevice number (usually between 0 and 127).
Hot spare pools use setname/hspxxx, where xxx is in the range 000-999. Metadevices within the local diskset follow the standard DiskSuite metadevice naming conventions. (See Table 1-4.)
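For example, assuming a hypothetical diskset named blue (the set name and unit numbers below are illustrative only), the names break down as follows:

/dev/md/blue/dsk/d0      block device for metadevice d0 in diskset blue
/dev/md/blue/rdsk/d0     raw device for metadevice d0 in diskset blue
blue/hsp000              hot spare pool hsp000 in diskset blue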
- What are the requirements for shared disk drive device names?
A shared disk drive must be seen on both hosts at the same device name (c#t#d#). The disk drive must also have the same major/minor numbers on both hosts. If the minor numbers are not the same on both hosts, you typically see the message "drive c#t#d# is not common with host xxxx" when adding drives to the diskset. Finally, the shared disks must use the same driver name (ssd). See the Solstice DiskSuite 4.2.1 User's Guide for more information on setting up shared disk drives in a diskset.
- Can a file system that resides on a metadevice in a diskset be mounted automatically at boot via the /etc/vfstab file?
No. The necessary diskset RPC daemons (rpc.metad and rpc.metamhd) do not start early enough in the boot process to permit this. Additionally, the ownership of a diskset is lost during a reboot.
Figure 5-1 Disksets Example
In this configuration, Host A and Host B share disksets A and B. They each have their own local diskset, which is not shared. If Host A fails, Host B can take over control of Host A's shared diskset (Diskset A). Likewise, if Host B fails, Host A can take control of Host B's shared diskset (Diskset B).
Administering Disksets
Disksets must be created and configured using the DiskSuite command line interface (the metaset(1M) command). After you have created a diskset, you can administer state database replicas, metadevices, and hot spare pools within a diskset using either DiskSuite Tool or the command line utilities. After drives are added to a diskset, the diskset can be reserved (or taken) and released by hosts in the diskset. When a diskset is reserved by a host, the other host in the diskset cannot access the data on the drives in the diskset. To perform maintenance on a diskset, a host must be the owner of the diskset or have reserved the diskset. A host takes implicit ownership of the diskset by putting the first drives into the set. The SCSI reserve command is issued to each drive in the diskset to reserve it for exclusive use by the current host. Each drive in the diskset is probed once every second to determine that it is still reserved.
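As a hedged sketch (the diskset name relo-red, the host names red and blue, and the drive names are all illustrative, not from this guide), creating a diskset and adding drives from the command line might look like this:

# metaset -s relo-red -a -h red blue
# metaset -s relo-red -a c1t2d0 c1t3d0

The first command creates the set and adds the two hosts; the second adds two shared drives to the set. Running metaset with no arguments displays the status of the disksets the host knows about.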
Note - If a drive is unexpectedly found not to be reserved, the host panics. This behavior helps minimize the data loss that would occur if two hosts were to access the same drive simultaneously.
Reserving a Diskset
Before a host can use drives in a diskset, the host must reserve the diskset. There are two methods of reserving a diskset:
- Safely - When you safely reserve a diskset, DiskSuite checks whether another host currently has the set reserved. If another host has the diskset reserved, your host is not allowed to reserve the set.
- Forcibly - When you forcibly reserve a diskset, DiskSuite reserves the diskset whether or not another host currently has the set reserved. This method is generally used when a host in the diskset is down or not communicating. All disks within the set are taken over and failfast is enabled. The metadevice state database is read in on the host performing the reservation, and the shared metadevices configured in the set become accessible. If the other host had the diskset reserved at this point, it would panic due to reservation loss.
Normally, two hosts in a diskset cooperate with each other to ensure that drives in a diskset are reserved by only one host at a time. A normal situation is defined as both hosts being up and communicating with each other.
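A sketch of the two reservation methods using the metaset(1M) command (the set name relo-red is illustrative only):

# metaset -s relo-red -t
# metaset -s relo-red -t -f

The first command takes (reserves) the diskset safely; the second, with the -f flag, takes it forcibly.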
Releasing a Diskset
Sometimes it may be desirable to release a diskset. Releasing a diskset can be useful when performing maintenance on the drives in the set. When a diskset is released, it cannot be accessed by the host. If both hosts in a diskset release the set, neither host in the diskset can access the drives in the set.
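A release is likewise performed with the metaset(1M) command; the set name relo-red is illustrative only:

# metaset -s relo-red -r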
CHAPTER 6
This chapter describes how to use the /etc/lvm/md.tab file. It also explains the purpose of the /etc/lvm/md.cf file. Use the following table to locate specific information in this chapter.
- Overview of the md.tab File on page 111
- Creating Initial State Database Replicas in the md.tab File on page 112
- Creating a Striped Metadevice in the md.tab File on page 112
- Creating a Concatenated Metadevice in the md.tab File on page 113
- Creating a Concatenated Stripe in the md.tab File on page 113
- Creating a Mirror in the md.tab File on page 114
- Creating a Trans Metadevice in the md.tab File on page 114
- Creating a RAID5 Metadevice in the md.tab File on page 115
- Creating a Hot Spare Pool in the md.tab File on page 115
- Overview of the md.cf File on page 116
When you edit the /etc/lvm/md.tab file, you specify one complete configuration entry per line, using the syntax of the metainit(1M), metadb(1M), and metahs(1M) commands. You then run the metainit(1M) command either with the -a option, to activate all metadevices in the /etc/lvm/md.tab file, or with the metadevice name that corresponds to a specific entry in the file.
This file entry creates three state database replicas on each of the three slices. mddb01 identifies the metadevice state database. -c 3 specifies that three state database replicas are placed on each slice. The metadb(1M) command activates this entry in the /etc/lvm/md.tab file.
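The entry itself did not survive in this copy of the text; an md.tab entry consistent with the description (the slice names are hypothetical) might look like this:

#
# (initial state database replicas)
#
mddb01 -c 3 c0t1d0s0 c0t2d0s0 c0t3d0s0

It would then be activated with metadb -a mddb01.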
The number 1 specifies that a single stripe is created (a striped metadevice consisting of one stripe). The number 2 specifies how many slices to stripe. The -i 32k specifies a 32-Kbyte interlace value. (The default interlace value is 16 Kbytes.)
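The entry being described is missing from this copy; an entry matching the description (the metadevice and slice names are hypothetical) might be:

#
# (striped metadevice of two slices)
#
d15 1 2 c0t1d0s2 c0t2d0s2 -i 32k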
The number 4 specifies that four striped slices are created in the concatenated metadevice. Each stripe is made of one slice; therefore, you specify the number 1 for each slice.
Note - The first disk sector in the concatenated metadevice contains a disk label. To preserve the disk labels on devices /dev/dsk/c0t2d0s0, /dev/dsk/c0t3d0s0, and /dev/dsk/c0t4d0s0, DiskSuite skips the entire first cylinder of these disks. This permits higher-level file system software to optimize block allocations correctly.
#
# (concatenation of two stripes, each made of three disks)
#
d75 2 3 c0t1d0s2 c0t2d0s2 c0t3d0s2 -i 16k \
     3 c1t1d0s2 c1t2d0s2 c1t3d0s2 -i 32k
The -i 16k specifies a 16-Kbyte interlace value for the first stripe. The -i 32k specifies a 32-Kbyte interlace value for the second stripe. The address blocks for each set of three disks are interlaced across the three disks.
The -m creates a one-way mirror consisting of submirror d51. The other two submirrors, d52 and d53, must be attached later with the metattach(1M) command. The default read and write options in this example are a round-robin read policy and a parallel write policy.
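The md.tab entry under discussion does not appear in this copy; an entry consistent with the description (the mirror name d50 and the slice names are hypothetical) might be:

#
# (mirror)
#
d50 -m d51
d51 1 1 c0t1d0s2
d52 1 1 c0t2d0s2
d53 1 1 c0t3d0s2

After the metadevices are activated with metainit(1M), the remaining submirrors would be attached with metattach d50 d52 and metattach d50 d53.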
#
# (trans)
#
d1 -t d10 d20
d10 -m d11
d11 1 1 c0t1d0s2
d12 1 1 c0t2d0s2
d20 -m d21
d21 1 1 c1t1d0s2
d22 1 1 c1t2d0s2
The -m creates the two one-way mirrors, d10 and d20. The -t creates d10 as the master device and d20 as the logging device. The submirrors d12 and d22 are attached later by using the metattach(1M) command on the d10 and d20 mirrors.
The -r creates the RAID5 metadevice. The -i specifies an interlace value of 20 Kbytes. DiskSuite stripes the data and parity information across the slices c0t1d0s1, c1t0d0s1, and c2t0d0s1. If you want to concatenate more slices to the original RAID5 metadevice, you can do so later by using the metattach(1M) command.
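The entry itself is not shown in this copy; an md.tab entry matching the description (the metadevice name d45 is hypothetical) might be:

#
# (RAID5 metadevice of three slices)
#
d45 -r c0t1d0s1 c1t0d0s1 c2t0d0s1 -i 20k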
#
# (mirror and hot spare)
#
d10 -m d20
d20 1 1 c1t0d0s2 -h hsp001
d30 1 1 c2t0d0s2 -h hsp002
d40 1 1 c3t0d0s2 -h hsp003
hsp001 c2t2d0s2 c3t2d0s2 c1t2d0s2
hsp002 c3t2d0s2 c1t2d0s2 c2t2d0s2
hsp003 c1t2d0s2 c2t2d0s2 c3t2d0s2
The -m creates a one-way mirror consisting of submirror d20. You attach the other submirrors, d30 and d40, later by using the metattach(1M) command. The -h specifies which hot spare pool belongs to each submirror. Three disks are used as hot spares, associated with three separate hot spare pools: hsp001, hsp002, and hsp003.
Note - The /etc/lvm/md.tab file can be used to both create hot spare pools and associate them with metadevices at the same time.
CHAPTER 7
Configuration Guidelines
Introduction
This chapter describes some ways to set up your configuration. Use the following table to locate specific information in this chapter.
- Configuration Planning Overview on page 117
- Configuration Planning Guidelines on page 118
- RAID5 Metadevices and Striped Metadevices on page 122
- Optimizing for Random I/O and Sequential I/O on page 123
- Striping Trade-offs on page 125
- Logging Device Trade-offs on page 126
- State Database Replicas on page 127
Striping generally has the best performance, but it offers no data protection. For write intensive applications, mirroring generally has better performance than RAID5.
Concatenation Guidelines
- A concatenated metadevice uses less CPU time than striping.
- Concatenation works well for small random I/O.
- Avoid using physical disks with different disk geometries.
Disk geometry refers to how sectors and tracks are organized for each cylinder in a disk drive. The UFS organizes itself to use disk geometry efficiently. If slices in a concatenated metadevice have different disk geometries, DiskSuite uses the geometry of the first slice. This may decrease UFS file system efficiency.
Note - Disk geometry differences do not matter with disks that use Zone Bit Recording (ZBR), because the amount of data on any given cylinder varies with the distance from the spindle. Most disks now use ZBR.
- When constructing a concatenation, distribute slices across different controllers and buses. Cross-controller and cross-bus slice distribution can help balance the overall I/O load.
Striping Guidelines
- Set the stripe's interlace value correctly.
- The more physical disks in a striped metadevice, the greater the I/O performance. (The MTBF, however, will be reduced, so consider mirroring striped metadevices.)
- Don't mix differently sized slices in the striped metadevice. A striped metadevice's size is limited by its smallest slice.
- Avoid using physical disks with different disk geometries.
- Distribute the striped metadevice across different controllers and buses.
- Striping cannot be used to encapsulate existing file systems.
- Striping performs well for large sequential I/O and for random I/O distributions.
- Striping uses more CPU cycles than concatenation. However, it is usually worth it.
- Striping does not provide any redundancy of data.
Mirroring Guidelines
- Mirroring may improve read performance; write performance is always degraded.
- Mirroring improves read performance only in threaded or asynchronous I/O situations; if there is just a single thread reading from the metadevice, performance will not improve.
- Mirroring degrades write performance by about 15 to 50 percent, because two copies of the data must be written to disk to complete a single logical write. If an application is write intensive, mirroring will degrade overall performance. However, the write degradation with mirroring is substantially less than the typical RAID5 write penalty (which can be as much as 70 percent). Refer to Figure 7-1.
Note that the UNIX operating system implements a file system cache. Since read requests frequently can be satisfied from this cache, the read/write ratio for physical I/O through the file system can be significantly biased toward writing. For example, an application's I/O mix might be 80 percent reads and 20 percent writes. But if many read requests can be satisfied from the file system cache, the physical I/O mix might be quite different: perhaps only 60 percent reads and 40 percent writes. In fact, if there is a large amount of memory to be used as a buffer cache, the physical I/O mix can even go the other direction: 80 percent reads and 20 percent writes might turn out to be 40 percent reads and 60 percent writes.
Figure 7-1
RAID5 Guidelines
- RAID5 can withstand only a single device failure.
A mirrored metadevice can withstand multiple device failures in some cases (for example, if the multiple failed devices are all on the same submirror). A RAID5 metadevice can only withstand a single device failure. Striped and concatenated metadevices cannot withstand any device failures.
- RAID5 provides good read performance when there are no error conditions, and poor read performance under error conditions.
When a device fails in a RAID5 metadevice, read performance suffers because multiple I/O operations are required to regenerate the data from the data and parity on the existing drives. Mirrored metadevices do not suffer the same degradation in performance when a device fails.
Note - The man page in Solaris 2.3 and 2.4 incorrectly states that the maximum size is 32 cylinders.
- If possible, set your file system cluster size equal to an integral multiple of the stripe width.
For example, try the following parameters for sequential I/O:

maxcontig = 16 (16 * 8-Kbyte blocks = 128-Kbyte clusters)

Using a four-way stripe with a 32-Kbyte interlace value results in a 128-Kbyte stripe width, which is a good performance match:

interlace size = 32 Kbytes (32-Kbyte stripe unit size * 4 disks = 128-Kbyte stripe width)
- You can set the maxcontig parameter for a file system to control the file system I/O cluster size. This parameter specifies the maximum number of blocks, belonging to one file, that will be allocated contiguously before inserting a rotational delay.
Performance may be improved if the file system I/O cluster size is an integer multiple of the stripe width. For example, setting the maxcontig parameter to 16 results in 128 Kbyte clusters (16 blocks * 8 Kbyte file system block size).
Configuration Guidelines 121
Note - The options to the mkfs(1M) command can be used to modify the default minfree, inode density, cylinders per cylinder group, and maxcontig settings. You can also use the tunefs(1M) command to modify the maxcontig and minfree settings. See the man pages for mkfs(1M), tunefs(1M), and newfs(1M) for more information.
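The cluster/stripe matching arithmetic above can be checked in shell. The metadevice name /dev/md/rdsk/d0 in the comment is hypothetical; substitute your own device before running tunefs:

```shell
fs_block_kb=8      # default UFS file system block size
maxcontig=16
cluster_kb=$((maxcontig * fs_block_kb))     # 16 * 8 = 128 Kbyte clusters

interlace_kb=32
ndisks=4
stripe_width_kb=$((interlace_kb * ndisks))  # 32 * 4 = 128 Kbyte stripe width

echo "cluster=${cluster_kb}K stripe width=${stripe_width_kb}K"

# When the two match, apply the value to an existing file system with
# tunefs; /dev/md/rdsk/d0 is a hypothetical metadevice name:
#   tunefs -a 16 /dev/md/rdsk/d0
```

A matching cluster size lets each file system cluster map onto exactly one full stripe, so sequential clusters engage all four spindles.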
- How does I/O for a RAID5 metadevice compare with I/O for a striped metadevice?
- Striped metadevice performance is better than RAID5 metadevice performance, but striping doesn't provide data protection (redundancy).
- RAID5 metadevice performance is lower than striped metadevice performance for write operations, because the RAID5 metadevice requires multiple I/O operations to calculate and store the parity.
- For raw random I/O reads, the striped metadevice and the RAID5 metadevice are comparable. Both split the data across multiple disks, and the RAID5 metadevice parity calculations aren't a factor in reads except after a slice failure.
Solstice DiskSuite 4.2.1 Reference Guide February 2000
- For raw random I/O writes, the striped metadevice performs better, since the RAID5 metadevice requires multiple I/O operations to calculate and store the parity.
- For raw sequential I/O operations, the striped metadevice performs best. The RAID5 metadevice performs worse than the striped metadevice for raw sequential writes, because of the multiple I/O operations required to calculate and store the parity.
Random I/O
- What is random I/O?
Databases and general-purpose file servers are examples of random I/O environments. In random I/O, the time spent waiting for disk seeks and rotational latency dominates I/O service time.
- What is the general strategy for configuring for a random I/O environment?
You want all disk spindles to be busy most of the time servicing I/O requests. Random I/O requests are small (typically 2-8 Kbytes), so it's not efficient to split an individual request of this kind across multiple disk drives. The interlace size doesn't matter, because you just want to spread the data across all the disks. Any interlace value greater than the typical I/O request will do.

For example, assume you have 4.2 Gbytes of DBMS table space. If you stripe across four 1.05-Gbyte disk spindles, and if the I/O load is truly random and evenly dispersed across the entire range of the table space, then each of the four spindles will tend to be equally busy.

The target for maximum random I/O performance on a disk is 35 percent or lower, as reported by DiskSuite Tool's performance monitor or by iostat(1M). Disk use in excess of 65 percent on a typical basis is a problem. Disk use in excess of 90 percent is a major problem.
If you have a disk running at 100 percent and you stripe the data across four disks, you might expect the result to be four disks each running at 25 percent (100/4 = 25 percent). However, you will probably get all four disks running at greater than 35 percent, since there won't be an artificial limitation to the throughput (of 100 percent of one disk).
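The utilization arithmetic can be sketched as follows. The 35 percent target comes from the text; the iostat invocation in the comment is the standard way to observe per-disk busy percentages, though column names vary by Solaris release:

```shell
single_disk_util=100   # one saturated disk, in percent busy
ndisks=4
naive_util=$((single_disk_util / ndisks))   # 100 / 4 = 25 percent each

echo "naive per-disk utilization after striping: ${naive_util}%"
# In practice each disk tends to run above 35% busy, because the
# single-disk throughput ceiling is gone.  Monitor real utilization
# with iostat(1M), for example:  iostat -x 5
```

The naive division is a lower bound: once striping removes the bottleneck, the application typically issues more I/O, pushing every spindle above the naive figure.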
- What is the general strategy for configuring for a sequential I/O environment?
You want to get greater sequential performance from an array than you can get from a single disk by setting the interlace value small relative to the size of the typical I/O request.
    max-io-size / #-disks-in-stripe
Example: Assume a typical I/O request size of 256 Kbyte and striping across 4 spindles. A good choice for the stripe unit size in this example would be 256 Kbyte / 4 = 64 Kbyte, or smaller.
Note - Seek and rotation time are practically non-existent in the sequential case.
When optimizing sequential I/O, the internal transfer rate of a disk is most important. The most useful recommendation is: max-io-size / #-disks. Note that for UFS file systems, the maxcontig parameter controls the file system cluster size, which defaults to 56 Kbyte. It may be useful to configure this to larger sizes for some sequential applications. For example, using a maxcontig value of 12 results in 96 Kbyte file system clusters (12 * 8 Kbyte blocks = 96 Kbyte clusters). Using a 4-wide stripe with a 24 Kbyte interlace size results in a 96 Kbyte stripe width (4 * 24 Kbyte = 96 Kbyte), which is a good performance match.
Example: In sequential applications, typical I/O size is usually large (greater than 128 Kbyte, often greater than 1 Mbyte). Assume an application with a typical I/O request size of 256 Kbyte and assume striping across 4 disk spindles. Do the arithmetic: 256 Kbyte / 4 = 64 Kbyte. So, a good choice for the interlace size would be 32 to 64 Kbyte.

Number of stripes: Another way of looking at striping is to first determine the performance requirements. For example, you may need 10.4 Mbyte/sec performance for a selected application, and each disk may deliver approximately 4 Mbyte/sec. Based on this, determine how many disk spindles you need to stripe across: 10.4 Mbyte/sec / 4 Mbyte/sec = 2.6. Therefore, 3 disks would be needed.
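Both calculations from the examples above can be written out in shell; the I/O size, per-disk rate, and throughput target are the figures used in the text:

```shell
# Interlace size: typical I/O size divided by the disks in the stripe.
io_kb=256
ndisks=4
interlace_kb=$((io_kb / ndisks))    # 256 / 4 = 64 Kbyte (or smaller)

# Number of spindles from a throughput target, rounded up to whole disks.
spindles=$(awk -v need=10.4 -v per_disk=4 'BEGIN {
    n = need / per_disk             # 2.6 disks worth of bandwidth
    if (n == int(n)) print n
    else print int(n) + 1           # round up: partial disks do not exist
}')
echo "interlace=${interlace_kb}K spindles=${spindles}"
```

Rounding up matters: two disks would deliver only about 8 Mbyte/sec against the 10.4 Mbyte/sec requirement.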
Striping Trade-offs
- Striping cannot be used to encapsulate existing file systems.
- Striping performs well for large sequential I/O and for uneven I/O distributions.
- Striping uses more CPU cycles than concatenation, but the trade-off is usually worth it.
- Striping does not provide any redundancy of data.
To summarize the trade-offs: Striping delivers good performance, particularly for large sequential I/O and for uneven I/O distributions, but it does not provide any redundancy of data.

Write intensive applications: Because of the read-modify-write nature of RAID5, metadevices with greater than about 20 percent writes should probably not be RAID5. If data protection is required, consider mirroring. RAID5 writes will never be as fast as mirrored writes, which in turn will never be as fast as unprotected writes. The NVRAM cache on the SPARCstorage Array closes the gap between RAID5 and mirrored configurations.

Full stripe writes: RAID5 read performance is always good (unless the metadevice has suffered a disk failure and is operating in degraded mode), but write performance suffers because of the read-modify-write nature of RAID5. In particular, when writes are less than a full stripe width or don't align with a stripe, multiple I/Os (a read-modify-write sequence) are required. First, the old data and parity are read into buffers. Next, the parity is modified (XORs are performed between data and parity to calculate the new parity: first the old data is logically subtracted from the parity, and then the new data is logically added to the parity), and the new parity and data are stored to a log. Finally, the new parity and new data are written to the data stripe units.
Full stripe width writes have the advantage of not requiring the read-modify-write sequence, and thus performance is not degraded as much. With full stripe writes, all new data stripes are XORed together to generate parity, and the new data and parity are stored to a log. Then, the new parity and new data are written to the data stripe units in a single write. Full stripe writes are used when the I/O request aligns with the stripe and the I/O size exactly matches:

    interlace_size * (num_of_columns - 1)

For example, if a RAID5 configuration is striped over 4 columns, in any one stripe, 3 chunks are used to store data, and 1 chunk is used to store the corresponding parity. In this example, full stripe writes are used when the I/O request starts at the beginning of the stripe and the I/O size is equal to stripe_unit_size * 3. If the stripe unit size is 16 Kbyte, full stripe writes would be used for aligned I/O requests of size 48 Kbyte.

Performance in degraded mode: When a slice of a RAID5 metadevice fails, the parity is used to reconstruct the data; this requires reading from every column of the RAID5 metadevice. The more slices assigned to the RAID5 metadevice, the longer read and write operations (including resyncing the RAID5 metadevice) will take when I/O maps to the failed device.
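The full-stripe size formula reduces to one multiplication; using the 4-column, 16 Kbyte example from the text:

```shell
interlace_kb=16    # stripe unit size from the example
columns=4          # per stripe: 3 data chunks plus 1 parity chunk
full_stripe_kb=$((interlace_kb * (columns - 1)))   # 16 * 3 = 48 Kbyte
echo "aligned writes of ${full_stripe_kb}K avoid read-modify-write"
```

An application (or file system cluster size) tuned to issue aligned 48 Kbyte writes on this layout sidesteps the read-modify-write penalty entirely.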
- The larger the log size, the better the performance. Larger logs allow for greater concurrency (more simultaneous file system operations per second).
- The absolute minimum size for a logging device is 1 Mbyte. A good average for performance is 1 Mbyte of log space for every 100 Mbyte of file system space. A recommended minimum is 1 Mbyte of log for every 1 Gbyte of file system space.
Assume you have a 4 Gbyte file system. What are the recommended log sizes?
- A good average is 40 Mbyte (1 Mbyte log/100 Mbyte file system).
- A recommended minimum is 4 Mbyte (1 Mbyte log/1 Gbyte file system).
- The absolute minimum is 1 Mbyte.
- It is strongly recommended that you mirror all logs. It is possible to lose the data in a log because of device errors. If the data in a log is lost, it can leave a file system in an inconsistent state which fsck may not be able to repair without user intervention.
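The log sizing rules of thumb for the 4 Gbyte example reduce to simple division:

```shell
fs_mb=4096                         # 4 Gbyte file system, in Mbyte
good_avg_mb=$((fs_mb / 100))       # 1 Mbyte log per 100 Mbyte -> ~40 Mbyte
min_mb=$((fs_mb / 1024))           # 1 Mbyte log per 1 Gbyte   ->   4 Mbyte
abs_min_mb=1                       # absolute minimum log size
echo "good average=${good_avg_mb}M recommended minimum=${min_mb}M absolute minimum=${abs_min_mb}M"
```

Between these bounds, size toward the larger figure when the file system will see many concurrent metadata operations.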
- In general, it is best to distribute state database replicas across slices, drives, and controllers, to avoid single points of failure.
- Each state database replica occupies 517 Kbyte (1034 disk sectors) of disk storage by default. Replicas can be stored on: a dedicated disk partition, a partition which will be part of a metadevice, or a partition which will be part of a logging device.
Note - Replicas cannot be stored on the root (/), swap, or /usr slices, or on slices containing existing file systems or data.
Three or more replicas are required. You want a majority of replicas to survive a single component failure. If you lose a replica (for example, due to a device failure), it may cause problems running DiskSuite or when rebooting the system.
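The default replica size follows from the sector count, and the distribution guideline translates into a single metadb invocation. The slice names below are hypothetical placeholders for slices on three different controllers:

```shell
# Default replica size: 1034 sectors of 512 bytes each.
sectors=1034
replica_kb=$((sectors * 512 / 1024))   # 1034 * 512 / 1024 = 517 Kbyte
echo "replica size: ${replica_kb}K"

# Creating the three initial replicas spread across controllers
# (c0t0d0s7, c1t0d0s7, c2t0d0s7 are hypothetical slice names):
#   metadb -a -f c0t0d0s7 c1t0d0s7 c2t0d0s7
```

Spreading the three replicas across controllers preserves a surviving majority if any one controller, drive, or slice fails.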
APPENDIX
Introduction
The first part of this appendix, DiskSuite Tool Messages on page 130, contains the status, error, and log messages displayed by DiskSuite's graphical user interface, DiskSuite Tool. The second part of this appendix, DiskSuite Command Line Messages on page 157, contains the error and log messages displayed by the command line utilities. Use the following table to locate DiskSuite Tool status, error, and log information.
- State Information Terms on page 130
- Metadevice Editor Messages on page 130
- Dialog Box Error Messages on page 131
- Dialog Box Warning Messages on page 140
- Dialog Box Information Messages on page 146
- Metadevice Editor Window Messages on page 147
- Disk View Window Messages on page 152
- Log Messages on page 153
Use the following table to locate DiskSuite command line error and log information.
- OK - The component is operating properly.
- Resyncing - The component is in the process of resyncing (copying) the data.
- Erred - The slice has encountered an I/O error or an open error. All reads and writes to and from this slice have been discontinued. See the Solstice DiskSuite 4.2.1 User's Guide for information on slice replacement.
- Last erred - The slice has encountered an I/O error or an open error. However, the data is not replicated elsewhere due to another slice failure. I/O is still performed on the slice. If I/O errors result, the mirror or RAID5 I/O will fail. See the Solstice DiskSuite 4.2.1 User's Guide for information on slice replacement.
- When you are pointing the cursor at an object, the message line has the following format:
- When you are dragging an object that is not yet populated, the message line has the form:
Drop requirement comp_type into new_object_type new_object_name
Once the object is sufficiently populated, the message line has the form:
Drop comp_type into new_object_type new_object_name or commit
You have tried to commit two separate changes to a RAID device at the same time. While the changes may be valid, only one can be performed at a time. For example, if you replace a slice and add a new slice, this message is displayed. You must perform one change and click on the Commit button, then perform the other change and click on the Commit button.
Concat dn has no stripes
You have tried to commit a concatenation that has no stripes. You must add stripes to the concatenation.
You cannot delete a metadevice that is in use.
You have tried to delete a metadevice that contains a mounted file system, is being swapped on, or is open as a raw device.
dn has no components.
You have tried to commit a Concat/Stripe template that has no slices. You must add slices to the object before clicking on the Commit button.
Mirror dn has no submirrors
You have tried to commit a mirror that has no submirrors. You must add submirrors before clicking on Commit.
RAID dn must have at least three slices.
You have tried to commit a RAID metadevice that has fewer than three slices. Add the necessary slices and commit the RAID metadevice.
Slices added to a RAID device must be at least as large as the smallest original slice.
You have tried to add a slice to a RAID device that is smaller than the slices that are already part of the RAID device.
Slice slice is mounted. You cannot add it to Concat/Stripe dn, it is not the first mounted slice to be added.
You have tried to add a slice that has a mounted file system to a Concat/Stripe and there is already at least one slice in the Concat/Stripe. The slice with the mounted file system must be the first one added to the Concat/Stripe.
Slice slice is mounted. You cannot add it to dn, it already has a mounted slice.
You have tried to add a slice that contains a mounted file system to a Concat/Stripe template. The slice that contains the mounted file system must be the first slice added.
Slice slice is mounted. You cannot add a mounted slice to a RAID device, doing so would corrupt the file system.
You have tried to add a slice that contains a mounted file system to a RAID template. Choose another slice that does not contain any data.
Slice slice is too small to be used in a RAID device.
You have tried to add a slice that is too small. The slice being added is either smaller than the slices already in the RAID device or is too small to be used in a RAID device.
Submirror dn has a mounted file system, it should be the first submirror added.
You have tried to add a submirror that contains a mounted file system to an existing mirror. To mirror this file system, create a one-way mirror using this submirror, then attach another submirror that does not contain data.
You have tried to add a new submirror that is smaller than the current size of the Mirror. The submirror must be as large as the existing submirror.
Mirror dn has a component with a file system mounted. You cannot add another submirror.
You have tried to add a submirror that contains a mounted file system and the Mirror already has a mounted file system on the other submirror. You must add an unassigned slice.
The root file system may not be mounted on a concat with more than one stripe.
You have tried to drop the slice that contains the root file system into a Concat/Stripe template. Remove one of the existing stripes.
You have tried to drop the slice that contains the root file system into a Trans device template. The root file system cannot be placed in a Trans device.
You have tried to commit a Trans device that has no master device. Add the master device and commit the device.
You have tried to add a slice or hot spare pool to a RAID device that has been committed and is initializing. Wait until the device is initialized.
You have tried to add a slice to a RAID device that has been committed and is initializing. Wait until the device is initialized.
The value you entered value is too large. You should use a value less than new value, which is the maximum possible device size.
You tried to enter an unacceptably large value in one of the Slice Filter window's size fields.
Your attempt to change the name of Hot Spare Pool hspnnn to hspnnn failed for the following reason:
You tried to change the name of a hot spare pool to a name that already exists, or the name is not valid.
RAID component component is not the same size as component component. Extra space on the larger component will be wasted.
You tried to add a slice to a RAID5 metadevice that is not the same size as the existing slices in the RAID device.
You cannot change the hot spare pool for a RAID device while it is initializing.
You tried to change the current hot spare pool for a RAID5 metadevice during its initialization. Wait for the initialization to complete before attempting to change the hot spare pool again.
The RAID device has failed to initialize. It cannot be repaired and should be deleted.
There was an error when trying to initialize the RAID5 metadevice. The only recourse left is to delete the device and recreate it, after repairing any errored slices.
A slice in a created stripe may not be replaced unless the stripe is part of a submirror with redundancy.
A slice in a stripe may not be enabled unless the stripe is part of a submirror with redundancy.
The metadevice state database has not been committed since slice slice was added. You cannot restore replicas on the slice.
You need to commit the MetaDB object before enabling any broken slices.
There is no device with a mounted file system which matches the path name path.
You tried to drag a file name from Storage Manager to the Metadevice Editor canvas and DiskSuite Tool could not locate the device containing the file system.
Disk Set Released host no longer owns the setname disk set. setname cannot continue; you must exit.
Disk Set Changed An external change has occurred to the setname disk set.
The above three messages indicate that changes were made to the diskset from the command line while DiskSuite Tool was running on that diskset.
You tried to display the Device Statistics window for a controller, tray, or slice. The Device Statistics window is only available for metadevices or disks.
Sync NVRAM is only available for SPARCstorage Array controllers, trays and disks with working batteries.
You tried to sync NVRAM on a non-SPARCstorage Array device, or you tried to sync NVRAM on a SPARCstorage Array whose battery has failed.
Fast Write is only available for SPARCstorage Array controllers, trays and disks with working batteries.
You tried to enable Fast Write on a SPARCstorage Array device whose battery has failed.
Reserve Disks is only available for SPARCstorage Array controllers, trays and disks.
Release Disks is only available for SPARCstorage Array controllers, trays and disks.
The lock for setname could not be acquired. Either another instance of DiskSuite Tool or the command line currently has the lock.
You entered an invalid interlace value for the striped metadevice or RAID5 metadevice.
You cannot rename a metadevice that is in use, nor can you rename a metadevice to a name that already exists.
Metadevice name not in range dn - dn.
You tried to give a name to a metadevice outside the currently defined range. If necessary, increase the value of nmd in the /kernel/drv/md.conf file.
The hot spare pool name not in the range hsp000 - hsp999.
Hot spare pools must be named hspnnn, where nnn is a number between 000 and 999.
The hot spare pool hspnnn already exists.
You tried to create a hot spare pool with an existing hot spare pool's name.
You cannot delete a mounted trans device that has an attached logging device.
The mirror or trans metadevice you are trying to delete is currently in use. The deleted device's name will be switched with one of its subdevices' names. In the case of a mirror, the mirror name is switched with one of its submirror names. In the case of a trans metadevice, the trans name and the master device name are switched.
You cannot delete a mounted mirror with more than one submirror.
You cannot delete a mounted trans device whose master is not a metadevice.
You are attempting to delete a mounted trans device whose master device is a slice. Unmount the trans metadevice to be able to delete it.
Cannot purge the NVRAM for device. Disk is reserved by another host.
Cannot sync the NVRAM for device. Disk is reserved by another host.
Cannot reserve device. Disk is reserved by another host.
Cannot release device. Disk is reserved by another host.
Cannot start device. Disk is reserved by another host.
Cannot stop device. Disk is reserved by another host.
Cannot disable fast write for device. Disk is reserved by another host.
Cannot enable fast write for device. Disk is reserved by another host.
Cannot enable fast write for synchronous writes for device. Disk is reserved by another host.
The above messages indicate that another host has a reservation on device. To perform the desired action, first release the reservation.
You cannot detach an existing submirror while a replace, enable or attach is pending.
You tried to detach a submirror that is currently being replaced, enabled, or attached. Wait for the operation to complete before attempting the detach again.
You tried to enable a slice in a submirror while the mirror is being resynced. Wait for the operation to complete before attempting the enable again.
You have populated the MetaDB template with slices that are all attached to the same controller. If the controller fails, you will not have access to any of the metadevices.
The new Concat/Stripe device dn has a slice with a mounted file system. If an entry for its file system exists in /etc/vfstab it will be updated when the Concat/Stripe is committed so that the next mount of the file system will use the new device. The system must be rebooted for this device mount to take effect.
You have tried to add a slice that contains a mounted file system to a Concat/Stripe template. The slice that contains the mounted file system must be the first slice added. You cannot add a mounted file system to a RAID device.
Metadevice device_type dn will be deleted. Data could be lost. Really delete it?
This message displays when you attempt to delete any committed metadevice. You should continue only if you are sure your data is protected.
Stripe component dn is not the same size as component dn. Extra space on the larger component will be wasted.
You have tried to add slices to a Concat/Stripe (stripe) that are a different size than the slices already in the stripe. Adding slices of different sizes to a stripe causes wasted space.
Slice dn is on the same controller as slice dn. It is not advisable to have slices from multiple submirrors on the same controller.
You have tried to create a Mirror with submirrors that are made up of slices attached to the same controller. If the controller fails, the mirror will not protect against lost data.
Slice dn is on the same disk as slice dn. It is not advisable to have slices from multiple submirrors on the same disk.
You have tried to create a Mirror with submirrors that are made up of slices from the same disk. If the disk fails, the mirror will not protect against lost data.
Submirror dn is not the same size as submirror dn. Extra space on the larger submirror will be wasted.
You have tried to create a Mirror that has differently sized submirrors. The extra space on the larger submirror cannot be used.
Submirror dn has an erred component. Its data will not be valid after it is detached.
You have tried to detach or offline a submirror that has a slice reporting errors.
The file system mounted on metadevice dn has been unmounted since the last status update.
You have tried to delete a metadevice that was unmounted. The device does not display the unmounted information. Select Rescan Configuration from the Metadevice Editor window's File menu to update this information.
The following components are in the erred state: dn You may not replace RAID component dn until they are fixed.
You are replacing a component of a RAID metadevice that has reported errors (in the last-errored state). This cannot be performed if there are any other components in the RAID metadevice that have reported errors.
The following components are in the erred state: dn The data for the component replacing RAID component dn may be compromised.
You are replacing or enabling a RAID component that has reported errors. This action is dangerous if there is another component that has reported errors (in the last-errored state). The data on the new component may not be completely accurate.
The following components are in the last_erred state: dn The data for RAID component dn may be compromised.
You are replacing or enabling a RAID component that is reporting errors. This action is dangerous if there is another component that has reported errors (in the last-errored state). The data on the new component may not be completely accurate.
The following components have erred: dn The data for RAID component dn WILL NOT BE RESYNCED.
You have tried to replace a component in a RAID metadevice and there are two or more components reporting errors. It is not possible to replace the components because there is no way to recreate the data. If you proceed with the replacement, you must obtain the data from a backup copy.
The format of disk dn has changed. You must restart metatool to incorporate the changes.
You have reformatted a disk that used to have a metadevice, file system, or database replica and selected the Rescan Configuration option from the Metadevice Editor window's File menu. If the disk is not being used, the new information is read by DiskSuite and displayed in the appropriate windows (for example, Slice Browser and Disk View).
The log device for Trans dn cannot be detached until the Trans is unmounted or the system is rebooted.
You have tried to detach a log and commit the Trans object. The detach will not be performed as long as the log master is mounted. The Trans device is actually in a detach pending state.
The master device dn for Trans dn has a mounted file system. In order for logging of this file system to be activated, the file /etc/vfstab must be updated with the new device name and the system rebooted. Committing Trans dn will update /etc/vfstab automatically if an entry exists for the file system.
You have tried to add a metadevice that has a mounted file system to a Trans master. DiskSuite will automatically change the entry for the file system in the /etc/vfstab file. If an entry for the file system does not exist in the /etc/vfstab file, you must create one. The message also tells you to reboot the system.
The master device dn for Trans dn has a mounted file system. If an entry for its file system exists in /etc/vfstab, it will be updated with the new device to mount for the file system. The system must be rebooted for this device mount to take effect.
You have tried to add a master device that has a mounted file system to a Trans. DiskSuite will automatically change the entry for the file system in the /etc/vfstab file. If an entry for the file system does not exist in the /etc/vfstab file, you must create one. The message also tells you to reboot the system.
The metadevice dn has been removed as a swap device since the last status update.
You have tried to delete a device that is the swap device. The device still says it is swap. To update the device's status, select Rescan Configuration from the Metadevice Editor window's File menu.
The new Mirror device dn has a submirror with a mounted file system. If an entry for its file system exists in /etc/vfstab, it will be updated with the new device to mount for the file system. The system must be rebooted for this device mount to take effect.
You have tried to add a Concat/Stripe that has a mounted file system to a Mirror. DiskSuite will automatically change the entry for the file system in the /etc/vfstab file. If an entry for the file system does not exist in the /etc/vfstab file, you must create one. The message also tells you to reboot the system.
The state database will have no replicas. If the system reboots, all metadevices will be corrupted.
You have tried to remove the state database and all replicas from the MetaDB template. If you commit, you will not have access to any metadevices after the next reboot.
The submirror dn has a slice with a mounted file system. In order for mirroring of this file system to be activated, the file /etc/vfstab must be updated with the new device name and the system rebooted. Committing Mirror dn will update /etc/vfstab automatically if an entry exists for the file system.
You have tried to add a submirror that has a mounted file system to a Mirror. DiskSuite will automatically change the entry for the file system in the /etc/vfstab file. If an entry for the file system does not exist in the /etc/vfstab file, you must create one. The message also tells you to reboot the system.
This log is not mirrored. It is recommended that you mirror logs whenever possible to avoid single points of failure.
You have tried to create a Trans device with a log that is not mirrored. If the log is not mirrored, the data could be lost or unavailable.
You have tried to commit a Trans device that has no Trans log. You should add the log before committing the device. Until you add the log, the logging feature is disabled.
You have tried to add slices to a Concat/Stripe metadevice. Following a commit, you can expand the file system, as documented in the Solstice DiskSuite 4.2.1 User's Guide.
Statistic sheets are not available for the Metastate database (metadb), Hot Spare Pools or slices.
You cannot display a Device Statistics window for the metadevice state database, hot spare pools, or slices.
You are pointing at any of the five Template icons. object is either Trans, RAID, Mirror, Concat/Stripe, or Hot Spare Pool.
You are pointing at an object (component_type) on the canvas. The component_type is either Trans, RAID, Mirror, Concat/Stripe, or Hot Spare. The metadevice name is reported as dn, where n is a number in the default range 0 to 127. The size is the capacity of the metadevice (for example, 500 Mbytes). The use is either Unassigned, Submirror, or /filesystem. The status is reported as OK, Attention, Urgent, or Critical.
You are pointing at an empty canvas or the device list in the Metadevice Editor window.
hspnnn: status=status
You are pointing at a Hot Spare Pool on the canvas. The Hot Spare Pool name is reported as hspnnn, where nnn is a number in the range 000 to 999. The status is reported as OK, Attention, Urgent, or Critical.
You are pointing at a disk slice on the canvas. The name of the slice appears in the format, cntndnsn. The size is the capacity of the slice (for example, 5 Mbytes). The use is either Unassigned or Component. The status is reported as OK, Attention, Urgent, or Critical.
Use Button2 to pan the viewport over the work area ...
You are pointing at the Panner. By pressing the middle button and moving the cursor, you move the canvas to a new view area.
You are dragging a Hot Spare Pool over a concatenation. This message is telling you that the Concat/Stripe must be part of a Mirror or the Hot Spare Pool you are dropping will not work.
You are dragging a Concat/Stripe over the specified Mirror. If you drop the Concat/Stripe, it will become part of that Mirror.
You are dragging a Concat/Stripe over the specified submirror. Drop the Concat/Stripe inside the rectangle that contains the submirror to make the replacement.
You are dragging a Hot Spare Pool over the specified concatenation. By dropping the Hot Spare Pool into the Concat/Stripe, it becomes associated with that concatenation.
You are dragging a Hot Spare Pool over the specified RAID device. If you drop the Hot Spare Pool, it will become associated with the RAID device.
You are dragging a metadevice or slice over a Trans device. If you drop the metadevice or slice, it will become part of the Trans device.
You are dragging a metadevice or slice over the Master of a Trans device. Drop the object into the Master to add it to the device.
You are dragging a slice over the specified Hot Spare Pool. Drop the slice to add it to the Hot Spare Pool.
You are dragging an unused slice over the specified RAID device. If you drop the slice, it will become part of the RAID device.
You are dragging a slice either over a committed RAID device or over a submirror in a mirror that has more than one submirror. You can drop the new slice on the existing slice to make a replacement.
You are dragging an unused slice over a Concat/Stripe, RAID, or Trans device. To replace the slice you are over, release the middle button and drop the slice.
You are dragging an unused slice over a Concat/Stripe that has one or more slices. You can populate the Concat/Stripe with additional slices or select the Concat/Stripe (stripe) and execute a commit.
Drop a slice to add new replicas; you should have at least three replicas.
You are dragging a slice over the MetaDB object. Drop the slice to create another replica. DiskSuite requires that the configuration have a minimum of three slices in the MetaDB object.
You are dragging a Concat/Stripe over the specified Mirror. You must drop a minimum of one Concat/Stripe into the specified Mirror.
You are dragging an unused slice over a Concat/Stripe that has zero slices. You must populate the Concat/Stripe (stripe) with a minimum of one slice.
You are dragging an unused slice over the specified RAID device. You must drop a minimum of three slices into the RAID device.
You are dragging a slice over the MetaDB object. Drop the slice to create another replica.
You cannot add more concatenations; mirror dn already has three submirrors
You are dragging a Concat/Stripe over the specified Mirror. You cannot add another Concat/Stripe (submirror) to a Mirror that already has three submirrors.
You are dragging an unused slice over a committed Concat/Stripe. DiskSuite does not permit you to add slices to a committed Concat/Stripe (stripe).
You are dragging an unused slice over a slice that is in use in a committed Hot Spare Pool. You cannot drop the new slice on a slice in a Hot Spare Pool that is currently in use.
You are dragging an object over the specified RAID device. Because the device is committed, you cannot make replacements.
You are dragging a slice over a committed Concat/Stripe. You cannot make changes to this metadevice unless it is part of a submirror.
You cannot replace submirror dn when mirror dn has only one submirror
You are dragging a submirror over the specified submirror. You cannot drop the submirror into the Mirror when there is only one submirror present.
You are dragging an unused slice over a slice that is in use as a Trans master or log. You cannot replace slices in a committed Trans device.
When you are pointing the cursor at an object, the message line has the following format:
If you are pointing at a disk or slice that has a status problem, the message has the form:
When you are pointing the cursor at an empty portion of the canvas, the following message is displayed:
Drop object onto color drop sites to show mappings
You can select a disk or slice and drag it to the color map at the bottom of the Disk View window. On a color monitor, you have four colors available as drop sites. On a monochrome monitor, you have one color drop site.
You are pointing at a disk slice on the canvas. The name of the slice appears in the format, cntndnsn. The size is the capacity of the slice (for example, 5 Mbytes). The use is either Unassigned, Component, Hot Spare, MetaDB Replica, Reserve, mount_point, swap, Trans Log, or Overlap. The status is reported as OK, Attention, Urgent, or Critical.
You are dragging an object from the Disk View window. You can drop the slices inside an object or on the canvas of the Metadevice Editor window.
Log Messages
Log messages are those passed by syslog(3) to syslogd(1M). These messages are appended to a file and written to the console window. These messages will not appear in any DiskSuite error or problem list. The log messages are divided into the following categories:
- dev is a device name.
- dnum is a metadevice name.
- num is a number.
- state is a Trans device state.
- trans is either logging or master.
Note - When the initial portion of a message begins with a variable, the message is alphabetized by the first word following the variable.
The named misc module is not loadable. It is possibly missing, or something else has been copied over it.
The set command in /etc/system for the mddb.bootlist<number> is not in the correct format. Run metadb -p to place the correct set commands into the /etc/system file.
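For reference, metadb -p maintains entries of the following general form in /etc/system (the driver names and block offsets shown here are purely illustrative):

```
* Begin MDD database info (do not edit)
set md:mddb_bootlist1="sd:7:16 sd:7:1050 sd:7:2084"
* End MDD database info (do not edit)
```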
The first device name listed has been hot spare replaced with the second device name listed.
The first device number listed has been hot spare replaced with the second device number listed.
This error occurs when the number of metadevices specified by the nmd parameter in the /kernel/drv/md.conf file is lower than the number of configured metadevices on the system. It can also occur if the md_nsets parameter for disksets is lower than the number of configured disksets on the system. To fix this problem, examine the md.conf file and increase the value of either nmd or md_nsets as needed.
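A sketch of the relevant md.conf settings, with illustrative values; leave the file's other lines untouched, and note that a reboot is generally required before the change takes effect:

```
# /kernel/drv/md.conf (values below are illustrative)
nmd=256;      # must be at least the number of configured metadevices
md_nsets=4;   # must be at least the number of configured disksets
```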
The underlying named driver module is not loadable (for example, sd, id, or a third-party driver). This could indicate that the driver module has been removed.
The named hot spare cannot be opened, or the underlying driver is not loadable.
Solstice DiskSuite 4.2.1 Reference Guide February 2000
A read or write error has occurred on the specified metadevice at the specified device name. This happens if any read or write errors occur on a metadevice.
dnum: read error on dev(num, num) dnum: write error on dev(num, num)
A read or write error has occurred on the specified metadevice at the specified device number. This happens if any read or write errors occur on a metadevice.
dnum: read error on dnum dnum: write error on dnum
A read or write error has occurred on the specified metadevice at the specified device number. This happens if any read or write errors occur on a metadevice.
State database commit failed State database delete failed
These messages occur when there have been device errors on components where the state database replicas reside. These errors only occur when more than half of the replicas have had errors returned to them. For example, if you have three components with state database replicas and two of the components report errors, these errors may occur. The state database commit or delete is retried periodically. If a replica is added, the commit or delete will finish and the system will be operational. Otherwise, the system will time out and panic.
This message occurs when there are not enough usable replicas for the state database to be able to update records in the database. All accesses to the metadevice driver will fail. To fix this problem, add more replicas or delete inaccessible replicas.
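For example, replicas can be added with metadb -a or removed with metadb -d (the slice names here are hypothetical):

```
# add two more state database replicas on a spare slice
metadb -a -c 2 c1t0d0s7

# delete the replicas on an inaccessible slice
metadb -d c0t0d0s7
```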
trans device: read error on dnum trans device: write error on dnum
A read or write error has occurred on the specified logging or master device at the specified metadevice. This happens if any read or write errors occur on a logging or master device.
trans device: read error on dev trans device: write error on dev
A read or write error has occurred on the specified logging or master device at the specified device name. This happens if any read or write errors occur on a logging or master device.
trans device: read error on dev(num, num) trans device: write error on dev(num, num)
A read or write error has occurred on the specified logging or master device at the specified device number. This happens if any read or write errors occur on a logging or master device.
logging device: dnum changed state to state logging device: dev changed state to state logging device: dev(num, num) changed state to state
The logging device and its associated master device(s) have changed to the specified state(s).
A failed metadevice state database commit or deletion has been retried the default 100 times.
where:
- program name: is the name and version of the application being used (for example, DiskSuite 4.2.1).
- host: is the host name of the machine on which the error occurred (for example, blue).
- [optional1]: is an optional field containing contextual information for the specific error displayed (for example, the mount point or which daemon returned the error).
- name: is the command name which generated the error message (for example, metainit).
- [optional2]: is a second optional field containing additional contextual information for the specific error displayed.
- error message... is the error message itself (as listed in this appendix).
For the purpose of this appendix, only the final portion (error message...) of each error message is listed. The log messages listed near the back of this appendix are divided into three categories:
Error Messages
The command line error messages displayed by DiskSuite are listed in alphabetical order below. The message is preceded by some or all of the variables described in the previous section. Other variables included in these messages indicate the following:
- nodename is the name of a specific host.
- drivename is the name of a specific drive.
- metadevice is the number of a specific metadevice or hot spare pool.
- setname is the name of a specific diskset.
- num is a number.
add or replace failed, hot spare is already in use
The hot spare that is being added or replaced is already in the hot spare pool.
administrator host nodename can't be deleted, other hosts still in set. Use -f to override
The host which owns the diskset cannot be deleted from the diskset without using the -f option to override this restriction. When the -f option is used, all knowledge of the diskset is removed from the local host. Other hosts within the diskset are unaware of this change.
administrator host nodename deletion disallowed in one host admin mode
The administrator host is the host which has executed the command. This host cannot be deleted from the diskset if one or more hosts in the diskset are unreachable.
An attempt was made to detach the last submirror. The operation would result in an unusable mirror. DiskSuite does not allow a metadetach to be performed on the last submirror.
An attempt was made to take a submirror offline or to detach a submirror that contains the only copy of the data. The other submirrors have erred components. If this operation were allowed, the mirror would be unusable.
An attempt was made to take a submirror offline that is not in the OKAY state, or to online a submirror that is not in the offlined state. Use the -f option if you really need to offline a submirror that is in a state other than OKAY.
The user attempted to use the metaclear command on a metamirror that contained submirrors that weren't in the OKAY state (Needs maintenance state). If the metamirror must be cleared, the submirrors must also be cleared. Use -r (recursive) to clear all the submirrors, or use -f (force) to clear a metamirror containing submirrors in the Needs maintenance state.
cant attach labeled submirror to unlabeled mirror
An attempt was made to attach a labeled submirror to an unlabeled mirror. A labeled metadevice is a device whose first component starts at cylinder 0. To prevent the submirror's label from being corrupted, DiskSuite does not allow labeled submirrors to be attached to unlabeled mirrors.
An attempt was made to replace or enable a component that did not exist in the specified metadevice.
An attempt was made to either metaonline(1M), metaoffline(1M), or metadetach(1M) the submirror, dnum. The submirror is not currently attached to the specified metamirror, causing the command to fail.
An attempt was made to use the device dev in a new metadevice, and it is already in use in the metadevice dnum.
can't include device dev, it overlaps with a device in dnum
The user has attempted to use device dev in a new metadevice which overlaps an underlying device in the metadevice, dnum.
cannot delete the last database replica in the diskset
An attempt was made to delete the last database replica in a diskset. To remove all database replicas from a diskset, delete all drives from the diskset.
cannot enable hotspared device
An attempt was made to perform a metareplace -e (enable) on an underlying device which is currently hot spared. Try enabling the hot spare component instead.
An attempt was made to modify the associated hot spare pool of a submirror, but the submirror is currently using a hot spare contained within the pool.
The /etc/lvm/mddb.cf file has probably been corrupted or user-edited. The checksum contained in this file is currently invalid. To remedy this situation: delete the mddb.cf file, delete a database replica, and add back the database replica.
component in invalid state to replace - replace Maintenance components first
An attempt was made to replace a component that contains the only copy of the data. The other submirrors have erred components. If this operation were allowed, the mirror would be unusable.
After a replica of the state database is first created, it is read to make sure it was created correctly. If the data read does not equal the data written, this message is returned. This results from unreported device errors.
An attempt was made to use a component for a shared metadevice or shared hot spare pool whose drive is not contained within the diskset.
device in shared set
An attempt was made to use a component for a local metadevice or local hot spare pool whose drive is contained within the diskset. The drives in the local diskset are all those which are not in any shared disksets.
device is too small
A component (dev) in stripe num is smaller than the interlace size specified with the -i flag in the md.tab file.
device size num is too small for metadevice database replica
An attempt was made to put a database replica on a partition that is not large enough to contain it.
devices were not RAIDed previously or are specified in the wrong order
An attempt was made to metainit a RAID device using the -k option. Either some of the devices were not part of this RAID device, or the devices were specified in a different order than they were originally specified.
drive drivename is in set setname
An attempt was made to add the drive drivename to a diskset which is already contained in the diskset setname.
drive drivename is in use
An attempt was made to add the drive drivename to a diskset; however, a slice on the drive is in use.
drive drivename is not common with host nodename
An attempt was made to add the drive drivename to a diskset; however, the device name or device number is not identical on the local host and the specified nodename, or the drive is not physically connected to both hosts.
drive drivename is not in set
An attempt was made to delete the drive drivename from a diskset, and the diskset does not contain the specified drive.
The same drive (drivename) was specified more than once in the command line.
driver version mismatch
The utilities and the drivers are from different versions of the DiskSuite package. It is possible that either the last DiskSuite package added did not get fully installed (try running pkgchk(1M)), or the system on which DiskSuite was recently installed has not been rebooted since the installation.
failed to take ownership of a majority of the drives
Reservation of a majority of the drives was unsuccessful. It is possible that more than one host was concurrently attempting to take ownership of the same diskset. One host will succeed, and the other will receive this message.
The attempted growth of a submirror has been delayed until a mirror resync finishes. The metamirror will be grown automatically upon completion of the resync operation.
has a metadevice database replica
An attempt was made to use a component (i.e., for a hot spare) which contains a database replica.
host nodename already has a set numbered setnumber
An attempt was made to add a host nodename to a diskset which has a conflicting setnumber. Either create a new diskset with both hosts in the diskset, or delete one of the conflicting disksets.
host nodename already has set
An attempt was made to add a host nodename to a diskset which has a different diskset using the same name. Delete one of the disksets and recreate the diskset using a different name.
host nodename does not have set
An attempt was made to delete a host or drive from a set, but the host nodename has an inconsistent view of the diskset. This host should probably be forcibly (-f) deleted.
host nodename is already in the set
An attempt was made to add a host nodename which already exists within the diskset.
host nodename is modifying set - try later or restart rpc.metad
Either an attempt was made to perform an operation on a diskset at the same time as someone else, or a previous operation dropped core and the rpc.metad daemon should be restarted on host nodename.
host nodename is not in the set
An attempt was made to delete the host nodename from a diskset which does not contain the host.
host nodename is specified more than once
The same host (nodename) was specified more than once in the command line.
host name nodename is too long
The name used for the host nodename is longer than DiskSuite accepts.
hotspare doesn't exist
An attempt was made to perform an operation on the hot spare dev, and the specified hot spare does not exist.
hotspare in use
An attempt was made to perform an operation on the hot spare dev, and the specified hot spare is in use.
An attempt was made to enable a hot spare that is not in the broken state.
An attempt to create a hot spare record in the metadevice state database failed. Run metadb -i to determine the cause of the failure.
An attempt to create a hot spare pool record in the metadevice state database failed. Run metadb -i to determine the cause of the failure.
An attempt was made to delete the hot spare pool hspnnn before removing all the hot spares associated with the specied hot spare pool.
An attempt was made to delete the hot spare pool, hspnnn, that is associated with a metadevice.
An attempt was made to metaclear(1M) a hotspare pool without first removing its association with metadevices.
hotspare pool is already setup
An attempt was made to create a hot spare pool which already exists.
illegal option
An attempt was made to use an option which is not valid in the context of the specied metadevice or command.
in Last Erred state, errored components must be replaced
An attempt was made to replace or enable a component of a mirror in the Last Erred state when other components are in the Erred state. You must first replace or enable all of the components in the Erred state.
invalid RAID configuration
An invalid RAID device configuration entry was supplied to metainit, either from the command line or via the md.tab file.
invalid argument
An attempt was made to use an argument which is not valid in the context of the specied metadevice or command.
invalid column count
An invalid RAID configuration entry was supplied to metainit, either from the command line or via the md.tab file. Specifically, an invalid argument was provided with the -o option.
invalid interlace
An unsupported interlace value follows the -i option on a metadevice configuration line. The -i option specifies the interlace size. The interlace size is a number (8, 16, 32) followed by either k for kilobytes, m for megabytes, or b for blocks. The units can be either uppercase or lowercase. This message will also appear if the interlace size specified is greater than 100 Mbytes.
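For instance, a valid md.tab stripe entry using a 32-Kbyte interlace might look like this (the metadevice and slice names are hypothetical):

```
# d10: one stripe of three slices, 32-Kbyte interlace
d10 1 3 c0t1d0s2 c0t2d0s2 c0t3d0s2 -i 32k
```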
An invalid mirror configuration entry was supplied to metainit, either from the command line or via the md.tab file.
invalid pass number
An attempt was made to use a pass number for a mirror that is not within the 0 - 9 range.
invalid stripe configuration
An invalid stripe configuration entry was supplied to metainit, either from the command line or via the md.tab file.
invalid trans configuration
An invalid trans configuration entry was supplied to metainit, either from the command line or via the md.tab file.
invalid write option
An attempt was made to change the write option on a mirror using an invalid option. The legal strings are serial and parallel.
The metadevice configuration entry in the md.tab file has a -h hspnnn, and a metainit has not been performed on the hot spare pool.
invalid read option
The user has specified both the -r and -g options on the same metamirror.
invalid unit
The metadevice (submirror) passed to metattach is already a submirror. The metadevice may already be a submirror for another metamirror.
is a metadevice
The device dev being used is a metadevice and it should be a physical component.
is mounted on
The device dev in the metadevice configuration has a file system mounted on it.
An attempt was made to add a host to a diskset without using the nodename found in the /etc/nodename file.
is swapped on
The device in the metadevice configuration is currently being used as a swap device.
maximum number of nodenames exceeded
An attempt was made to add more nodenames than DiskSuite allows in a diskset.
maxtransfer is too small
An attempt was made to add a component to a RAID device whose maxtransfer is smaller than the other components in the RAID device.
metadevice in use
An attempt was made to metaclear(1M) a submirror without first running metaclear on the metamirror in which it is contained.
metadevice is open
The metadevice (submirror) passed to metattach is already open (in use) as a metadevice.
An attempt was made to add more databases (num1) than the maximum allowed (num2).
metadevice database has too few replicas, can't create new records
An attempt to create a metadevice record in the metadevice state database failed. Run metadb -a to add more database replicas.
An attempt to create a metadevice record in the metadevice state database failed. Run metadb -a (and -s) to add larger database replicas. Then delete the smaller replicas.
An attempt was made to use a component (that is, for a hot spare) which contains a database replica.
metadevice is temporarily too busy for renames
An attempt was made to rename a metadevice that is open. An open metadevice is either mounted on, swapped on, or being used as the raw device by an application or database. To rename the metadevice, first make sure it is not open. This error can also appear if the -f option is not used when switching trans metadevice members, or when trying to switch trans metadevice members with a logging device still attached.
mirror has maximum number of submirrors
An attempt was made to attach more than the supported number of submirrors. The maximum supported number of submirrors is three.
must be owner of the set for this command
An attempt was made to perform an operation on a diskset or a shared metadevice on a host which is not the owner of the diskset.
must have at least 2 databases (-f overrides)
An attempt was made to delete database replicas, reducing the number of database replicas to fewer than two. To override this restriction, use the -f option.
must replace errored component first
An attempt was made to replace or enable a component of a mirror in the Last Erred state when other components are in the Erred state. You must first replace or enable all of the components in the Erred state.
no available set numbers
A metahs operation was attempted using the all argument when no hot spare pools meet the criteria for the operation.
no metadevice database replica on device
An attempt was made to perform an operation on a diskset or a shared metadevice using a non-existent set name.
nodename of host nodename creating the set must be included
An attempt was made to create a diskset on the local host without adding the name of the local host to the diskset.
not a disk device
The component name specified is not a disk device name. For example, a CD-ROM device doesn't have the characteristics of a disk device.
not enough components specified
An invalid stripe configuration entry was supplied to metainit, either from the command line or via the md.tab file.
not enough stripes specified
An invalid stripe configuration entry was supplied to metainit, either from the command line or via the md.tab file.
not enough submirrors specified
An invalid mirror configuration entry was supplied to metainit, either from the command line or via the md.tab file.
not in local set
An attempt was made to create a local metadevice or local hot spare pool with a component whose drive is contained in a shared diskset.
not a metadevice
An attempt was made to add a database replica for a shared diskset on a component other than Slice 7.
only the current owner nodename may operate on this set
An attempt was made to perform an operation on a diskset or a shared metadevice on a host which is not the owner of the diskset.
only valid action is metaclear
The initialization of a RAID device has failed. Use the metaclear command to clear the RAID device.
An operation was attempted on a component or submirror that contains the only copy of the data. The other submirrors have erred components. If this operation were allowed, the mirror would be unusable.
operation requires -f (force) flag
Due to the components within the RAID device being in the Maintenance or Last Erred state, the force flag (-f) is required to complete the operation.
overlaps with device in metadevice
An attempt to use metareplace failed because the new component is too small to replace the old component.
reserved by another host
The mirror operation failed because a resync is being performed on the specified metamirror. Retry this operation when the resync is finished.
rpc.metad: permission denied
The user does not have permission to run a remote process on the other systems in the diskset. The remote access permissions need to be set up.
set setname is out of date - cleaning up - take failed
The diskset setname is out of date with respect to the other hosts' view. This error should occur only after one-host administration.
set lock failed - bad key
Either another DiskSuite command is running and has locked the diskset or a DiskSuite command has aborted without unlocking the diskset on one of the hosts in the diskset. Check to see if there are other DiskSuite commands running on the hosts in the diskset. Check all the hosts and allow other commands to complete on all hosts before retrying the failed command. If the error message appears when no other DiskSuite commands are running, kill and restart rpc.metad on all the hosts in the diskset. Make sure rpc.metad is running on all the hosts before trying the command again.
set name contains invalid characters
The diskset name selected is already in use on host nodename or contains characters not considered valid in a diskset name.
set name is too long
An attempt was made to create a diskset using more characters in the diskset name than DiskSuite will accept.
set unlock failed - bad key
The diskset is locked and the user does not have the key. Either another DiskSuite command is running and has locked the diskset or a DiskSuite command has aborted without unlocking the diskset on one of the hosts in the diskset. Check to see if there are other DiskSuite commands running on the hosts in the diskset. Check all the hosts and allow other commands to complete before retrying the failed command. If the error message appears when no other DiskSuite commands are running, kill and restart rpc.metad on all the hosts in the diskset. Make sure rpc.metad is running on all the hosts before trying the command again.
side information missing for host nodename
The diskset is incomplete. Kill rpc.metad on all hosts and then retry the operation.
slice 7 is not usable as a metadevice component
An attempt was made to use Slice 7 in a shared metadevice or shared hot spare pool. Slice 7 is reserved for database replicas only.
submirror too small to attach
The metadevice passed to metattach is smaller than the metamirror to which it is being attached.
stale databases
The user attempted to modify the configuration of a metadevice when at least half the metadevice state database replicas were not accessible.
syntax error
An invalid metadevice configuration entry was provided to metainit from the command line or via the md.tab file.
target metadevice is not able to be renamed
An attempt was made to switch metadevices that do not have a child-parent relationship. For example, you cannot rename a trans metadevice with a stripe that is part of a mirrored master device for the trans.
To create any metadevices or hot spare pools, database replicas must exist. See metadb(1M) for information on the creation of database replicas.
unable to delete set, it still has drives
An attempt was made to delete the last remaining host from a diskset while drives still exist in the diskset.
The user requested that a metadevice dnum be initialized when dnum is already set up.
unit is not a concat/stripe
An attempt was made to perform a concat/stripe-specific operation on a metadevice that is not a concat/stripe.
unit is not a mirror
An attempt was made to perform a mirror-specific operation on a metadevice that is not a mirror.
unit is not a RAID
An attempt was made to perform a RAID-specific operation on a metadevice that is not a RAID device.
unit is not a trans
An attempt was made to perform a metatrans-specific operation on a metadevice that is not a metatrans device.
unit not found
Some other metadevice utility is currently in progress and the lock cannot be accessed at this time. DiskSuite utilities are serialized using the /tmp/.mdlock file as a lock. If you determine that there are no other utilities currently running, you may want to remove this lock file.
Log Messages
The command line log messages displayed by DiskSuite are listed in alphabetical order below. Each message is always preceded with md:. The variables in these messages indicate the following:
- dev is a device name.
- dnum is a metadevice name.
- num is a number.
- state is a metatrans device state.
- trans is either logging or master.
Note - When the initial portion of a message begins with a variable, the message is alphabetized by the first word following the variable.
The named misc module is not loadable. It is possibly missing, or something else has been copied over it.
db: Parsing error on dev
The set command in /etc/system for the mddb.bootlist<number> is not in the correct format. Run metadb -p to place the correct set commands into the /etc/system file.
The first device name listed has been hot spare replaced with the second device name listed.
The first device number listed has been hot spare replaced with the second device number listed.
The underlying named driver module is not loadable (for example, sd, id, xy, or a third-party driver). This could indicate that the driver module has been removed.
Open error of hotspare dev Open error of hotspare dev(num, num)
The named hot spare is not openable, or the underlying driver is not loadable.
A read or write error has occurred on the specified metadevice at the specified device name. This happens if any read or write errors occur on a metadevice.
dnum: read error on dev(num, num) dnum: write error on dev(num, num)
A read or write error has occurred on the specified metadevice at the specified device number. This happens if any read or write errors occur on a metadevice.
dnum: read error on dnum dnum: write error on dnum
A read or write error has occurred on the specified metadevice at the specified device number. This happens if any read or write errors occur on a metadevice.
These messages occur when there have been device errors on components where the state database replicas reside. These errors only occur when more than half of the replicas have had errors returned to them. For example, if you have three components with state database replicas and two of the components report errors, then these errors may occur. The state database commit or delete is retried periodically. If a replica is added, the commit or delete will finish and the system will be operational. Otherwise, the system will time out and panic.
This message occurs when there are not enough usable replicas for the state database to be able to update records in the database. All accesses to the metadevice driver will fail. To fix this problem, add more replicas or delete inaccessible replicas.
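The majority rule described above can be sketched with a few lines of arithmetic (the replica counts are assumed for illustration and do not come from this guide):

```shell
# Majority consensus: strictly more than half of the state database
# replicas must be error-free for updates to proceed.
total=3                          # replicas configured (assumed)
errored=2                        # replicas returning errors (assumed)
usable=$((total - errored))
if [ $((2 * usable)) -gt "$total" ]; then
  status="quorum held: metadevice driver stays available"
else
  status="quorum lost: add replicas or delete errored ones"
fi
echo "$status"
```

With 1 of 3 replicas usable, 2*1 is not greater than 3, so quorum is lost.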
trans device: read error on dnum trans device: write error on dnum
A read or write error has occurred on the specified logging or master device at the specified metadevice. This happens if any read or write errors occur on a logging or master device.
trans device: read error on dev trans device: write error on dev
A read or write error has occurred on the specified logging or master device at the specified device name. This happens if any read or write errors occur on a logging or master device.
trans device: read error on dev(num, num) trans device: write error on dev(num, num)
A read or write error has occurred on the specified logging or master device at the specified device number. This happens if any read or write errors occur on a logging or master device.
logging device: dnum changed state to state logging device: dev changed state to state logging device: dev(num, num) changed state to state
The logging device and its associated master device(s) have changed to the specified state(s).
A failed metadevice state database commit or deletion has been retried the default 100 times.
dnum: Unknown close type dnum: Unknown open type
APPENDIX
Introduction
Upgrading to later versions of the Solaris environment while using metadevices requires steps not currently outlined in the Solaris documentation. The current Solaris upgrade procedure is incompatible with DiskSuite. The following supplemental procedure is provided as an alternative to completely reinstalling the Solaris and DiskSuite packages.
Note - You must have the media to upgrade Solaris (and DiskSuite if necessary).
2. Save /etc/vfstab for later use.

3. Clear any trans metadevices that may be used during the Solaris upgrade (for example, /usr, /var, and /opt). See Solstice DiskSuite 4.2.1 User's Guide for information on clearing (removing logging from) trans metadevices. If you are uncertain which trans metadevices should be cleared, clear all trans metadevices.

4. Comment out file systems in /etc/vfstab mounted on metadevices that are not simple metadevices or simple mirrors. A simple metadevice is composed of a single component with a Start Block of 0. A simple mirror is composed of submirrors, all of which are simple metadevices.

5. Convert the remaining (simple) mirrors to one-way mirrors with the metadetach command. The upgrade will be performed on a single submirror of each mirror. The other submirrors will be synced up with metattach after the upgrade.

6. If root (/) is mounted on a metadevice or mirror, set the root (/) file system to be mounted on the underlying component of the metadevice or the underlying component of the remaining attached submirror. Use the metaroot command to do this safely.

7. Edit the /etc/vfstab file to change any file systems or swap devices still mounted on metadevices or mirrors after Step 3 on page 178. Mount the file systems on the underlying component of the metadevices or the underlying component of the remaining attached submirrors.

8. Remove the symbolic links to the DiskSuite startup files so that DiskSuite is no longer initialized at boot time:
demo# rm /etc/rcS.d/S35lvm.init /etc/rc2.d/S95lvm.sync
These links will be added back later by reinstalling DiskSuite after the Solaris upgrade.

9. Halt the machine, upgrade Solaris, and then reboot the machine.

10. Reinstall DiskSuite, then reboot the machine. This re-establishes the symbolic links removed in Step 8 on page 178.
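Steps 5 and 6 might look like the following (the metadevice and slice names are invented for illustration: a two-way root mirror d0 with submirrors d1 on c0t0d0s0 and d2 on c0t1d0s0):

demo# metadetach d0 d2
demo# metaroot /dev/dsk/c0t0d0s0

The metadetach command leaves d0 as a one-way mirror; metaroot then safely points root (/) at the underlying component of the remaining attached submirror.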
Note - Make certain that the version of Solaris you are installing is compatible with Solstice DiskSuite 4.2.1.
11. If root (/) was originally mounted on a metadevice or mirror, set the root (/) file system to be mounted back on the original metadevice or mirror. Use the metaroot command to do this safely.

12. Edit the /etc/vfstab file to change any file systems or swap devices edited in Step 7 on page 178 to be mounted back on their original metadevice or mirror.

13. Edit the /etc/vfstab file to uncomment the file systems commented out in Step 4 on page 178.

14. Reboot the machine to remount the file systems.

15. Use the metattach command to reattach and resync any submirrors broken off in Step 5 on page 178.

16. Recreate the cleared trans metadevices. See Solstice DiskSuite 4.2.1 User's Guide for information on creating trans metadevices.
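Steps 11 and 15 might look like the following (illustrative names: a root mirror d0 whose submirror d2 was detached before the upgrade):

demo# metaroot d0
demo# metattach d0 d2

The metaroot command points root (/) back at the mirror; metattach reattaches d2 and starts the resync.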
Glossary
attach (logging device)
To add a logging device to an existing trans metadevice. If the trans metadevice is mounted, DiskSuite attaches the log when the file system is unmounted or the system is rebooted.

attach (submirror)
To add a submirror to an existing mirror. DiskSuite automatically resyncs the submirror with other submirrors.

block
A unit of data that can be transferred by a device, usually 512 bytes long.

boot
To start a computer program that clears memory, loads the operating system, and otherwise prepares the computer.

browser
In DiskSuite Tool, a window for browsing through DiskSuite objects in list form. There is a separate browser for slices, metadevices, and hot spare pools.

byte
A group of adjacent binary digits (bits) operated on by the computer as a unit. The most common size byte contains eight binary digits.

canvas
In DiskSuite Tool, the main region where DiskSuite objects are displayed and manipulated.

collapse
A DiskSuite Tool command that decreases (minimizes) the size of DiskSuite objects, as shown on the canvas.

commit
A DiskSuite Tool command that commits changes that have been made to DiskSuite objects. The changes are stored in the md.cf file.

concat
See concatenation.

concatenated stripe
A metadevice made of concatenated groups of striped slices.

concatenation
In its simplest meaning, concatenation refers to the combining of two or more data sequences to form a single data sequence. In DiskSuite: (1) Another word for concatenated metadevice. (2) Creating a single logical device (metadevice) by sequentially distributing disk addresses across disk slices. The sequential (serial) distribution of disk addresses distinguishes a concatenated metadevice from a striped metadevice.
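To make the sequential mapping concrete, here is a small runnable sketch (the slice sizes and block number are invented for illustration and do not come from this guide):

```shell
# Sequential address mapping in a concatenation of three slices whose
# sizes, in blocks, are assumed to be 100, 200, and 150.
lblk=250                      # logical block to locate
set -- 100 200 150            # slice sizes, first to last
slice=0
off=$lblk
for size in "$@"; do
  if [ "$off" -lt "$size" ]; then
    break                     # the block falls inside this slice
  fi
  off=$((off - size))         # skip past this slice's blocks
  slice=$((slice + 1))
done
echo "logical block $lblk -> slice $slice, offset $off"
```

Logical block 250 skips the 100 blocks of slice 0 and lands at offset 150 of slice 1, illustrating the serial distribution of disk addresses.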
configuration
The complete set of hardware and software that makes up a storage system. Typically, a configuration will contain disk controller hardware, disks (divided into slices), and the software to manage the flow of data to and from the disks.

configuration log
A history (log) kept by DiskSuite Tool of all top-level operations and input-validation errors during a session.

controller
Electronic circuitry that acts as a mediator between the CPU and the disk drive, interpreting the CPU's requests and controlling the disk drive.

cylinder
In a disk drive, the set of tracks with the same nominal distance from the axis about which the disk rotates. See also sector.

detach (logging device)
To remove a logging device from a trans metadevice.

detach (submirror)
To remove a submirror's logical association from a mirror.

Disk View
In DiskSuite Tool, a graphical view of the physical devices attached to the system. It can be used to show the relationship between the logical and physical devices.

diskset
A set of disk drives containing logical devices (metadevices) and hot spares that can be shared exclusively (but not concurrently) by two hosts. Used in host fail-over solutions.

DiskSuite objects
In DiskSuite Tool, a graphical representation for the state database, metadevice or part of a metadevice, or hot spare pool.

driver
Software that translates commands between the CPU and the disk hardware.

drop site
In DiskSuite Tool, the region of the Disk View window where any metadevice, group of metadevices, or physical device can be dragged and dropped. The physical layout of the device mappings is displayed after the metadevice is dropped on a specific color in the drop site.

encapsulate
To put an existing file system into a one-way concatenation. A one-way concatenation consists of a single slice.

Evaluate
A DiskSuite Tool command that displays errors and warning messages in the configuration log for the selected metadevice.

Expand
A DiskSuite Tool command that increases (magnifies) the view of DiskSuite objects.

fault tolerance
A computer system's ability to handle hardware failures without interrupting system performance or data availability.

formatting
Preparing a disk to receive data. Formatting software organizes a disk into logical units, like blocks, sectors, and tracks.

full mirror resync
See resyncing.

Gbyte
(Gigabyte), 1024 Mbytes (or 1,073,741,824 bytes).

head
In a magnetic disk drive, an electromagnet that stores and reads data to and from the platter. Controlled by a disk controller.

high-availability
A term describing systems that can suffer one or more hardware failures and rapidly make data access available.

hot spare
A slice reserved to substitute automatically for a failed slice in a submirror or RAID5 metadevice. A hot spare must be a physical slice, not a metadevice.

hot spare pool
A group of hot spares. A hot spare pool is associated with submirrors or RAID5 metadevices.

icon well
In DiskSuite Tool, the region containing icons that are the source for new DiskSuite objects. Icons are used as templates to create metadevices and hot spare pools. See also templates.

interlace
(1) To distribute data in non-contiguous logical data units across disk slices. (2) A value: the size of the logical data segments in a striped metadevice or RAID5 metadevice.
interlace value
See interlace.

Kbyte
(Kilobyte), 1024 bytes.

latency
The time it takes for a disk drive's platter to come around to a specific location for the read/write head. Usually measured in milliseconds. Latency does not include the time it takes for the read/write head to position itself (head seek time).

local diskset
A diskset that is not in a shared diskset and that belongs to a specific host. The local diskset contains the metadevice state database for that specific host's configuration. Each host in a diskset must have a local diskset to store its own local metadevice configuration.

logical
An abstraction of something real. A logical disk, for example, can be an abstraction of a large disk that is really made of several small disks.

logging
Recording UFS updates in a log (the logging device) before the updates are applied to the UNIX file system (the master device).

logging device
The slice or metadevice that contains the log for a trans metadevice.

master device
The slice or metadevice that contains an existing or newly created UFS file system for a trans metadevice.

Mbyte
(Megabyte), 1024 Kbytes.

md.cf
A backup file of the DiskSuite configuration which can be used for disaster recovery. This file should not be edited or removed. It should be backed up on a regular basis.

md.conf
A configuration file used by DiskSuite while loading. It can be edited to increase the number of metadevices and disksets supported by the metadisk driver.

mddb.cf
A file that tracks the locations of state database replicas. This file should not be edited or removed.

md.tab
An input file that you can use with the command line interface utilities metainit(1M), metadb(1M), and metahs(1M) to administer metadevices and hot spare pools.

MetaDB object
The graphical object in DiskSuite Tool that represents the metadevice state database. The MetaDB object administers the metadevice state database and its copies (the state database replicas).

metadevice
A group of physical slices accessed as a single logical device by concatenation, striping, mirroring, setting up RAID5 metadevices, or logging physical devices. After they are created, metadevices are used like slices. The metadevice maps logical block addresses to the correct location on one of the physical devices. The type of mapping depends on the configuration of the particular metadevice. Also known as a pseudo device, or virtual device, in standard UNIX terms.
Metadevice Editor
The main window for DiskSuite Tool. It provides a view of metadevices and hot spare pools in which you can graphically create, display, or edit your configuration.

metadevice state database
A database, stored on disk, that records configuration and state of all metadevices and error conditions. This information is important to the correct operation of DiskSuite and it is replicated. See also state database replica.

metadisk driver
A UNIX pseudo device driver that controls access to metadevices, enabling them to be used like physical disk slices. The metadisk driver operates between the file system and application interfaces and the device driver interface. It interprets information from both the UFS or applications and the physical device drivers.

mirror
A metadevice made of one or more other metadevices called submirrors. It replicates data by maintaining multiple copies.

mirroring
Writing data to two or more disk drives at the same time. In DiskSuite, mirrors are logical storage objects that copy their data to other logical storage objects called submirrors.

multi-way mirror
A mirror that has at least two submirrors.

Objects list
In DiskSuite Tool, a pseudo-browser in the Metadevice Editor window that displays metadevices, hot spares, and configuration problems.

one-way mirror
A mirror that consists of only one submirror. You create a one-way mirror, for example, when mirroring slices that contain existing data. A second submirror is then attached.

online backup
A backup taken from a mirror without unmounting the entire mirror or halting the system. Only one of the mirror's submirrors is taken offline to complete the backup.

optimized mirror resync
A resync of only the submirror regions that are out of sync at a system reboot. The metadisk driver tracks submirror regions and can determine which submirror regions are out of sync after a failure. See resyncing.

Panner
In DiskSuite Tool, the region where a miniature view of the canvas shows small representations of the DiskSuite objects currently displayed on the canvas.

parity
A way for RAID5 configurations to provide data redundancy. Typically, a RAID5 configuration stores data blocks and parity blocks. In the case of a missing data block, the missing data can be regenerated using the other data blocks and the parity block.

partial resync
A resync of only a replacement part of a submirror or RAID5 metadevice, rather than the entire submirror or RAID5 metadevice. See full mirror resync and optimized mirror resync.

partition
See slice. On a SPARC system, a slice and partition are the same. On an x86 system, a slice and partition are distinct. A partition is a part of a disk set aside for use by a particular operating system using the fdisk program. Thus partitioning the disk enables it to be shared by several different operating systems. Within a Solaris partition, you can create normal Solaris slices.

platter
The spinning disk that stores data inside a disk drive.

Put away
A DiskSuite Tool command that returns DiskSuite objects on the Metadevice Editor window canvas to the Objects list.

RAID
Redundant Array of Inexpensive Disks. A classification of different ways to back up and store data on multiple disk drives. There are seven levels of RAID:
Level 0: Nonredundant disk array (striping)
Level 1: Mirrored disk array
Level 2: Memory-style Error Code Correction (ECC)
Level 3: Bit-Interleaved Parity
Level 4: Block-Interleaved Parity
Level 5: Block-Interleaved Distributed-Parity
Level 6: P + Q Redundancy
DiskSuite implements RAID levels 0, 1, and 5.

resync region
A division of a mirror that enables tracking changes by submirror regions rather than over the entire mirror. Dividing the mirror into resync regions can reduce resync time.

resyncing
The process of preserving identical data on mirrors or RAID5 metadevices. Mirrors are resynced by copying data from one submirror to another after submirror failures, system crashes, or after adding a new submirror. RAID5 metadevices are resynced during reboot if any operations that may have been halted by a system panic, a system reboot, or a failure to complete are restarted.

SCSI
Small Computer Systems Interface. An interface standard for peripheral devices and computers to communicate with each other.

sector
The smallest divisions of a disk platter's tracks. Usually 512 bytes. See block.

seek time
The time it takes for a disk drive's read/write head to find a specific track on the disk platter. Seek time does not include latency nor the time it takes for the controller to send signals to the read/write head.

shared diskset
See diskset.

simple metadevice
A term usually reserved for a concatenated metadevice, striped metadevice, or concatenated stripe metadevice.

slice
A part of each physical disk that is treated as a separate area for storage of files in a single file system, or for an application such as a database. Before you can create a file system on a disk, you must partition it into slices.

Slice Filter
In DiskSuite Tool, a menu available from the Disk View window and the Slice Browser that filters the slices to view those available to be parts of metadevices, hot spares, state database replicas, and trans metadevice logs.
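As a runnable aside to the parity entry above, XOR parity can be sketched in a few lines (the data values are invented for illustration):

```shell
# RAID5-style parity: the parity block is the XOR of the data blocks,
# so any single missing block can be regenerated from the rest.
d0=37; d1=142; d2=9
p=$((d0 ^ d1 ^ d2))             # parity stored alongside the data
rebuilt=$((p ^ d0 ^ d2))        # regenerate d1 from parity + survivors
echo "parity=$p rebuilt=$rebuilt"
```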
state database replica
A copy of the metadevice state database. Keeping copies of the metadevice state database protects against the loss of state and configuration information critical to metadevice operations.

stripe
(1) A metadevice created by striping (also called a striped metadevice). (2) An interlaced slice that is part of a striped metadevice. (3) To create striped metadevices by interlacing data across slices.

striping
Creating a single logical device (metadevice) by transparently distributing logical data segments across slices. The logical data segments are called stripes. Striping is sometimes called interlacing because the logical data segments are distributed by interleaving them across slices. Striping is generally used to gain performance, enabling multiple controllers to access data at the same time. Compare striping with concatenation, where data is mapped sequentially on slices.
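By contrast with concatenation, the interleaved mapping can be sketched as follows (the interlace size, slice count, and block number are assumed for illustration):

```shell
# Interleaved address mapping in a 3-slice stripe with a 16-block
# interlace (both values assumed).
lblk=50; nslices=3; interlace=16
seg=$((lblk / interlace))       # which logical data segment
slice=$((seg % nslices))        # segments rotate across the slices
off=$(( (seg / nslices) * interlace + lblk % interlace ))
echo "logical block $lblk -> slice $slice, offset $off"
```

Consecutive segments land on different slices, which is what lets multiple controllers service a single I/O stream.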
submirror
A metadevice that is part of a mirror. See also mirror.

system file
A file used to set system specifications. DiskSuite uses this file, for example, when mirroring the root (/) file system.

templates
In DiskSuite Tool, the template icons create new, empty metadevices. The new metadevices cannot be used until they are populated with their necessary parts. Templates can also be combined to build additional metadevices.

Tbyte
(Terabyte), 1,024 Gbytes, or 1 trillion bytes (1,099,511,627,776 bytes).

three-way mirror
A mirror made of three submirrors. This configuration enables a system to tolerate a double-submirror failure. You can also do online backups with the third submirror.

trans metadevice
A metadevice for UFS logging. A trans metadevice includes one or more other metadevices or slices: a master device, containing a UFS file system, and a logging device. After they are created, trans metadevices are used like slices.

two-way mirror
A mirror made of two submirrors. This configuration enables a system to tolerate a single-submirror failure.

UFS
UNIX file system.

UFS logging
The process of recording UFS updates in a log (the logging device) before the updates are applied to the UNIX file system (the master device).
Index
B
Browser Windows 93, 96
D
Device Statistics Window 9, 74, 75, 77 dialog box error messages 131, 133 information messages 146, 147 warning messages 140, 146 Disk Information Window 9, 70 to 72 and SPARCstorage Array 9, 72, 74 Disk View Window canvas 66 color drop sites 65 legend 67 messages 152, 153 overview 64, 68 panner 67 representation of objects on the canvas 11, 67 setting filters 67 diskset adding disks to 106 administering 108, 109 definition 30 disk drive device name requirements 107 example with two shared disksets 107 hardware requirements 107 inability to use with /etc/vfstab file 106 intended usage 106 maximum number 107 naming conventions 106 placement of replicas 106 relationship to metadevices and hot spare pools 106
C
canvas 64 color drop sites 11, 66 command line utilities 60 Concat Information Window 9, 75 to 77 concatenated metadevice definition 32 example with three slices 33 expanding UFS file system 32 limitations 33 maximum size 33 naming conventions 32 usage 32 concatenated stripe defining interlace 36 definition 36 example with three stripes 36 usage 36 concatenation 32 guidelines 118 Configuration Log Window 12, 100 configuration planning guidelines 118 overview 117 trade-offs 118 confirmation dialog box 99 Controller Information Window 10, 91, 92, 95 Controllers List 65
releasing 109 requirements for creating 107 reservation behavior 109 reservation types 109 reserving 109 single-host configurations 107 Solstice HA 106 support for SPARCstorage Array disks 105 usage 105 DiskSuite objects finding associated mount point 98 locating on the Metadevice Editor canvas 98 overview 20 DiskSuite Tool and using the mouse 60 canvas 63 event notification 102, 103 help utility 12, 101 overview 18, 19, 59, 60 panner 64 starting 60 Tools menu 62, 65, 102 vs. the command line 9, 60, 72
G
general performance guidelines 122 Grapher Window 68 growfs(1M) command 19, 28
H
hot spare 54 attaching 86 conceptual overview 54 enabling 86 removing 86 replacement algorithm 55 replacing 86 size requirements 55 Hot Spare Information Window 9, 84 to 86 hot spare pool 27 administering 57 associating 56 basic operation 27 conceptual overview 53, 55 conditions to avoid 56 definition 21, 27 empty 55 example with mirror 56 maximum number 55 naming conventions 55 status 85
E
error dialog box 99 error messages 157, 171 and format 157 indication of variables 158 /etc/lvm/md.cf file 29, 111 /etc/lvm/md.tab file 29, 111 /etc/lvm/mddb.cf file 29 /etc/lvm/mdlogd.cf file 29 /etc/opt/SUNWmd/mdlogd.cf file /etc/rc2.d/S95lvm.sync file 30 /etc/rcS.d/S35lvm.init file 30
I
I/O 123 information dialog box 99 Information Windows Concat 75 Controller 12, 92 Hot Spares 84 Metadevice State Database 12, 89 Mirror 79 RAID 12, 87 Stripe 12, 78 Trans 82 Tray 12, 91 interlace changing the value on stripes 35, 79, 88 default 35 definition 35
F
failover configuration 30, 105 file system expansion overview 28 guidelines 121 Finder Window 12, 98
specifying 35
K
/kernel/drv/md.conf file 29
L
local diskset 106 log messages 153, 157 and types 153, 157 notice 153, 172 panic 156, 175 warning 154, 156, 173, 175 logging device definition 49 placement 50 shared 49, 50 space required 49 status 84 trade-offs 126
M
majority consensus algorithm 25 master device definition 49 status 84 md.cf file 116 md.tab file creating a concatenated metadevice 113 creating a concatenated stripe 114 creating a hot spare pool 116 creating a mirror 114 creating a RAID5 metadevice 115 creating a striped metadevice 113 creating a trans metadevice 115 creating state database replicas 112 overview 111 mdlogd(1M) daemon 19 metaclear(1M) command 19 metadb(1M) command 19 metadetach(1M) command 19 metadevice conceptual overview 21 default names 23 definition 21 expanding disk space 27 maximum possible 23
naming conventions 23 types 21 uses 22 using file system commands on 22 virtual disk 18 Metadevice Editor Window locating objects 98 messages 147, 152 overview 61, 64 metadevice state database 24 conceptual overview 24, 26 corrupt 26 definition 21, 24 Metadevice State Database Information Window 10, 88, 90, 95 metadisk driver 21 metahs(1M) command 19 metainit(1M) command 19 metaoffline(1M) command 19 metaonline(1M) command 19 metaparam(1M) command 19 metarename(1M) command 19 metareplace(1M) command 20 metaroot(1M) command 20 metaset(1M) command 20 metastat(1M) command 20 metasync(1M) command 20 metatool(1M) command 18, 20 metatool-toolsmenu(4) file 102 metattach(1M) command 20 mirror 38 definition 22 example with two submirrors 40 maximum number of submirrors 40 naming convention 39 options 41, 79 performing online backup 39 resynchronization 41, 42 usage 38 Mirror Information Window 9, 79 to 81 mirror read policies 81 mirror write policies 82 mirroring availability considerations 40 guidelines 119 read and write performance 119 tolerating multiple slice failure 43
mouse
60
N
notice log messages 153, 172
O
Objects List 63
S
sequential I/O 124 shared diskset 30 simple metadevice and starting blocks 38 definition 22, 31, 32 types 31 usage 32 Slice Browser 93 Slice Filter Window 10, 12, 97, 98 Slice Information Window 9, 72 to 74 SPARCstorage Array and the Controller Information Window 10, 93, 95 and the Disk Information Window 9, 72, 74 and the Tray Information Window 90 battery status 93 disk status 71 fan status 93 fast writes 72 firmware revision 9, 72, 74, 93 maintaining 60 starting a disk 71 stopping a disk 71 starting DiskSuite Tool 59 state database replicas 25 attaching 90 basic operation 25 creating multiple on a single slice 27 creating on metadevice slice 27 default size 26 definition 25 errors 27 guidelines 121 location 25, 26, 128 maximum number 26 minimum number 26 recommendations 127 removing 90 replacing 90
P
panic log messages 157, 175 pass (resync) mirror option 81 pass number and read-only mirror 42 defined 42 performance monitoring 68 Problem List Window 12, 100
R
RAID levels supported in DiskSuite 44 RAID Information Window 10, 86 to 88 RAID5 metadevice attaching a slice 88 definition 22, 44 enabling a slice 88 example with an expanded device 46 example with four slices 45 expanding 45 full stripe writes 125 guidelines 120 initializing slices 45 minimum number of slices 45 naming convention 45 parity information 44, 47 performance in degraded mode 126 performance vs. striped metadevice 122 read performance 120 removing slice 88 replacing a slice 88 resyncing slices 45 usage 45 write performance 120 random I/O 123 read policies overview 9, 43, 72 replica 25
restoring 90 two-disk configuration 128 usage 25 Statistics Graph Window 68 overview 68, 69 stripe 34 Stripe Information Window 9, 77 to 79 striped metadevice definition 34 example with three slices 35 limitations 34 performance vs. RAID5 metadevice 122 usage 34 striping 34 compared to concatenation 34 guidelines 118 trade-offs 125 submirror 39 and simple metadevices 39 attaching 40, 82 bringing online 82 definition 39 detaching 40 naming convention 39 operation while offline 39 replacing 82 taking offline 82 system files 29, 30
Trans Information Window 9, 82 to 84 trans metadevice 49 definition 22, 49 determining file systems to log 49 example with mirrors 50 example with shared logging device 51 naming conventions 49 usage 49 Tray Information Window 10, 90, 91, 95
U
UFS logging 48 and file systems 48 and system performance 48 definition 48 upgrading Solaris 177, 179 /usr/lib/lvm/X11/app-defaults/Metatool file 66
V
variables in error messages 158
W
warning dialog box 99 warning log messages 154, 156, 173, 175 write policies overview 9, 43, 72
T
template icons 63