Configuration Planning Guide EonStor v1.1b

Contact Information

Japan
Infortrend Japan, Inc.
6F, Okayasu Bldg., 1-7-14 Shibaura Minato-ku, Tokyo, 105-0023 Japan
Tel: +81-3-5730-6551
Fax: +81-3-5730-6552
http://esupport.infortrend.com.tw
http://www.infortrend.co.jp

Germany
Infortrend Deutschland GmbH
Werner-Eckert-Str. 8, 81829 Munich, Germany
Tel: +49 (0)89 45 15 18 7 - 0
Fax: +49 (0)89 45 15 18 7 - 65
[email protected]
[email protected]
[email protected]
http://www.infortrend.com/germany
Copyright 2008
This Edition First Published 2008
All rights reserved. No part of this publication may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language or computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual or otherwise, without the prior written consent of Infortrend Technology, Inc.
Disclaimer
Infortrend Technology makes no representations or warranties with respect to
the contents hereof and specifically disclaims any implied warranties of
merchantability or fitness for any particular purpose. Furthermore, Infortrend
Technology reserves the right to revise this publication and to make changes
from time to time in the content hereof without obligation to notify any person
of such revisions or changes. Product specifications are also subject to
change without notice.
Trademarks
Infortrend, Infortrend logo, EonStor and SANWatch are all registered
trademarks of Infortrend Technology, Inc. Other names prefixed with “IFT”
and “ES” are trademarks of Infortrend Technology, Inc.
Table of Contents
Contact Information ................................................................................................. 3
Copyright 2008.......................................................................................................... 3
This Edition First Published 2008 ...................................................................... 3
Disclaimer .......................................................................................................... 3
Trademarks ........................................................................................................ 4
Table of Contents ..................................................................................................... 4
Organization of this Guide ...................................................................................... 5
Revision History ....................................................................................................... 5
Related Documentation ........................................................................................ 5
5. Opening a Management Console: ........................................................................... 31
6. Creating RAID Elements ........................................................................................ 37
Organization of this Guide
Chapter 2 describes RAID levels and logical drives (also termed RAID groups or arrays) and how they provide fault tolerance and combined performance.
Revision History
Rev. 1.0: Initial release
Rev. 1.1: - Removed JBOD from the RAID level introduction. NRAID
provides similar functionality.
Related Documentation
Firmware Operation Manual
SANWatch User’s Manual
These documents can be found on the product utility CD included with your system package and are updated continuously as technologies and specifications change.
Chapter 1
Host Interface and Storage Configuration Basics
SAN
Storage Area Network. Refers to configurations that include storage systems connected through Fibre Channel data links. SAN configurations often include interconnect hardware such as fabric switches. A Fibre Channel SAN can span an entire enterprise or beyond and enables connections to an almost limitless number of application servers in a storage network.
IP SAN
1-3. Host Link Components:
Storage-side components:
Host ports:
1. SAS links for DAS:
There are two different kinds of SAS ports: SFF-8088 and
SFF-8470; both are multi-lane wide ports.
Fibre Channel host ports are SFP sockets that receive
separately purchased Fibre Channel transceivers. The
transceiver converts electrical signals into optical signals and
transmits data over fiber optical links.
Fibre Channel optical transceiver:
Fiber-optic cable (LC-to-LC):
1-4. Cabling Host Ports and Theories behind Topologies:
Shown below are the basics of cabling systems in single- and redundant-controller configurations.
Legends
HBA: Host bus adapter.
LD: Logical drive; a logical group of 6, 8, or another number of disk drives.
AID: e.g., A112; a host ID managed by controller A.
BID: e.g., B113; a host ID managed by controller B.
LUN Mapping: Host LUN mapping is presented by the encircled numbers placed either by the LD or on the data paths.
Controller: The RAID controllers within the storage system. Controllers are identified as controller A or controller B.
CH0: Host channel 0.
CH1: Host channel 1.
RCC: The communication paths between controllers.
FC switch: A Fibre Channel switch that provides intermediate connectivity to form a storage area network. FC switches also provide access control such as zoning.

NOTE:
1. The samples below are made with Fibre Channel connectivity.
2. The default host IDs can vary on the EonStor models:
   FC: 112 and 113
   SAS: 0, 1 (single controller); 6, 7 (dual-controller)
   iSCSI: 0, 1 (single controller); 6, 7 (dual-controller)
1-4-1. Calculating an Approximate Storage Performance:
The LD performance can roughly fill a 4Gbps Fibre host channel.
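As a rough, hedged illustration of this sizing idea, the sketch below compares a logical drive's aggregate sequential throughput with the usable bandwidth of a 4Gbps Fibre Channel host link. The per-drive throughput and the ~400 MB/s usable link figure are assumptions for illustration only; substitute measured values from your own drives and HBAs.

```python
# Rough estimate of whether a logical drive's aggregate sequential throughput
# can fill a 4Gbps Fibre Channel host link. The per-drive figure and the
# usable-link figure are illustrative assumptions; substitute measured values.

FC_4G_USABLE_MB_S = 400        # assumed usable payload rate of a 4Gbps FC link
PER_DRIVE_SEQ_MB_S = 60        # assumed sequential throughput per disk drive

def ld_sequential_mb_s(member_drives: int, parity_drives: int = 1) -> int:
    """Very rough aggregate sequential throughput of a logical drive (MB/s)."""
    return (member_drives - parity_drives) * PER_DRIVE_SEQ_MB_S

for n in (6, 8, 16):
    mb_s = ld_sequential_mb_s(n)
    print(f"{n}-drive RAID5: ~{mb_s} MB/s "
          f"(~{mb_s / FC_4G_USABLE_MB_S:.1f}x one 4Gbps FC host channel)")
```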
Multi-pathing Driver:
With the EonPath multi-pathing driver, traffic can be balanced across multiple host links by presenting a logical drive on all of them. Configured this way, you can fully utilize the powerful engine in the EonStor series.
A combination of 32 HDDs in a RAID enclosure and an attached JBOD can theoretically make the best use of the power of a 16-bay redundant-controller system:
Because your application servers may not always generate enough I/O to fully stress the arrays, more disk drives can be attached. In a storage configuration, logical drives, host LUN mapping, and other settings can be re-arranged if the nature of your host applications and data changes over time.
Other Considerations:
• For high-speed I/O channels, use host bus adapters with at least a PCI-X x8-lane bus interface. Using outdated HBAs on a narrow bus can limit host-storage performance.
• For a higher level of fault tolerance, if you connect 4 host links from redundant RAID controllers, make the connections with dual-ported HBAs instead of linking all 4 ports to a single quad-ported HBA.
• Perform throughput testing on the whole deployment before starting your applications.
• Understand and fine-tune your I/O. Create logical drives to meet your needs for performance, fault tolerance, or both. Minor details, such as HBA BIOS settings and queue depth configurations, can be important but are easily overlooked.
1-4-4. Redundant-controller storage in a switched fabric:
Preparing a redundant-controller system
requires both AID and BID. Resource
distribution is also determined by Logical
Drive Assignment. If a logical drive is
assigned to controller A, then controller A
manages the I/Os to that logical drive.
Elements in this drawing are:
LD: Logical drives are configured by
grouping physical drives.
LD assignment: Each logical drive is
either assigned to controller A or to
controller B.
ID Mapping: Logical drives are mapped
to IDs on all host channels to leverage all
host port bandwidth.
Infortrend firmware comes with 1 host ID
on each channel. You need to manually
create more IDs.
NOTE:
1. Multiple IDs on a Fibre Channel host channel are not allowed if the channel is configured in the “point-to-point” mode.
   The maximum number of LUNs is:
   Point-to-point: 4 (host channels) x 1 (ID per channel) x 32 (LUNs per ID) = 128
   FC-AL: 4 (host channels) x 8 (IDs per channel) x 32 (LUNs per ID) = 1024
   You will seldom need the maximum number, and having too many LUNs can degrade performance.
2. It is recommended to set your storage and switch ports to the loop mode (FC-AL). In some circumstances involving cabling/controller failures, a server may not regain access to storage through a switch port configured in the fabric mode (point-to-point).
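The LUN arithmetic in note 1 above can be restated in a short sketch; the channel, ID, and LUN-per-ID counts are the example figures quoted above and may differ on your model.

```python
# Theoretical LUN ceiling, restating the figures quoted in note 1 above
# (4 host channels, 1 or 8 IDs per channel, 32 LUNs per ID). Check your
# model's actual limits before relying on these numbers.

def max_luns(host_channels: int, ids_per_channel: int, luns_per_id: int) -> int:
    return host_channels * ids_per_channel * luns_per_id

print("Point-to-point:", max_luns(4, 1, 32))   # 128
print("FC-AL:", max_luns(4, 8, 32))            # 1024
```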
1-4-5. Redundant-controller storage for dedicated performance:
Some storage applications may not require a high level of fault tolerance, e.g., AV post-production editing.
Elements in this drawing are:
LD: Logical drives are configured by
grouping physical drives.
LD assignment: Each logical drive
is either assigned to controller A or
to controller B.
ID Mapping: Logical drives are
mapped to IDs on all host channels
to leverage all host port bandwidth.
Infortrend firmware comes with 1
host ID on each channel. You need
to manually create more IDs.
NOTE:
The sample topologies in this document do not cover the cases
of using the onboard hub (onboard FC bypass) such as those
applied in the ASIC266 models. The onboard hub turns host
ports of partner RAID controllers into a host loop.
1-4-6. Redundant-controller, high availability, for clustered servers:
Provides shared storage for high
availability clustered servers.
Elements in this drawing are:
LD: Logical drives are configured
by grouping physical drives.
LD assignment: Each logical drive
is either assigned to controller A or
to controller B.
ID Mapping: Logical drives are
mapped to IDs on all host channels
to leverage all host port bandwidth.
The IDs in green circles are stand-by IDs. The stand-by IDs provide alternate access in the event that the controller holding the original ownership fails.
Infortrend firmware comes with 1
host ID on each channel. You
need to manually create more IDs.
1-4-7. One controller failed in a redundant-controller storage:
Elements in this drawing are:
Controller failure: Controller B fails. All AIDs and BIDs are taken over by controller A, the surviving controller.
Disk Access: LD1 is accessed through
the alternate data paths on the
backplane.
The failover process takes only a few
seconds and is transparent to users.
1-4-8. Cable Link Failure: Before Dynamic LD Assignment (introduced in FW 3.64J), a cabling failure can cause degraded performance in the scenario diagrammed below.
1-4-9. Dynamic Switch of LD Ownership in a redundant-controller storage:
Dynamic LD Assignment can dramatically improve system performance in the same
cabling failure scenario.
Since firmware revision 3.64J, LD
ownership can be temporarily
shifted to the partner controller to
avoid the overhead of re-directing
I/Os through the RCC links. The
LD0 ownership is temporarily
handed to Controller B.
1-4-10. The Active and Passive path mechanism to a redundant-controller
storage:
The data path’s Active/Passive
status is determined by the
logical drive ownership. If a
logical drive (LD0) is assigned to
controller A, the data paths to
controller A are considered as
the Active or optimal paths for
the access to LD0. I/Os will be
distributed through the Active
paths.
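As an illustrative sketch (not Infortrend code) of this Active/Passive classification, the following snippet splits host-side paths for one logical drive into Active and Passive sets based on which controller owns the LD; the path names and data structures are assumptions.

```python
# Illustrative sketch of the Active/Passive classification described above:
# paths to the controller that owns a logical drive are Active (optimal);
# paths to the partner controller are Passive. Names and structures are
# assumptions for illustration only.

def classify_paths(ld_owner: str, paths: list) -> dict:
    """Split host-side paths into Active and Passive sets for one logical drive."""
    return {
        "active": [p for p in paths if p["controller"] == ld_owner],
        "passive": [p for p in paths if p["controller"] != ld_owner],
    }

paths = [
    {"name": "CH0-A", "controller": "A"},
    {"name": "CH1-A", "controller": "A"},
    {"name": "CH0-B", "controller": "B"},
    {"name": "CH1-B", "controller": "B"},
]

# LD0 is assigned to controller A, so the A-side paths are Active for LD0.
print(classify_paths("A", paths))
```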
Chapter 2
RAID Levels
Redundant Arrays of Independent Disks, or RAID, offers the
following advantages: availability, capacity, and performance.
Choosing the right RAID level and drive failure management can
increase capacity and performance, subsequently increasing
availability. Infortrend's external RAID controllers and subsystems
provide complete RAID functionality and enhanced drive failure
management.
NOTE:
Logical volumes, such as RAID50, can provide a higher level of fault tolerance than RAID5. However, the use of logical volumes is not always necessary. Using logical volumes adds load on the system hardware and may not be optimal for most applications.
Sample Applications
RAID0: RAID0 can deliver the best performance, but be reminded that it provides no protection for your data. RAID0 is ideal for applications needing a temporary data pool for high-speed access.
RAID1 (0+1): RAID1 is useful for a small group of drives requiring high availability and fast write access, although it is expensive in terms of usable drive capacity.
RAID3: RAID3 works well with single-task applications featuring large transfers, such as video/audio post-production editing, medical imaging, or scientific research requiring purpose-oriented performance.
RAID5: RAID5 is the most widely used level and is ideal for media, legal, or financial database repositories with fewer write requests. RAID5 can adapt to multi-task applications with various I/O sizes. A RAID5 array with an adequate stripe size is also applicable to large I/O transfers.
RAID6: RAID6 provides a high level of data availability and the benefits of RAID5, with the minor trade-off of slightly lower write performance. RAID6 can mend the defects of using cost-effective SATA drives, where magnetic defects can cause problems if another member drive fails at the same time.
RAID Levels in Detail
NRAID - Disk Spanning
Minimum Disks required: 1
Capacity: N
Redundancy: No

RAID0 - Disk Striping
Minimum Disks required: 2
Capacity: N
Redundancy: No
RAID1 - Disk Mirroring
Minimum Disks required: 2
Capacity: N/2
Redundancy: Yes

RAID (0+1)
Minimum Disks required: 4
Capacity: N/2
Redundancy: Yes
IMPORTANT!
“RAID (0+1)” will not appear in the list of RAID levels supported by the controller. If you select RAID1, the system firmware will determine whether to create a RAID1 or a RAID (0+1) array, depending on the number of disk drives selected to compose the logical drive.
RAID3
Minimum Disks required: 3
Capacity: N-1
Redundancy: Yes

RAID5
Minimum Disks required: 3
Capacity: N-1
Redundancy: Yes
RAID6 - Striping with Redundant (P+Q) Parity Scheme
Minimum Disks required: 4
Capacity: N-2
Redundancy: Yes

NOTE: A RAID6 array can withstand simultaneous failures of two disk drives, or one drive failure and bad blocks on another member drive.
RAID6 is similar to RAID5, but two parity blocks are available within each data stripe across the member drives. Each RAID6 array uses two (2) member drives for storing parity data. The RAID6 algorithm computes two separate sets of parity data and distributes them to different member drives when writing to disks. A RAID6 array therefore requires the capacity of two disk drives for storing parity data.
Each disk drive contains the same number of data blocks, and parity information is interspersed across the array following preset algorithms. A RAID6 array can tolerate the failure of more than one disk drive; or, in the degraded condition, one drive failure plus bad blocks on another member. In the event of disk drive failure, the controller can recover or regenerate the lost data of the failed drive(s) without interruption to normal I/Os.
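The capacity figures above (N, N/2, N-1, and N-2, where N is the combined capacity of the member drives) can be summarized in a short illustrative sketch; this is not firmware code, and it assumes all member drives are of equal size.

```python
# Usable capacity per RAID level, where n is the number of member drives and
# drive_gb the capacity of each member (all assumed equal in size). This simply
# restates the N, N/2, N-1 and N-2 rules listed above; it is not firmware code.

def usable_capacity_gb(level: str, n: int, drive_gb: float) -> float:
    if level in ("NRAID", "RAID0"):
        return n * drive_gb              # N: no redundancy
    if level in ("RAID1", "RAID(0+1)"):
        return (n // 2) * drive_gb       # N/2: mirrored
    if level in ("RAID3", "RAID5"):
        return (n - 1) * drive_gb        # N-1: one drive's worth of parity
    if level == "RAID6":
        return (n - 2) * drive_gb        # N-2: two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# Example: eight 500GB member drives
for level in ("RAID0", "RAID1", "RAID5", "RAID6"):
    print(level, usable_capacity_gb(level, 8, 500), "GB usable")
```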
Chapter 3
2. Use Worksheets to keep a hard record of how your storage is configured. An example worksheet includes columns such as:
   Application | File system (OS) | RAID level of LUN | LUN ID details | LUN capacity | Server info. | Host links (HBA, switch, etc.)
You can expand the worksheet to include more details such as the
disk drive channel on which the disks reside, JBOD enclosure ID,
whether the LUNs are shared, and shared by which servers, etc.
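One possible way to keep such a worksheet in machine-readable form is sketched below; the column names follow the example above, while the file name and sample row are purely illustrative.

```python
# Keep the configuration worksheet as a CSV file so it can be versioned with
# other site documentation. Column names follow the worksheet above; the file
# name and the sample row are purely illustrative.
import csv

COLUMNS = ["Application", "File system (OS)", "RAID level of LUN",
           "LUN ID details", "LUN capacity", "Server info.",
           "Host links (HBA, switch, etc.)"]

rows = [
    ["Mail store", "NTFS (Windows Server 2003)", "RAID5",
     "CH0 ID112 LUN0", "1.2TB", "MAILSRV01, dual-ported FC HBA",
     "2x 4Gbps FC via fabric switch"],
]

with open("storage_worksheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```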
3. Drive Location:
Tray Numbering:
The same disk tray layout applies to all of Infortrend's storage enclosures. Trays are numbered from left to right and then from top to bottom. It is advised that you select members for a logical drive following the tray numbering order, to avoid confusion when using the LCD keypad or the text-based firmware utility.
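As a small illustration of this numbering rule, the sketch below converts a tray's row and column into its tray number; the 4-column layout and the starting number are assumptions, so adjust them to match your enclosure.

```python
# Tray numbering sketch: trays are numbered left to right, then top to bottom.
# The 4-column layout and the starting number are assumptions; adjust them to
# match your enclosure.

def tray_number(row: int, col: int, cols_per_row: int = 4, first: int = 1) -> int:
    """row and col are zero-based; returns the tray number."""
    return first + row * cols_per_row + col

# A 16-bay enclosure laid out as 4 rows of 4 trays:
for r in range(4):
    print([tray_number(r, c) for c in range(4)])
```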
For example, a typical single enclosure configuration can look like
this:
Step 1. Use the included serial cable to connect the COM1 serial
ports. COM1 is always located on the RAID controllers.
Step 2. If your system is powered by a single RAID controller,
connect the single end-to-end cable.
Step 3. The next screen requires you to select a serial port on
your PC.
Step 5. The initial screen for the text-based utility should
display.
Step 3. Consult your network administrator for an IP address
that will be assigned to the system Ethernet port.
Step 4. Use the LCD keypad or RS-232 console to select
"View and Edit Configuration Parameters" from the
Main Menu on the terminal screen. Select
"Communication Parameters" -> "Internet Protocol
(TCP/IP)" -> press ENTER on the chip hardware
address -> and then select "Set IP Address."
NOTE:
The IP default is “DHCP client.” However, if a DHCP server cannot be found within several seconds, the default IP address “10.10.1.1” will be loaded. This feature is available in the EonStor ASIC400 models.
NOTE:
A management console using SANWatch or the web-based
Embedded RAIDWatch is not the topic of this document. Please
refer to their specific user documents for details.
To make an SSH connection from Windows, you can use SSH tools such as “PuTTY.”
Character set translation setting:
Appearance menu:
6. Creating RAID Elements
Step 1. Make sure all physical drives are properly installed by
checking the View and Edit Drives menu.
Step 2. Use the ESC key to return to the Main Menu. Now you
can go to the View and Edit Logical Drives menu to
begin RAID configuration.
Step 3. Select an index number by pressing Enter on it; configuration usually starts from LG0. Confirm your selection by moving the highlighted area to Yes and pressing Enter.
the enclosure and also the performance concerns
mentioned earlier in this document.
Step 6. Press the ESC key when you have selected all members. An LD parameters window will appear.
Step 6-1.
Step 6-2.
Step 6-3.
Step 6-4.
Step 6-5.
Step 6-6.
The Online Initialization Mode allows you to continue
with the rest of the system setup steps without having
to wait for the logical drive to be fully initialized.
Initializing an LD terabytes in size can take hours.
Step 6-7.
The default stripe size (128KB) is applicable to most applications. The stripe size can be adjusted in situations where the I/O characteristics are predictable and simple. For example, logical drives in a RAID system serving an AV stream editing application have a dedicated purpose. In such an environment, you can match the size of host I/O transfers to the LD stripe size so that 1 or 2 host I/Os can be efficiently served within a parallel write.
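To illustrate the idea of matching host I/O size to the stripe arrangement, the sketch below computes the full-stripe write size for a given stripe size and data-drive count; the drive counts and I/O sizes are example assumptions only.

```python
# Rough helper for matching host I/O size to the logical drive's stripe layout.
# The member count, stripe size, and host I/O size below are examples only.

def full_stripe_kb(data_drives: int, stripe_size_kb: int) -> int:
    """Kilobytes written across one full stripe (data drives only)."""
    return data_drives * stripe_size_kb

# Example: an 8-drive RAID5 (7 data drives) with the default 128KB stripe size
stripe_kb = full_stripe_kb(7, 128)
host_io_kb = 1024                      # a 1MB host I/O
print(stripe_kb, "KB per full stripe")
print(round(host_io_kb / stripe_kb, 2), "full stripes per 1MB host I/O")
```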
Step 7. Press the ESC key once you have set all configurable details. A confirmation message box will appear. Check the details before moving to the Yes option. Press Enter on Yes to begin the creation process.
Step 9. Press ESC to hide this progress indicator. The
progress bar will run in the background. If the online
mode was selected, you can continue with the rest of
the procedure, such as host LUN mapping.
Step 10. You should return to the “View and Edit Logical Drives”
screen. Press Enter on the LD you just created, and
select “Logical Drive Name.” Enter a name for ease of
identification, such as “ExchangeServer.”
NOTE:
You may divide a logical drive or logical volume into partitions of
desired capacity, or use the entire capacity as a single volume.
Step 11. Select another entry in the LD list and repeat the process described above to create more logical drives.
Step 12. Create more host IDs in the “View and Edit Channels”
menu.
Step 12-1.
Press Enter to select a host channel.
Step 12-2.
Press Enter on View and edit SCSI ID.
Step 12-3.
Press Enter on any of the existing IDs.
Step 12-4.
Press Enter to add host channel IDs.
Step 12-5.
Select the Slot A or Slot B controller. Slot A and Slot B determine the ownership of logical drives. A logical drive associated with a Slot A ID will be managed by the Slot A controller (controller A); one associated with a Slot B ID, by the Slot B controller.
Step 12-6.
Select an ID from the pull-down list.
Step 12-7.
Confirm the Add action by selecting Yes, and continue the Add ID process by selecting No. Repeat the process to create more AIDs or BIDs as planned for your configuration.
Step 14. Reset the controller after you have created all the AIDs and BIDs planned for your configuration.
Step 15. A reset may take several minutes. Enter the View and
Edit Host LUNs menu.
Step 16. Press Enter on a host ID. It is now necessary to refer to the topology plan you made previously. The example below assumes a dedicated DAS topology.
The complete LUN mapping steps are as follows:
Step 17. Repeat the mapping process until you have presented all your LDs properly on the host buses according to your application plan.
Step 18. You should then see the volumes on your application
server (using Windows Server 2003 as an example).
Configure and initialize the 2 LDs in the Disk
Management window.
NOTE:
Make sure the firmware on your subsystem is EonPath compatible. Some earlier firmware revisions, e.g., 3.42, may not work with EonPath.
TIPS:
1. For answers to difficulties you might encounter during the initial configuration process, refer to the Support -> FAQ section of Infortrend's website.
Appendix 1
Tunable Parameters
Fine-tune the subsystem and the array parameters for your host applications. Although the factory defaults provide optimized operation, you may refer to the table below to facilitate tuning of your array. Some of the performance and fault-tolerance settings may also be changed later, during the preparation of your disk array.
Use this table as a checklist and make sure you have each item
set to an appropriate value.
(3) Non-critical
(2) Periodic Drive Check Time | Default: Disabled | Options: Disabled, 0.5 to 30 seconds
    (This option is not necessary in models using serial drive buses such as SAS or Fibre.)
(2) Rebuild Priority | Default: Normal | Options: Low, normal, improved, high

Controller:
(1) Channel Mode | Default: * | Options: Host, Drive, RCCOM, Drive + RCCOM (RCC options are not configurable in the ASIC400 models)
(1) Host and Drive Channel IDs | Default: * (preset)
(1) Controller Unique Identifier | Default: Preset on most models | Options: Hex number from 0 to FFFFF (FW 3.25 and above)
(2) Data Rate | Default: Auto | Options: adjust as needed for problem solving
(1) Date and Time | Default: N/A
(1) Time Zone | Default: +8 hrs

Optimization:
(1) Write-back Cache | Default: Enabled | Option: Disabled
(1) LD Stripe Size | Default: related to the controller's general setting and application I/O characteristics | Options: 32KB to 1024KB
(2) Adaptive Write Policy | Default: Disabled | Option: Enabled
(2) LD Write Policy | Default: LD-specific or dependent on the system's general setting | Options: W/B or W/T
… for specific SATA disk drives
Data Integrity:
(3) Task Scheduler | Default: N/A | Options: execute on initialization; start time and date; execution period; media scan mode; media scan priority; select logical drive

Array Configuration:
(1) Disk Reserved Space | Default: 256MB
(1) AV Optimization Mode | Default: Disabled | Options: Fewer Streaming, Multiple Streaming
(1) Max Drive Response Timeout | Default: Disabled | Options: 160, 320, or 960ms
(2) Array Assignment | Default: Primary controller | Option: Secondary controller
(1) Array Partitioning | Default: 1 | Options: up to 64
(1) Auto-assign Global Spare | Default: Disabled | Option: Enabled

Enclosure Monitoring:
(2) Event Triggered Operation | Default: N/A | Options: Controller, fan, PSU, BBU, UPS, and elevated temperature; auto-shutdown: 2 mins to 1 hour
(1) Thresholds for Voltage and Temperature Self-Monitoring | Defaults: CPU temp 0~90˚C; board temp 0~80˚C; 3.3V: 2.9~3.6V; 5V: 4.5~5.5V; 12V: 10.8~13.2V | Options: user-defined; do not change these parameters unless necessary

Others:
(3) Password | Default: N/A | Options: user-defined; Password Validation Timeout: 1 second to Always Check, configurable
(3) LCD Display Controller Name | Default: N/A | Options: user-defined
(1) UPS Support | Default: N/A | Options: COM2 baud rate and related settings; event-triggered operation
(1) Cylinder/Head/Sector Mapping | Default: Variable | Depends on the host OS
Supported RAID Configurations on Both Sides of the 1GB Threshold

Feature | Default with < 1GB DIMM | Default with >= 1GB DIMM
64-bit LBA support (>2TB) | Yes | Yes
Number of LDs | 16 (max.) | 32 (max.)
Number of LVs (Logical Volumes) | 8 (max.) | 16 (max.)
Number of Partitions per LD | 16 (max.) | 64 (max.)
Number of LUNs per channel ID | 8 (32 max.) | 8 (32 max.)
Number of LUNs | 128 (max.) | 1024 (max.)
Caching Mode | Write-back | Write-back
Stripe size, RAID5 | 128KB | 128KB
Auto-assign Global Spare | Disabled | Disabled
Max. LD capacity | 64TB max. | 64TB max.
No. of Media Scan Tasks by scheduler | 16 max. | 16 max.
Max. no. of member drives per DIMM size, RAID5 | 128 HDD/512MB
NOTE:
A maximum of 128 members in a logical drive is a theoretical number. Rebuilding or
scanning such a logical drive takes a long time.
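As a hedged illustration of how the limits in the table above might be checked during planning, the sketch below compares a hypothetical configuration plan against the >= 1GB DIMM column; the function and plan structure are not part of any Infortrend tool.

```python
# Illustrative check of a planned configuration against the limits in the table
# above. The values encode the >= 1GB DIMM column; this is not an Infortrend
# tool, just a planning aid sketch.

LIMITS_1GB_DIMM = {
    "logical_drives": 32,
    "logical_volumes": 16,
    "partitions_per_ld": 64,
    "luns_total": 1024,
    "members_per_ld": 128,   # theoretical maximum, per the note above
}

def check_plan(plan: dict) -> list:
    """Return a list of limit violations found in a planned configuration."""
    return [f"{key}: planned {plan[key]} exceeds limit {limit}"
            for key, limit in LIMITS_1GB_DIMM.items()
            if plan.get(key, 0) > limit]

print(check_plan({"logical_drives": 4, "partitions_per_ld": 8, "luns_total": 64}))
```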
Appendix 2
Protection by Hot Spares
Infortrend's firmware provides flexibility with three different kinds of hot spare drives:
• Local (dedicated) Spare
• Enclosure Spare
• Global Spare
When any drive fails in a RAID1, 3, 5, or 6 logical drive, a hot spare automatically proceeds with an online rebuild. This appendix shows how these three types function and introduces the related settings.
The mechanism above shows how the controller’s embedded
firmware determines whether to use Local, Enclosure, or
Global Spares to rebuild a logical drive.
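The selection flowchart referenced above is not reproduced here, but assuming the precedence follows the order in which the three spare types are listed (Local, then Enclosure, then Global), it can be sketched as follows; the function name and data model are hypothetical.

```python
# Illustrative sketch of hot-spare selection, assuming the precedence follows
# the order in which the three spare types are listed above: a Local
# (dedicated) Spare for the failed logical drive first, then an Enclosure Spare
# in the failed drive's enclosure, then a Global Spare. Names are hypothetical.

def pick_spare(spares: list, failed_ld: str, failed_enclosure: int):
    """Return the first available spare by precedence, or None if no spare fits."""
    for kind in ("local", "enclosure", "global"):
        for spare in spares:
            if spare["kind"] != kind or spare["in_use"]:
                continue
            if kind == "local" and spare.get("ld") != failed_ld:
                continue
            if kind == "enclosure" and spare.get("enclosure") != failed_enclosure:
                continue
            return spare
    return None   # no suitable spare; the rebuild waits for a drive replacement

spares = [
    {"kind": "global", "in_use": False},
    {"kind": "enclosure", "enclosure": 1, "in_use": False},
    {"kind": "local", "ld": "LD0", "in_use": False},
]
print(pick_spare(spares, failed_ld="LD0", failed_enclosure=1))   # the Local Spare wins
```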
Every disk drive that is not included in a logical drive will be automatically configured as a Global Spare.
Having members across different enclosures may not adversely affect logical drive operation; however, it is easy to forget the locations of member drives, and the chance of making mistakes increases. For example, you might replace the wrong drive and destroy a logical drive that is already in degraded mode (having one failed member).