HPE StoreVirtual VSA Design and Configuration Guide
For solutions based on Microsoft Hyper-V and VMware vSphere
Contents
Introduction
Design considerations
  Step 1: Performance, capacity, and availability requirements
  Step 2: Storage options and configuration
  Step 3: Network considerations
  Step 4: Server requirements
  Upgrading the StoreVirtual VSA configuration
HPE StoreVirtual VSA on VMware vSphere
  Preparation for StoreVirtual VSA
  Deploying StoreVirtual VSA on vSphere
  Advanced management
HPE StoreVirtual VSA on Microsoft Hyper-V
  Enterprise remote office or medium-sized business configuration
  Preparation for StoreVirtual VSA
  Deploying StoreVirtual VSA on Hyper-V
  Advanced management
Summary
Frequently asked questions
Appendix A: Bill of materials
  Bill of materials for three-node HPE ProLiant DL380p Gen8 VSA with vSphere
  Bill of materials for three-node HPE ProLiant DL360p Gen8 VSA with Microsoft Hyper-V
Appendix B: Mass deployment of StoreVirtual VSAs on VMware vSphere
All these factors have driven a dramatic increase in demand for virtual storage appliances that can resolve these issues.
The software-defined primary storage offering, HPE StoreVirtual VSA, is a virtual storage appliance that creates highly available shared storage
from direct-attached storage in VMware vSphere and Microsoft Hyper-V environments. Its platform flexibility allows you to create a virtual
array within and across any x86 server and non-disruptively scale capacity and performance as workload requirements evolve. The ability to use
internal or external storage within your environment greatly increases storage utilization and eliminates the costs and complexity associated with
dedicated external SAN storage. Storage pools based on StoreVirtual VSA are optimized for vSphere and Hyper-V and typically scale linearly
from 2 up to 16 nodes (see figure 1).
Figure 1. Combining capacity and performance of individual StoreVirtual VSAs into a StoreVirtual Cluster (each server runs the hypervisor, the StoreVirtual VSA, and virtual machines on RAID-protected storage, with two network interfaces for storage attached to a dedicated network segment)
With the wide adoption of virtualization and the evolution of software-defined storage technologies, it is important to understand the decision
points within the infrastructure that can directly influence your workloads. A key benefit of software-defined storage is the flexibility to build a
system to match your workload requirements. HPE also offers pre-configured solutions in the form of the HPE ConvergedSystem 200-HC StoreVirtual
System, which arrives pre-configured with servers, storage, networking, and VMware vSphere to enable complete deployment of a
virtualized environment in 15 minutes. This document is intended to help accelerate the decision-making process when designing solutions
based on StoreVirtual VSA by outlining the different components and hypervisor configurations you have to choose from.
Target audience
This paper is for server and storage administrators looking for guidance in deploying storage solutions based on HPE StoreVirtual VSA. This
paper assumes you are familiar with the architecture and feature set of the StoreVirtual portfolio.
Design considerations
You will find that regardless of whether your solution is based on vSphere or Hyper-V, most of the overarching design considerations presented
in this section are very similar and apply to both platforms. In fact, performance and capacity sizing of the StoreVirtual VSA is the same on the
two platforms. Even though the actual implementation of networking differs between those platforms, most of the general guidance applies
to both.
As mentioned in the introduction, one of the major advantages of StoreVirtual VSA is its linear scalability for capacity and performance. As a
direct result of this distributed and scale-out architecture, it is key that all storage nodes (i.e., the StoreVirtual VSA and its storage) provide the
same capacity and performance to the storage pool, the StoreVirtual Cluster. The number of StoreVirtual VSAs and their characteristics
determine the overall cluster characteristics.
Looking more closely at the storage nodes, the requirement to have similar performance and capacity per storage node translates into identical
(or at least almost identical) hardware components in the servers running the StoreVirtual VSA. Three core areas that require careful planning
are server platform, storage options, and the network, as shown in figure 2.
Figure 2. Three core planning areas: server, storage options, and networking
Because storage resources are the bottleneck in most environments, StoreVirtual designs typically start out by selecting the right storage options
to accommodate capacity and performance needs. The number of disks and RAID controllers on a platform (along with the CPU and memory
options) dictate the server model that should host the StoreVirtual VSA and applications running in virtual machines. Because StoreVirtual uses
Ethernet network as a storage backplane and as a way to present storage via iSCSI to servers and virtual machines, the network is another
key component.
As you design a plan to build a storage pool based on StoreVirtual VSA, this section will guide you through a variety of considerations in a
four-step approach:
1. Understanding your performance, availability, and capacity requirements
2. Selecting storage options and configuration
a. Which disk technologies can I choose from? Which type is recommended for which workload?
b. What RAID controller technologies can benefit the solution?
c. What are typical RAID configurations and recommended disk layouts?
3. Understanding networking considerations
4. Choosing a server platform
Note
While StoreVirtual VSA also runs on previous generations of hardware and software, most of the following sections assume that your solution
with StoreVirtual VSA is on current, or at least more recent, server technology. HPE Storage recommends HPE ProLiant Gen8 servers or later for
best performance. Validate hardware intended for StoreVirtual VSA against the appropriate hypervisor compatibility matrix.
Generally, different applications have different performance profiles and different performance requirements, which greatly affect the StoreVirtual
VSA design. For example, an OLTP-based application typically has a small, random I/O profile (requests around 8 KB block sizes). A backup
application has large, sequential read operations (requests around 1 MB block sizes). From this wide spectrum, you can see that performance
requirements for any storage system vary widely, based on how that storage system is used.
It is easy to sum up the I/O and throughput requirements per application (see table 1). When looking at an existing storage system over a longer
time period, pay special attention to the average and the 95th percentile for throughput and IOPS, because storage sizing is typically done for
sustained performance and not for spikes. In some cases, the average I/O size can indicate the type of workload (small block size: random; large
block size: sequential). Consider adding some headroom to your I/O and throughput requirements (around 20 percent for future growth). Your
total will be a deciding factor when choosing storage technologies for your deployment.
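To make this step concrete, the following Python sketch sums per-application demand and applies the suggested 20 percent headroom. The workload names and per-instance figures are hypothetical placeholders, not measurements.

# Illustrative sizing helper: sum per-application demand, then add headroom.
# All workload figures below are hypothetical placeholders.
workloads = [
    # (name, instances, IOPS per instance, MB/s per instance)
    ("Virtual machines", 70, 25, 0.3),
    ("OLTP database", 2, 800, 5.0),
    ("File shares", 4, 50, 2.5),
]

HEADROOM = 1.20  # ~20 percent reserve for future growth, as suggested above

total_iops = sum(n * iops for _, n, iops, _ in workloads)
total_mbps = sum(n * mbps for _, n, _, mbps in workloads)

print(f"Sustained demand: {total_iops:,.0f} IOPS, {total_mbps:,.1f} MB/s")
print(f"Sizing target: {total_iops * HEADROOM:,.0f} IOPS, {total_mbps * HEADROOM:,.1f} MB/s")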
Note
When consolidating applications on a storage system, and especially when using server virtualization, all workloads land on the same back-end
storage with HPE StoreVirtual VSA. This means that a number of sequential workloads results in a random workload on the storage system
(sometimes referred to as the I/O blender).
Table 1. Example evaluation of performance characteristics in a branch office

Workload | # of instances | Total throughput
… | 15 | ~2 MB/s
… | 70 | ~20 MB/s
… | 30 | ~10 MB/s
… | 120 | ~120 MB/s
… | 1,200 | ~10 MB/s
… | 20 | ~10 MB/s
Total | 1,455 | ~172 MB/s
Capacity requirements
Comparable to gathering performance data in an existing environment, you should look at the capacity demand of your applications. For
example, include the typical size of a virtual machine in your environment, data and log disks for your databases, and sizes of your file shares
(see table 2).
During the assessment of storage capacity in your environment, you may find underutilized file systems. In these cases, the built-in thin
provisioning technology of StoreVirtual VSA can help maintain high capacity utilization in a future StoreVirtual environment. Allocating capacity
on the cluster only when needed reduces stranded capacity in file systems and maintains a high capacity utilization on StoreVirtual.
Table 2. Example evaluation of capacity requirements in a branch office

Workload | # of instances | Capacity per instance | Total capacity
… | 4 | 30 GB | 120 GB
… | 3 | 25 GB | 75 GB
… | 2 | 20 GB | 40 GB
… | 1 | 1,500 GB | 1,500 GB
… | 1 | 500 GB | 500 GB
… | 30 | 3 GB | 90 GB
Total | | | 2,325 GB
Availability requirements
All virtual storage appliances (VSAs) and software-defined storage offerings that virtualize server storage resources (that is, all VSAs, not just
StoreVirtual VSA) share one trait: by themselves, they lack true high-availability capabilities.
A virtual appliance must incorporate some form of high availability in the application itself, because there is no truly highly available hardware
underneath a VSA running on commodity servers. High availability must consist of a copy of the data or application running on another physical
piece of hardware to prevent downtime due to the loss of a single server. Ideally, this copy of the data or application runs synchronously to
protect against downtime or data loss. Anything less than synchronous replication of the data or application between servers is not considered
highly available; synchronous replication with no user intervention in the event of a server failure is a critically important requirement for
storage in virtualized server deployments.
Application data should be made highly available in your future StoreVirtual VSA deployment by protecting volumes with Network RAID, the
built-in high-availability technology of StoreVirtual VSA. You are strongly advised to protect the volumes holding virtual machines, databases, or file
shares with Network RAID to mitigate the effects of one StoreVirtual VSA becoming unavailable in the event of a server or component failure.
Network RAID requires additional capacity on your StoreVirtual Cluster to store another copy of your data, achieving high availability using
components that are not highly available.
Network RAID technology stripes and mirrors blocks of data (with the exception of a striping-only option) across all storage nodes in the cluster
to achieve high availability with StoreVirtual VSA on commodity hardware. This is a synchronous operation between two or more storage nodes
in the cluster. Maintaining multiple copies of your data allows volumes to stay online even after a storage node in the cluster goes offline. While it
might be surprising that synchronous replication is required for high availability, keep in mind that each physical server has several single points
of failure, from CPU, to backplane, to memory. Should any of those components fail, the server is down and its resources are not available to
applications. With Network RAID 10's synchronous replication, the data is still available via the synchronous copy (figure 4). Any solution using
virtual storage appliances or software to enable data services on top of servers requires this level of replication and data protection, especially in
environments where high levels of availability and data protection are required.
You can adjust Network RAID, a per volume setting, to meet data protection and replication requirements on the fly. Choose from a variety of
Network RAID levels. For most production environments, HPE recommends using Network RAID 10, which stores two copies of the data on the
cluster. With this setting, the volumes remain online if you lose any single node in the cluster. From a capacity standpoint, you would need to take
your capacity estimate based on your assessment of the applications and workloads above and multiply it by the number of copies you want. For
Network RAID 10, this is a factor of two, because you store two copies of the data on this volume. For more information on the other Network
RAID levels and their respective use cases, see the current HPE StoreVirtual Storage User Guide. This guide and all guides pertaining to
HPE StoreVirtual are located at the HPE StoreVirtual 4000 Storage Support Center.
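As a quick worked sketch of the capacity math, assuming Network RAID 10 and the ~2,325 GB usable requirement from table 2:

# Sketch: raw cluster capacity needed for a usable capacity requirement.
usable_tb = 2.325        # usable requirement from the capacity assessment
network_raid_copies = 2  # Network RAID 10 stores two copies of the data

cluster_tb = usable_tb * network_raid_copies
print(f"StoreVirtual Cluster capacity needed: {cluster_tb:.2f} TB")  # ~4.65 TB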
Note
HPE recommends Network RAID 10 for most environments with high availability requirements. Network RAID 0 (stripe, no mirroring) does not
provide any high availability and is not suitable for environments with availability requirements.
Additional remarks
Based on the performance, capacity, and availability requirements (see table 3), as well as StoreVirtual VSA licensing (4 TB, 10 TB, and 50 TB
licenses), you can estimate the number of storage nodes and their capacity. Keep in mind that StoreVirtual VSA clusters can grow over time,
either by adding storage to individual StoreVirtual VSA instances (the added capacity becomes available to the cluster once all storage nodes
have the same or more capacity) or by adding more StoreVirtual VSAs (with the same or higher capacity) to the cluster.
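For example, choosing the smallest suitable StoreVirtual VSA license tier per node can be sketched as follows (license tiers from the text above; the capacity value is a hypothetical example):

# Sketch: pick the smallest StoreVirtual VSA license tier for a node.
def license_tier_tb(node_capacity_tb, tiers=(4, 10, 50)):
    """Return the smallest license tier (TB) covering the node capacity."""
    for tier in tiers:
        if node_capacity_tb <= tier:
            return tier
    raise ValueError("capacity exceeds the largest license tier")

print(license_tier_tb(2.4))  # -> 4, i.e., the 4 TB license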
Table 3. Example of summary of requirements for branch office (columns: Workload, # of instances, Total throughput, Capacity, Performance)
Note that the StoreVirtual VSAs should not be sized for just capacity, but for performance and availability as well. For example, if a database
needs 4 TB of capacity, it is easy to achieve with two 2 TB SATA drives configured as a RAID 0 set. While that might meet the capacity
requirements, it is very likely that this configuration will not meet performance and availability requirements. As a general rule of thumb,
reaching a capacity point based on 7,200 rpm SATA or mid-line SAS drives (sometimes also called nearline SAS) will not provide the required
performance, whereas reaching a capacity point on 15K SAS drives generally provides the required performance. For example, if 20 TB of
capacity is required, it can be achieved with five 4 TB drives (not factoring in RAID overhead), providing about 400 IOPS of performance.
On the other hand, it could also be achieved with ~30 600 GB SAS drives, with a total of ~5,900 IOPS. Do not size only for capacity; consider
performance also.
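The arithmetic behind this example can be sketched as follows; the per-drive IOPS figures are rough planning assumptions derived from the numbers above, not vendor specifications.

# Sketch: size a drive count for both capacity and IOPS, never capacity alone.
from math import ceil

def drives_needed(capacity_tb, iops, drive_tb, drive_iops):
    """Drive count that satisfies capacity AND performance requirements."""
    by_capacity = ceil(capacity_tb / drive_tb)
    by_iops = ceil(iops / drive_iops)
    return max(by_capacity, by_iops)

# 20 TB on 4 TB MDL SAS drives (~80 IOPS each): 5 drives meet the capacity,
# but deliver only ~400 IOPS; ~74 drives would be needed for ~5,900 IOPS.
print(drives_needed(20, 5900, 4.0, 80))
# 20 TB on 600 GB 15K SAS drives (~200 IOPS each): ~34 drives, ~6,800 IOPS.
print(drives_needed(20, 5900, 0.6, 200))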
Storage technologies
A huge variety of storage options is available on most server platforms, and each technology has its respective use case. Without
oversimplifying, three types of drive options make sense in combination with StoreVirtual VSA.
Solid-state drives
Solid-state drives (SSDs), based on flash technology, provide extremely high performance with very low latencies. SSDs are ideal for
applications that are I/O intensive and latency sensitive, such as OLTP databases or some datastores in a VDI implementation. While SSDs
offer tremendous performance, they offer lower capacities than SAS or mid-line SAS drives and are usually used for specific applications
rather than as general-purpose storage. Despite their higher cost per gigabyte, SSDs can provide the best $ per I/O (cost for performance) value.
SAS drives
Drives based on SAS technology offer higher rotational speeds, low latency, and the ability to queue commands more effectively than mid-line
SAS, which directly leads to higher performance in workloads with random I/O. This makes SAS optimal for workloads such as server
virtualization, email servers, OLTP, and other random database type applications.
MDL SAS drives
Higher storage density per dollar and footprint make mid-line SAS especially good for workloads such as repositories, file shares, and as a staging
area to back up data to tape. Performance-sensitive applications such as email and OLTP are better suited to SAS drives; performance may not
be satisfactory if those applications run on mid-line SAS drives.
When choosing the drives for StoreVirtual VSA deployment, the type of workload should be the major consideration. Table 4 summarizes the
drive option characteristics.
Table 4. Typical hard drive options in StoreVirtual VSA deployments

 | SSD | SAS | MDL SAS
Rotational speed | N/A | 10K or 15K | 7.2K
Form factor | 2.5'' | 2.5'' or 3.5'' | 3.5''
Average latency | 200 µs | 3.4 ms | 11 ms
Maximum I/O¹ | 30,000 | 230 | 100
Capacities | Up to 1 TB | Up to 1.2 TB | Up to 6 TB
$/GB | $$$ | $$ | $
$ per I/O | $ | $$ | N/A
Reliability | High | High | Medium
Workloads | OLTP databases, transactional workloads | OLTP databases, server virtualization | File shares, archives, DR sites

¹ Multiple configuration parameters affect the performance obtained in customer configurations. HPE recommends measuring performance with the workloads
planned for the deployment.
StoreVirtual VSA can virtualize other storage options, such as PCIe-based flash (i.e., HPE I/O Accelerator Cards), as long as they are supported by
the server model and hypervisor.
When considering these storage options, keep in mind that StoreVirtual VSA supports two storage tiers, based on different storage technologies,
with Adaptive Optimization (available with 10 TB and 50 TB licenses). See the Adaptive Optimization section for more information on this
technology and how it can be used.
Disk RAID options
After selecting one or two drive types, decide on your RAID configuration. Use a dedicated RAID controller card such as the HPE Smart Array
Controller to configure the disk RAID. Disk RAID provides a level of protection and availability, should one or more drives fail. This allows the
server and StoreVirtual VSA on that server to continue running in the event of a drive going offline for any reason. Each type of disk RAID has its
advantages and disadvantages.
Note
Because StoreVirtual VSA relies on hardware RAID to protect its data disks, this paper ignores JBOD or RAID 0 as impractical. A drive failure in
these configurations requires a rebuild of the entire StoreVirtual VSA instance from other storage systems in the StoreVirtual Cluster when using
Network RAID.
The more common types of RAID options available on common RAID controller cards include:
RAID 5: RAID 5 stripes data across drives, with parity blocks distributed across all the drives in the RAID set. RAID 5 offers some level of
protection because any single drive in the set can fail and data is still available. Usable capacity is very good because RAID 5 consumes only a single
drive's capacity for parity; in a 10-drive RAID set, that is only 10 percent of the capacity. Read performance increases with the number of disks;
writes, however, require recalculation of the parity block, so write speed depends on the RAID controller type and its firmware.
RAID 6: RAID 6 is similar to RAID 5 in that it uses distributed parity to protect against downtime. However, RAID 6 uses two parity blocks,
consuming two drives' worth of capacity. Because of the parity calculations, RAID 6 is the slowest performing of the listed RAID sets, but it offers
greater availability than RAID 5 because it can lose any two drives in the RAID set and stay online. Storage efficiency is similar to RAID 5, except that
RAID 6 consumes two drives' capacity for parity. HPE strongly recommends RAID 6 when using drives 3 TB or larger in capacity.
RAID 10: RAID 10 (often called RAID 1+0) is a RAID set comprised of multiple mirrors (RAID 1) that are then striped (RAID 0). Each drive
mirrors its data to another drive, so there is a very high level of availability; a number of drives can fail before
data becomes unavailable, as long as no two failed drives belong to the same mirrored pair. Read performance is excellent because all drives are used for
reads. Write performance is also very good because half of the drives are used for writes while the other half mirrors those writes. Usable capacity
is 50 percent, which makes RAID 10 the preferred RAID level only for situations where availability and/or performance are the driving factors. A
RAID 10 array with two drives is often referred to as RAID 1.
Other nested RAID: Other commonly used nested RAID levels, such as RAID 50 and RAID 60, concatenate two or more RAID sets to
create a larger storage pool. RAID 50, for instance, stripes across two or more RAID 5 sets. RAID 5+0 has storage efficiency and reliability
similar to a RAID 5 set; performance increases because more drives in the RAID set are serving I/O.
Modeling of availability and performance testing has shown that in most environments RAID 5 and RAID 50 can offer the best combination of
performance, cost, and availability. See table 5 for a summarized overview of popular disk RAID options.
Table 5. Overview of popular disk RAID options

 | RAID 10 | RAID 5 | RAID 6
Alternative name | Mirroring | Striping with parity | Striping with double parity
Minimum # of drives | 2 | 3 | 4
Useable # of drives | N/2 | N-1 | N-2
Capacity efficiency | 50% | 67% to 93% | 50% to 96%
Parity | No | Yes | Yes
Availability | High | Medium | High
Read performance | High | High | High
Write performance | Medium | Low | Low
Relative cost | High | Medium | Low
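The usable-capacity rules in table 5 translate directly into a small helper; this is a sketch that ignores hot spares and controller-specific overhead.

# Sketch: usable drive count of an n-drive RAID set, per table 5.
def usable_drives(n, raid_level, raid5_groups=2):
    if raid_level == 10:
        return n // 2          # half the drives hold mirror copies
    if raid_level == 5:
        return n - 1           # one drive's capacity consumed by parity
    if raid_level == 6:
        return n - 2           # two drives' capacity consumed by parity
    if raid_level == 50:
        return n - raid5_groups  # one parity drive per RAID 5 group
    raise ValueError("unsupported RAID level")

for level in (10, 5, 6, 50):
    u = usable_drives(10, level)
    print(f"RAID {level}: {u} of 10 drives usable ({u / 10:.0%} efficiency)")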
For optimal performance, HPE strongly recommends that you validate that the storage configuration is the same on all servers designated to run
HPE StoreVirtual VSA. Confirm that the number of spindles, disk technology, disk size, and RAID type are the same. As an example, if eight drives
are part of a RAID 5 array with 256 KB stripe size (ideal value for HPE StoreVirtual VSA and HPE Smart Array Controllers) and one logical drive
on server one, this configuration should be replicated to the other servers in the environment running StoreVirtual VSAs. The resulting cluster is
symmetric or balanced with predictable storage characteristics for reads and writes.
Note
HPE Storage sees a high adoption of RAID 5 for SSD and SAS drives with StoreVirtual VSA. RAID 6 is recommended to increase the reliability of
MDL SAS and enterprise SATA drives.
Adaptive Optimization
LeftHand OS 11.0 introduces simple, smart sub-volume auto-tiering with Adaptive Optimization in the StoreVirtual VSA. It allows two types
of storage tiers to be used in a single StoreVirtual VSA instance, moving more frequently accessed blocks to the faster storage tier (for instance
SSDs), and keeping the infrequently accessed blocks on a tier with lower performance and potentially lower cost (for example SAS drives).
In contrast to caching solutions, the capacity of storage tiers used by Adaptive Optimization is usable capacity. This allows cost savings over
implementing an all-flash solution, while boosting performance with a percentage of faster storage for frequently accessed blocks of data.
Adaptive Optimization operates on a LeftHand OS page level (256 KB), making the technology very granular. It works transparently for
applications accessing the storage pool.
Adaptive Optimization is especially effective for workloads that have concentrated areas of frequently accessed pages, or hot data.
For example, certain database workloads might have tables that are accessed or updated frequently, creating a demand for more performance. A
virtualized desktop environment might have some datasets that have high I/O requirements during boot storms but otherwise have a moderate
performance workload during normal operation. Adaptive Optimization in the StoreVirtual VSA responds to these performance demands in near
real time, moving the blocks of data with the highest performance requirements to the faster tier to provide the performance needed, while
keeping other pages on the lower tier, to lower the cost of the solution.
When designing a solution based on StoreVirtual VSA with Adaptive Optimization, it is ideal to understand the percentage of frequently accessed
data that would be on the faster tier, compared to the rest of the data. If it is not possible to determine the ratio of hot to cold data in your
environment, a good rule of thumb is to work with 10 percent of the usable capacity of the StoreVirtual VSA (and as a result, that of the
StoreVirtual Cluster) to be faster storage (called Tier 0). The remainder should be the base performance tier (Tier 1).
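Under this 10 percent rule of thumb, a node's usable capacity splits as in this short sketch (the capacity value is a hypothetical example):

# Sketch: split usable node capacity into tiers per the 10 percent rule.
usable_capacity_tb = 10.0
hot_data_ratio = 0.10  # rule of thumb when the hot/cold ratio is unknown

tier0_tb = usable_capacity_tb * hot_data_ratio  # fast tier (e.g., SSD)
tier1_tb = usable_capacity_tb - tier0_tb        # base tier (e.g., SAS)
print(f"Tier 0: {tier0_tb:.1f} TB, Tier 1: {tier1_tb:.1f} TB")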
By default, StoreVirtual configures all volumes as Adaptive Optimization permitted. This allows the volumes to benefit from auto-tiering if the
cluster is Adaptive Optimization capable, as soon as the StoreVirtual nodes have two tiers configured. It is also possible to exclude volumes
from using the faster storage tier entirely (Adaptive Optimization not permitted); this makes sense for volumes that either do not have
concentrated hot data or should not make use of the faster tier because of their workload characteristics (examples include file shares or archives).
Adaptive Optimization allows the administrator to define two different tiers of storage based on their requirements. Tier 0 storage could be SSD
or SAS, while Tier 1 could be SAS or MDL SAS. However, the selection may not be based only on disk technology: a user could decide that Tier 0
could be SAS disks in RAID 10 for performance and Tier 1 could be SAS disks in RAID 5 for better capacity utilization. It is important to
understand the performance characteristics of different drive configurations for different tiers to know how the system will perform.
SSD provides excellent performance, and mid-line SAS provides excellent cost per GB. However, configuring SSDs as Tier 0 and mid-line SAS as
Tier 1 might not be an ideal solution because of the performance delta between the two tiers. In this example, data served out of the SSD tier will
be very fast, but should data be accessed from the mid-line SAS tier, it will be much slower until it moves to the SSD tier. A better combination of
drives would be SSDs as Tier 0 and SAS disks as Tier 1, so Tier 1 has sufficient performance to avoid negative effects on the entire application.
More details and sizing advice for Adaptive Optimization on HPE StoreVirtual VSA are available in the Adaptive Optimization for HPE StoreVirtual
white paper.
Note
Use Adaptive Optimization on all volumes unless you do not want a volume to use Tier 0 (fastest tier) storage. It works best on volumes that
have concentrated areas of frequently accessed blocks.
Manage Adaptive Optimization (AO) on an individual volume basis (figure 5), where sub-volume data is either permitted to migrate to the faster
tier or is prevented from moving blocks to the faster tier.
Note
Whenever possible, configure flash or battery-backed cache on the RAID controller to benefit from cache and avoid lost I/O. However, disable any
caching options on disk drives used by the StoreVirtual VSA because cache on the disks themselves is not protected during a power outage.
HPE SmartCache for direct-attached storage
HPE SmartCache is a controller-based caching solution in a DAS environment that caches the most frequently accessed data (hot data) onto
low-latency, high-performing SSDs to accelerate application workloads dynamically. HPE SmartCache is available on HPE Smart Array P-Series
controllers found in HPE ProLiant Gen8 and Gen9 servers. HPE SmartCache consists of firmware that provides the caching feature within the
Smart Array Controllers and requires a license key for activation.
The direct-attached HPE SmartCache solution includes the three elements of the HPE SmartCache architecture: HDDs serving as bulk storage,
SSDs as accelerator, and flash-backed write cache (FBWC) memory as metadata store (figure 6). For this implementation, the SmartCache control
layer resides in the firmware of the on-board Smart Array Controller of the HPE ProLiant Gen8 Server, below the operating system and driver.
This allows caching for devices connected to a single array controller.
HPE SmartCache offers flexibility in creating logical disk volumes from hard disk drives:
The accelerator or cache volume design supports any RAID configuration supported by the Smart Array Controller.
Each logical disk volume can have its own cache volume or none at all.
Create and assign cache volumes dynamically without adversely influencing applications running on the server.
Only SSDs can be used for cache volumes, and a cache volume may be assigned only to a single logical disk volume.
The HPE SmartCache solution consumes a portion of the FBWC memory module on the Smart Array Controller for metadata. To ensure
sufficient space for the accelerator and its metadata, we recommend the following (see the sketch after this list):
1, 2, or 4 GB of FBWC memory
1 GB of metadata space for every terabyte of accelerator space
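The metadata guideline above can be checked with a short sketch; the FBWC and accelerator sizes are hypothetical examples.

# Sketch: FBWC budget check for HPE SmartCache (~1 GB metadata per TB of
# accelerator space, per the guideline above).
fbwc_gb = 2.0          # FBWC module size (1, 2, or 4 GB options)
accelerator_tb = 1.6   # total SSD accelerator capacity

metadata_gb = accelerator_tb * 1.0       # ~1 GB of metadata per TB
legacy_cache_gb = fbwc_gb - metadata_gb  # remainder serves as legacy cache
print(f"Metadata: {metadata_gb:.1f} GB, legacy cache: {legacy_cache_gb:.1f} GB")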
When using the HPE SmartCache solution for direct-attached storage, legacy cache is still present and operational and uses the remaining space
in the FBWC memory. When using HPE SmartCache, we recommend setting the legacy cache for 100 percent write operation. This allows
write-back support on the Smart Array Controller, which accelerates write performance. This also allows SSDs configured as HPE SmartCache
volumes to provide a much larger read cache.
Figure 6. HPE Smart Array Controller with SmartCache: SSDs provide caching for a logical disk on SAS/MDL SAS drives
HPE SmartCache operates transparently, without any dependencies on vSphere, Hyper-V, or StoreVirtual VSA. After enabling SmartCache for a
logical drive, nothing more needs to be done from a management or configuration side to benefit from an SSD cache performance boost.
The initial release of the HPE SmartCache solution supports write-through caching. When an application writes data to the disk, the Smart Array
Controller writes the data to the HDDs. If the data is also in the cache volume, the data is written to the SSD.
Note
When deploying two distinct storage tiers with dissimilar performance characteristics, HPE recommends using StoreVirtual VSA Adaptive
Optimization.
Consider using HPE SmartCache for StoreVirtual VSA deployments without Adaptive Optimization or for solutions deployed with SATA/MDL SAS
drives. Overall performance of a SATA- or MDL-based storage solution should benefit from the high-performance, low-latency SSD cache
provided by HPE SmartCache. Combining Adaptive Optimization and SmartCache is not useful.
Disk layout
One of the key best practices when setting up a StoreVirtual environment is to make all nodes in a cluster the same whenever possible.
Because each storage node in a cluster actively participates in serving I/O and holds an equal share of the data, consistent performance
between nodes allows for linear scalability and predictable performance. With the same host configuration for all StoreVirtual VSAs in the
cluster, the performance of all StoreVirtual VSAs will be equal and the performance of the cluster will be balanced across all nodes.
After you have selected the drive types and RAID levels for capacity and performance goals, the disk must be presented to the StoreVirtual VSA.
Appliances in the HPE StoreVirtual 4000 Storage family run LeftHand OS natively with direct communication paths to the underlying hardware,
including RAID controller and drives. The LeftHand OS instance understands the type and quantity of disk drives, and can query the hardware
directly for parameters such as CPU temperature, hardware component status, etc. With the StoreVirtual VSA, the hardware layer is completely
abstracted by the hypervisor. This becomes apparent as the disk is presented to the StoreVirtual VSA.
The individual drives must be configured via the RAID controller card's configuration utility (figure 7). With an HPE Smart Array RAID controller,
this is done via the Array Configuration Utility (ACU). ACU runs under Windows and is part of Intelligent Provisioning on HPE ProLiant servers.
It is used to configure arrays, logical disks, cache settings, and other parameters on HPE Smart Array RAID Controllers. Other vendors provide
similar management tools; consult your server manufacturer for more information.
Figure 7. Array Configuration Utility for HPE Smart Array RAID Controllers
Typically, logical disks used by the StoreVirtual VSA are on an isolated set of disks to avoid workload contention. However, if the spindles are
shared only with a light workload, such as logging by the hypervisor, the StoreVirtual VSA and its data disks can be co-located with the
hypervisor on the same set of disks. In this case, all disks are combined in one array for more performance. Because most RAID controllers,
including HPE Smart Array RAID Controllers, allow creation of more than one logical disk on one array, it is possible to configure multiple logical
disks with different RAID levels on the same set of disks. This reduces stranded capacity while still achieving the performance and availability
goals for StoreVirtual VSA data disks.
A typical example for a single-tier configuration is shown in figure 8, with RAID 6 for the hypervisor (Host OS) and RAID 5 for the StoreVirtual
VSA data disks, which hold the user data.
Figure 8. Typical RAID configuration on HPE Smart Array with one tier of storage: one array with a RAID 6 logical disk for the host OS and a RAID 5 logical disk for the StoreVirtual VSA data
For configurations that use two storage tiers, the hypervisor and the StoreVirtual VSA are placed on the lower storage tier (SAS drives in an
SAS/SSD configuration). The resulting RAID configuration typically comprises two arrays and a total of three logical drives. Two of the logical
disks are used for the data disks of the StoreVirtual VSA, and one is used for the hypervisor. In the example in figure 9, the second array with
SSDs has one logical disk with RAID 5 protection.
Figure 9. Typical RAID configuration on HPE Smart Array with two tiers of storage for use with Adaptive Optimization (Array 1: Tier 1 on SAS; Array 2: Tier 0 on SSD)
StoreVirtual VSA can consume and virtualize storage resources using two methods available in the hypervisor:
1. Virtual disk (VMDK or VHD/VHDX) on a file system (VMFS or NTFS)
2. Physical disk (raw device mapping in Physical compatibility mode or physical disk)
In concept, this is similar to how StoreVirtual 4000 Storage uses the hard drives to virtualize the disk resources and make their capacity and
performance available to the StoreVirtual Cluster.
While there is only a negligible performance difference between these two options, the amount of storage that can be presented to the
StoreVirtual VSA varies and may be an important consideration. Whether the StoreVirtual VSA base image (disk that holds LeftHand OS) and
data disks (disks that contain the data of volumes, snapshots, and SmartClones that are created on the StoreVirtual Cluster) are placed on the
same logical disks depends on the use of Adaptive Optimization and the amount of storage capacity that should be virtualized by the
StoreVirtual VSA.
To configure larger StoreVirtual VSAs in environments using a current version of vSphere or Hyper-V, it is possible to virtualize up to 50 TB of
capacity in one virtual disk and present it to the StoreVirtual VSA. That virtual disk can be configured and placed on the same logical disk as the
StoreVirtual VSA (see the first option in figure 10).
Figure 10. Two options: collocation of StoreVirtual VSA with its data disk (virtual disks) or with the hypervisor (virtual or physical disk)
In previous versions of vSphere and Hyper-V, which do not support virtual disks larger than 2 TB, it may not be possible to present all the
capacity using the seven slots in the StoreVirtual VSA (SCSI 1:0 to SCSI 1:6). When virtualizing larger storage resources and wanting to preserve
the ability to add more storage to the StoreVirtual VSA in the future, HPE recommends collocating the StoreVirtual VSA with the hypervisor on
the same logical disk. The StoreVirtual VSA can then use logical disks as data disks that are attached to it using a physical disk (see second
option in figure 10).
When using JBODs for additional external storage, the disks in the external enclosures are configured in one or more separate arrays and logical
disks via a RAID controller. Especially in combination with internal disks, a JBOD can be the perfect addition for environments that require more
capacity than possible with internal drives.
See the vSphere and Hyper-V sections for more information on presenting storage to the StoreVirtual VSA.
External storage arrays
With StoreVirtual VSA, it is also possible to virtualize portions of an external disk array. Even though the StoreVirtual VSA's primary use case is to
turn direct-attached storage into shared, highly available storage, there are use cases where turning a third-party array into StoreVirtual Storage
makes sense. For instance, using the StoreVirtual VSA on top of an array can turn the array into a Remote Copy target in remote/branch office
deployments. The virtualized array must be listed as supported block storage (iSCSI/FC/SAS) on the VMware Hardware Compatibility List or
the Microsoft Certified Products List.
Note
HPE recommends isolating the network segment used for StoreVirtual VSA and iSCSI from other networks by implementing a separate VLAN.
Figure 11. Host with four network interfaces: two virtual switches share access to the dedicated network segment for iSCSI
Figure 12. Host with three virtual switches: the host shares access to the dedicated network segment for iSCSI
Note
On the hypervisor host, HPE recommends you follow best practices around network aggregation and network load balancing. Network bonding
of the virtual network adapters in the StoreVirtual VSA is not supported.
For solutions based on Hyper-V, it may be necessary to add more network connectivity to accommodate iSCSI connectivity for the host, guests, and the
StoreVirtual VSA itself. Per best practices, it is recommended to use individual network interfaces with multi-path I/O to connect to iSCSI targets,
instead of teaming network interfaces. Even though no problems have been reported for solutions using teamed adapters, these configurations have
not been fully qualified by either Microsoft or HPE. For more details, see the Hyper-V section in this document.
Switching infrastructure
As with solutions based on HPE StoreVirtual 4000 Storage, there are no StoreVirtual VSA certified switches or networking solutions. However,
there is general guidance around the capabilities and specifications of network switches used for StoreVirtual VSA solutions. In all cases, the key
design criteria for networks should address two areas:
Performance and low latency
The switches should have a non-blocking backplane, more than 256 KB of port buffer per port used for iSCSI, and wire-speed throughput.
Reliability and high availability
The network should be built out as a resilient and meshed network fabric.
Depending on the availability requirements for the StoreVirtual VSA solution, HPE recommends using two redundant switches at the core of the
StoreVirtual VSA network. To address the two design criteria above, HPE recommends selecting switches that support the following features:
Link aggregation
Aggregating multiple physical links to one logical interface increases available bandwidth and resiliency for the link between two devices,
e.g., combining two 1 Gb Ethernet links. While there are a variety of technologies available, a commonly used standardized protocol is the
Link Aggregation Control Protocol (LACP, IEEE 802.1AX or previously IEEE 802.3ad) between switches.
Ethernet flow control
To prevent dropped packets, enable flow control on all switch ports that handle StoreVirtual and iSCSI traffic. Flow control is a temporary means of
congestion management that generates and honors so-called PAUSE frames, as defined in the IEEE 802.3x standard; it is not a resolution
for switch or network oversubscription. Flow control briefly uses the buffers of the sending device to relieve the receiving switch during
periods of congestion.
Loop detection
Building meshed Ethernet networks typically requires the use of some form of loop detection mechanism to avoid packet storms. To avoid
negative side effects of improperly or not-yet configured switches and other devices, HPE recommends implementing a loop detection and
prevention protocol that reacts quickly to changes in the network topology, such as the Rapid Spanning Tree Protocol (IEEE 802.1w). Because
there are variants of the Spanning Tree Protocol and other loop detection protocols available, implement a protocol that works across the
entire StoreVirtual deployment.
Virtual LANs (VLANs)
Use VLANs to create multiple network segments on the same physical switch, separating one switch into multiple broadcast domains. In
StoreVirtual deployments, this can be used to separate StoreVirtual and iSCSI traffic from other services on the network. Note that StoreVirtual
VSA does not natively support VLAN tagging. When using VLANs, make sure that the physical network adapters and the virtual switches on
the hypervisor hosts are properly configured and present only the iSCSI network to the StoreVirtual VSA.
Jumbo frames (optional)
When using jumbo frames for the StoreVirtual and iSCSI network, all switch ports, server ports, and virtual switches on the network segment
must be configured with the same frame size, also referred to as the maximum transmission unit (MTU). Typical examples
are 4,000 or 9,000 bytes (some devices have frame size limitations). If the ports are not properly configured, the switches or devices might
discard frames, causing connectivity and performance issues that are difficult to troubleshoot (see the consistency-check sketch after this list).
Routing (optional)
In environments where a group of StoreVirtual VSAs is used to replicate to another StoreVirtual installation (remote office replicating to a
data center or vice versa) or if the StoreVirtual VSA needs to communicate with other services that are not on the same network or subnet,
routing can bridge the gap between these separate networks. Even though routing is often taken care of by dedicated equipment, some
switches offer dynamic and static routing capabilities to facilitate the communication between network segments and networks.
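The frame-size consistency rule from the jumbo frames item above can be illustrated with a simple check; the port names and MTU values are hypothetical.

# Sketch: verify that every port on the iSCSI segment uses the same MTU.
ports = {
    "switch1-port1": 9000,
    "switch2-port1": 9000,
    "vswitch-iscsi": 9000,
    "vmnic2": 1500,  # a misconfigured port that would cause dropped frames
}

if len(set(ports.values())) > 1:
    print("MTU mismatch detected:", ports)
else:
    print("All ports share the same MTU")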
Note
Not all of the above listed features need to be supported or implemented on the switches in your solution. However, HPE recommends
implementing link aggregation, Ethernet flow control, loop detection and prevention, and VLANs in most StoreVirtual VSA environments.
For more detailed information on StoreVirtual network best practices, consult the HPE StoreVirtual Storage network design considerations and
best practices guide, which also lists some recommended switches from HPE Networking.
Multi-site configurations
A single logical cluster can span different physical locations, as shown in figure 13. With Network RAID 10, two copies of the data of each
volume are stored in the cluster. If the cluster is physically split between two sites such that each site has a complete copy of the data, the
storage is site protected: a complete site or location could go offline for any reason, and a complete, live copy of the data on the StoreVirtual
VSA would still be accessible by hosts and applications. When combined with hypervisor features such as VMware's High Availability, vSphere Metro
Storage Cluster, or Windows Failover Cluster for Hyper-V VMs, this is a powerful and highly available solution.
High-availability features require shared storage so that the hypervisor servers can access the virtual machines. If the storage (and therefore the
virtual machines) is unavailable, these features cannot work as designed. With the multi-site cluster configuration of the StoreVirtual VSA, one set
of storage and physical servers can be at one location and another set at a second location. Should either
site go down, the storage remains online with no user intervention, and the hypervisor can access the virtual machines' files, pause briefly, and
then resume the work of the virtual machines.
When planning a multi-site implementation, pay attention to the networking requirements that help ensure performance and
availability meet expectations. At a high level, you should plan for 50 MB/s of bandwidth for each storage system per site when using 1 Gb
Ethernet. For instance, if each site contains five storage systems, then you need 250 MB/s of bandwidth between the sites.
In this case, it translates into two Gigabit Ethernet links or more. In environments with 10 Gb Ethernet, plan for 250 MB/s for each storage node
per site. Network links should have low latency (2 ms or less is recommended). For more details around planning and implementing multi-site
configurations, refer to the HPE StoreVirtual Storage multi-site configuration guide.
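The bandwidth planning above reduces to a simple calculation, assuming ~125 MB/s of usable payload per 1 Gb Ethernet link:

# Sketch: inter-site bandwidth planning for a multi-site StoreVirtual Cluster.
from math import ceil

nodes_per_site = 5
mbps_per_node = 50    # plan ~50 MB/s per storage system on 1 GbE
gbe_link_mbps = 125   # ~125 MB/s usable payload per 1 GbE link (assumption)

required = nodes_per_site * mbps_per_node  # 250 MB/s
links = ceil(required / gbe_link_mbps)     # two 1 GbE links or more
print(f"Inter-site bandwidth: {required} MB/s -> {links} x 1 GbE links")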
Figure 13. Multi-site configuration: stretched StoreVirtual Cluster that matches a stretched compute cluster configuration (StoreVirtual cluster and hypervisor cluster spanning Site A and Site B)
Remote Copy
For remote and branch offices, StoreVirtual VSA can replicate to other StoreVirtual VSAs or StoreVirtual 4000 Storage. StoreVirtual VSA
supports any transparent network technology as long as there is a TCP/IP link: VPNs over the Internet and WAN accelerators with compression
are fully supported.
Note
Servers and storage used with HPE StoreVirtual VSA must be fully supported by the hypervisor. To be sure that you are running a supported
configuration, check whether the server model is on the VMware Compatibility Guide or in the Microsoft Windows Server Catalog (see the links
listed in the Resources section).
External storage
Another driving factor when choosing the server platform is the option to connect external storage shelves as additional direct-attached storage.
For example, if performance and capacity requirements lead you to a mixed configuration of SAS and MDL SAS, you might use eight SFF SAS
drives in a 1U chassis and attach an additional disk shelf with 12 LFF MDL SAS drives to the server.
HPE offers two disk shelves with 12 Gb SAS connectivity, which are commonly found in larger StoreVirtual VSA installations.
HPE D3600 12 Gb SAS enclosures with up to 12 LFF drives
HPE D3700 12 Gb SAS enclosures with up to 25 SFF drives
The 12 Gb SAS connectivity of the HPE D3000 Enclosures doubles the data transfer rate of 6 Gb solutions, providing crucial bandwidth. These
disk shelves can be populated with SSD, SAS, MDL SAS, and Enterprise SATA drives to match the performance and availability requirements
of your StoreVirtual solution.
Choosing a server platform
The server and its RAID controller primarily need to support the defined number of drives. The HPE ProLiant server portfolio offers a variety of
storage controllers and form factors to support your network and compute requirements.
Figure 14 shows the HPE ProLiant server family, from high-density rack-mount servers (DL series), to towers (ML series), to blades (BL series).
For most StoreVirtual VSA installations, the HPE ProLiant Gen9 rack portfolio, with flexible choices and versatile design, is the ideal choice
because of its options for CPUs, memory, network, and storage. Testing for this paper was done on Gen8; however, the solution is fully
supported on ProLiant Gen9 servers.
HPE ProLiant DL360p Gen8 with support for 2 Intel Xeon CPUs, up to 768 GB of memory, and up to 10 SFF or 4 LFF drives
HPE ProLiant DL380p Gen8 with support for 2 Intel Xeon CPUs, up to 768 GB of memory, and up to 25 SFF or 12 LFF drives
Note
When configuring CPU and memory, refer to your server manufacturer's documentation on optimal configurations to achieve best performance.
For both models, the on-board HPE Smart Array Controller supports all internal drive options offered with this server. An additional HPE Smart
Array RAID Controller is required to add external storage shelves.
System resources
For StoreVirtual VSAs running on LeftHand OS 12.0 or later, HPE recommends reserving two virtual CPUs at 2,000 MHz and an amount of
virtual machine memory based on the server storage capacity dedicated to StoreVirtual VSA (see table 6). This is especially important
for planning purposes; the installers for HPE StoreVirtual VSA automatically configure the amount of virtual memory according to table 6.
Table 6. Virtual memory requirements for StoreVirtual VSA

StoreVirtual VSA capacity (total of all storage devices) | Memory (GB) | Memory with Adaptive Optimization (GB)
<= 1 TB | … | …
1 - <= 4 TB | … | …
4 - <= 10 TB | … | …
10 - <= 20 TB | … | 12
20 - <= 30 TB | 12 | 17
30 - <= 40 TB | 15 | 21
40 - <= 50 TB | 18 | 26
Refer to the HPE StoreVirtual Storage VSA Installation and Configuration Guide to determine the most recent hardware requirements for the
version of StoreVirtual VSA that you are going to deploy. Adding more virtual machine resources than specified in the HPE StoreVirtual Storage
VSA Installation and Configuration Guide will not increase performance; an exception to this rule is deployments of StoreVirtual VSA with large
flash tiers (or all flash). Adding up to four vCPUs in these environments helps to deliver more I/O per second from these tiers.
Hardware monitoring
When using StoreVirtual VSA, HPE recommends using the hardware monitoring capabilities provided by the server vendor. In the event of
hardware issues, especially disk failures, the server monitoring software notifies the administrator about failure and pre-failure conditions. For
HPE ProLiant servers, agents for Microsoft Windows and VMware vSphere enable active monitoring; in the custom HPE image of the VMware
vSphere Hypervisor (VMware ESXi Server), these agents come preinstalled. HPE ProLiant servers also come with HPE Systems Insight Manager,
which allows monitoring of the system components. Even more advanced is HPE Insight Remote Support, which can be configured to automatically
open support incidents in case of component failures.
For more information on the HPE System Management Homepage, visit h18013.www1.hp.com/products/servers/management/agents.
Note
Upgrades to server hardware or host operating systems are outside the scope of this document.
HPE StoreVirtual Multi-path Extension Module for vSphere
HPE StoreVirtual Multi-path Extension Module (MEM) for vSphere is a new feature with LeftHand OS 12.0. MEM provides and manages
additional iSCSI sessions between ESX hosts and the StoreVirtual VSA nodes, which may increase application performance. Figure 15 is an example of a
vSphere host running MEM with two iSCSI paths to a single StoreVirtual volume in a three-node StoreVirtual Cluster; this storage volume has
eight paths between host and cluster.
MEM is not installable via the CMC. The administrator installs the MEM component on the host using a vSphere VIB or via VMware
Update Manager, using the Offline Bundle available from the HPE StoreVirtual downloads page.
Review the section titled "Using StoreVirtual Multipathing for VMware" in the HPE StoreVirtual Storage Multipathing
Deployment Guide (p/n AX696-96377), located on the StoreVirtual product manual page.
Space Reclamation
LeftHand OS 12.0 delivers Space Reclamation, a new StoreVirtual Storage feature that recovers space released by vSphere 5 and later or
Windows Server 2012 and later. Space Reclamation uses the T10 SCSI UNMAP command issued by a storage host to signal a storage array
that a block on a volume is no longer needed, for example, after files are deleted by a user. HPE StoreVirtual reclaims the freed capacity, which, in
thin-provisioned environments, results in a reduction of provisioned space on the array and an increase in available storage capacity.
VMware disables UNMAP operations by default in vSphere 5.5 (KB 2014849). VMware provides an esxcli command that manually reclaims
unused storage blocks on a thin-provisioned VMFS datastore. The following VMware esxcli command initiates the UNMAP operation
(KB 2057513).
# esxcli storage vmfs unmap
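In vSphere 5.5, the command must identify the target datastore by volume label or UUID, and the reclaim unit size is optional. A short usage sketch with a placeholder datastore label:
# Reclaim unused blocks on a thin-provisioned datastore (label is a placeholder)
esxcli storage vmfs unmap -l Datastore01
# Optionally control how many VMFS blocks are reclaimed per iteration
esxcli storage vmfs unmap -l Datastore01 -n 200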
After a management group is running LeftHand OS 12.0, right-click on a management group in the CMC and select Enable Space Reclamation
to enable the feature. When the LeftHand OS enables the feature for the first time, it runs a memory size check on all storage systems in the
management group. If any of the storage systems have insufficient memory (e.g., StoreVirtual VSAs or certain HPE P4000 G2 systems), the
CMC informs you of the issue. If the CMC does not flag any issues, all the storage systems in the management group are properly equipped for
Space Reclamation. View Space Reclamation status from the CMC cluster details view shown in figure 16. Once enabled, Space Reclamation
cannot be disabled.
The HPE StoreVirtual Storage User Guide (p/n AX696-96359) contains a section on managing storage space using Space Reclamation;
its Ongoing Capacity Management section details the requirements and maintenance of Space Reclamation.
Increase storage capacity
Increase storage capacity by adding StoreVirtual systems to an existing cluster. Additional storage systems may also increase cluster
performance.
Licensing
StoreVirtual VSA functions during an evaluation period of 60 days without a license. During the 60-day evaluation period, StoreVirtual is fully
functional and all features are available for trial purposes. As the evaluation period progresses, alarm indications (example shown in figure 17) in
the CMC remind the administrator of the time remaining. Beginning with LeftHand OS 12.0, HPE StoreVirtual
enforces the licensing agreement, so at the expiration of the period specific StoreVirtual features become unavailable to the cluster.
Note
Volume data and snapshots become inaccessible to the cluster but are not deleted from the array. Licensing the cluster restores access to
volumes and snapshots.
Shut down applications and then back up application data before clusters become unavailable.
Figure 17. Example of registration alarm in CMC as 60-day trial VSA license period winds down
The StoreVirtual VSA licensing interaction with softwaresupport.hpe.com/contracts-licensing, HPE's software licensing portal for managing
entitlements, requires three pieces of information to complete the process for each installed VSA:
EON or certificate ID: sent to the customer after purchasing a key for each VSA.
Email address: HPE licensing requires a customer email address to help register new keys and to manage existing VSA keys and certificate
IDs. HPE distributes new license keys using this email address.
MAC address of VSA: VSA MAC addresses are generated by the hypervisor during VSA deployment. The administrator can also
manually assign MAC addresses to the VSA. Either way, the address is available for viewing from the CMC after VSA discovery.
The easiest way to install keys is through the CMC Import License function, which licenses all VSAs in a cluster at one time. Each license file
arrives as an attachment in a separate email. Archive the attachments for safekeeping and save copies of the attachments in a folder that is
accessible from the CMC. Use the minimum naming format when saving the copies used by the Import License function. Figure 18 shows a
context-sensitive help message for correcting an incorrectly formatted file name. The minimum file name consists of the MAC address in
uppercase hexadecimal characters, separated by periods into character pairs, followed by the file type dat. The additional lowercase x
characters shown in the help message represent optional free-form text, which is useful for identifying or managing key usage. File name
examples include:
00.15.5D.E2.91.06.dat
Clusterxyz00.15.5D.E2.91.06.dat
Cloud2_RowA_00.15.5D.E2.91.06.dat
Figure 18. Context-sensitive help example of incorrectly formatted license key file name
Note
HPE recommends using optional free form text to help identify and manage key usage.
The entitlement state of a given storage system is viewable through the CMC; it validates that the network MAC address (feature key) of the
VSA is correctly associated with a license key from softwaresupport.hpe.com/contracts-licensing and with the StoreVirtual feature set (figure 19).
ProLiant DL family: DL380p Gen8
CPU: Intel Xeon E5-2660 v2 (2.2 GHz/10-core/25 MB/95 W)
Memory: 160 GB
Storage media: HPE 200 GB 6G SATA Mainstream Endurance SFF SSD
Network interfaces: 4 x 10 Gb Ethernet ports
Random performance: 21,464 IOPS
Sequential performance: 1,830 MB/s
Hypervisor: VMware ESXi 5.5 U2
Note
Refer to the following white paper for a detailed review of StoreVirtual Adaptive Optimization:
h20195.www2.hpe.com/v2/getpdf.aspx/4aa4-9000enw.pdf.
See also the white paper SD boot: using Secure Digital (SD) card technology for booting HPE ProLiant servers.
Figure 20. Booting vSphere from an SD card to reserve all storage resources for StoreVirtual VSA (the SD card holds the storage used by the host; SSDs as Tier 0 and HDDs as Tier 1 are virtualized by StoreVirtual VSA)
As mentioned in the Disk layout section, the StoreVirtual VSA does not directly see the individual drives, LUNs, or volumes for external arrays.
Instead, the storage is first presented to the hypervisor, either through the RAID controller card or from the external storage array. Decisions
about disk RAID protection, number of drives, and type of drives are made and managed at a layer below the hypervisor. The LUNs or volumes
from the storage devices are then presented to vSphere for use by StoreVirtual VSA.
Presenting storage to StoreVirtual VSA
In vSphere, storage is presented in two ways: virtual machine disk (VMDK) or Raw Device Mapping (RDM) (see figure 21). VMDKs are files that
reside in a VMware Virtual Machine File System (VMFS) datastore; a storage resource must be formatted with VMFS before VMDK files can be
placed on it. An RDM, on the other hand, is essentially a pass-through of a raw disk from the storage controller to a VM, without an elaborate
abstraction layer.
Traditionally, a VMDK datastore has provided many advantages for virtual machines. VMDKs allow advanced functionality such as vMotion, DRS,
and virtual machine snapshots. VMDKs are limited in size to 2 TB per VMDK in VMware vSphere 5.1 and previous versions, but this limit
increases to 62 TB in vSphere 5.5.
Figure 21. Presenting storage to StoreVirtual VSA: a VMDK is a virtual hard disk on a VMFS datastore, while a pass-through disk (RDM) maps the raw, RAID-protected storage directly to the VM
StoreVirtual VSA VMs are not like traditional virtual machines, in that you do not use vMotion or DRS or take snapshots of the StoreVirtual VSA
and its data disks. As compared to other virtual machines running Linux, Windows, or other guest operating systems, the StoreVirtual VSAs are
very static. Because StoreVirtual VSA does not use the advanced features of vSphere, using RDMs is a viable option for the StoreVirtual VSA data
disks. There is no real performance difference between the two formats. However, RDMs allow larger storage allocations in versions of vSphere
earlier than 5.5.
The following best practices apply to VMFS datastores for the StoreVirtual VSA:
VMDKs are limited to 2 TB with vSphere 5.1, and only seven VMDKs can be presented to the StoreVirtual VSA. For total capacity higher than 14 TB
per StoreVirtual VSA, upgrade to vSphere 5.5 or use RDMs.
Regardless of whether you use VMDKs or RDMs, make sure each VMDK or RDM in a set is equal in capacity and configuration for each
StoreVirtual VSA to ensure predictable performance.
Note
Depending on the amount of storage presented to vSphere, and consequently to the StoreVirtual VSA as VMDK, it may be necessary to increase
the heap size in the host configuration for vSphere 5.1 and earlier (see VMware KB 1004424 for more detail).
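The heap size can be inspected and raised from the ESXi command line. A sketch using the VMFS3.MaxHeapSizeMB advanced option described in KB 1004424; the value shown is only an example, and a host reboot is required to apply it:
# Show the current VMFS heap size setting
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
# Raise the heap size to address larger amounts of open VMDK capacity
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256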
Networking configuration on vSphere
Enabling iSCSI connectivity via the iSCSI Software Initiator in vSphere requires specific network configuration. Starting with vSphere 4, you can
use multiple adapters with the iSCSI Software Initiator. Each physical network interface (vmnic) used for iSCSI has its own VMkernel port with exactly
one active adapter. These VMkernel instances are then bound to the iSCSI Software Initiator (vmhba), which uses all underlying network interfaces.
The ideal networking configuration for iSCSI depends on the number of Gigabit network connections available to a vSphere Server. The most
common configurations, four or six ports, are outlined here for reference. HPE does not recommend using fewer than four ports, because
adapters would need to be shared for iSCSI and other services. Figure 22 shows a vSphere screenshot of a four-port configuration.
vSphere servers with four Ethernet network ports can perform better by separating management and virtual machine traffic
from iSCSI and vMotion traffic.
In this configuration, typically two virtual switches are configured with two network interfaces each. If possible, use one port from two separate
network adapters. For example, if using two on-board network interfaces and a dual-port PCI adapter, use port 0 from the on-board interfaces and
port 0 from the adapter on one virtual switch; then use port 1 from the on-board interfaces and port 1 from the adapter on the other virtual switch.
This provides protection from some bus or card failures.
The vSphere servers with four 1 Gb Ethernet network ports should be configured as follows:
The first virtual switch should have:
A virtual machine network
A VMkernel port for management and vMotion
The second virtual switch should have:
A virtual machine network for StoreVirtual VSA and guests that need direct access to iSCSI
Two VMkernel instances for iSCSI (individually mapped to one vmnic interface) for iSCSI connectivity of the vSphere host
Note
Use at least two 1 Gb Ethernet network adapters for iSCSI connectivity for performance and failover.
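The iSCSI portion of the four-port configuration described above can also be built from the ESXi command line. The following is a minimal sketch under stated assumptions: the uplink names (vmnic1, vmnic3), port group names, IP addresses, and software iSCSI adapter name (vmhba33) are all placeholders for your environment.
# Create the iSCSI vSwitch with one port from each physical adapter
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3
# Create one VMkernel port per uplink
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-2
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface add -i vmk2 -p iSCSI-2
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.0.10.11 -N 255.255.255.0
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.10.12 -N 255.255.255.0
# Pin each VMkernel port to exactly one active uplink
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3
# Bind both VMkernel ports to the software iSCSI initiator
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2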
The vSphere servers with six 1 Gb Ethernet network interfaces are ideal for delivering performance with the iSCSI Software Initiator (figure 23).
The improvement over four ports is achieved by segmenting vMotion traffic and iSCSI traffic, so they do not have to share bandwidth. Both iSCSI
and vMotion will perform better in this environment. Figure 23 shows a screenshot of this configuration.
Figure 23. Typical configuration with six or more physical network interfaces
In this configuration, typically three virtual switches are configured with two network interfaces per virtual switch. If possible, one port from
separate Gigabit adapters should be used on each virtual switch to prevent some bus or card failures from affecting an entire virtual switch.
The first virtual switch should have:
A virtual machine network
A VMkernel port for management
Note
For best performance, VMkernel networks used for iSCSI should be separate from any other VMkernel networks (used for management, vMotion,
and Fault Tolerance) or networks for virtual machines.
Figure 24. vSphere Web Client with HPE OneView for VMware vCenter plug-in
Figure 25. HPE StoreVirtual VSA 2014 and StoreVirtual FOM Installer for VMware vSphere
After successful deployment using the installation wizard, the StoreVirtual VSA instance will be available on the designated network. Make sure
that the StoreVirtual VSA is listed as an available system in the HPE StoreVirtual Centralized Management Console.
When all StoreVirtual VSAs have been deployed on the network, the next step is to create a management group with the first cluster and to
provision volumes to hosts. HPE recommends protecting volumes on the cluster with Network RAID 10 for best performance and high
availability. For more information on working with management groups and clusters, refer to the chapters in the current version of the
HPE StoreVirtual Storage User Guide. Figure 26 shows three VSAs in a group labeled GrpESXVSA.
Using the network interfaces of the vSphere host, the iSCSI Software Initiator can now access the volumes on the StoreVirtual VSA cluster.
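Target discovery can likewise be scripted. A short sketch, where the adapter name and the cluster virtual IP address are placeholders:
# Point the software iSCSI adapter at the StoreVirtual cluster virtual IP and rescan
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.0.10.50:3260
esxcli storage core adapter rescan -A vmhba33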
Figure 26. StoreVirtual VSA running on vSphere listed in a group in the Centralized Management Console
vSphere clusters require shared storage to make virtual machines highly available on the cluster and to move virtual machines transparently
between hosts using vMotion. Figure 27 shows how a vSphere cluster uses iSCSI storage presented by the StoreVirtual Cluster, running on the
same hardware and without external shared arrays.
Figure 27. Shared storage on the StoreVirtual cluster is accessed by the vSphere hosts using iSCSI
Advanced management
StoreVirtual administrators may choose to manage HPE StoreVirtual Storage from the vSphere Web Client in addition to using HPE StoreVirtual
Centralized Management Console. HPE OneView for VMware vCenter (HPE OV4VC) adds the HPE Management feature set and new Actions
dropdown items to the vSphere Web Client. HPE OV4VC lets the administrator deploy VSAs, create StoreVirtual Clusters, and provision new
datastores from the vSphere Web Client. The plug-in automates datastore provisioning and decommissioning, with no need to use the
Centralized Management Console or to manually mount and format new volumes in vCenter. It also provides an overview of available and
provisioned storage, as well as space savings with thin provisioning.
For information on HPE OneView for VMware vCenter, including add-in downloads and documentation, refer to HPE OneView for VMware vCenter.
Figures 28 and 29 highlight StoreVirtual management tasks available from the vSphere Web Client. The HPE OneView for VMware vCenter
plug-in provides robust integration for fully automated vSphere cluster deployment, monitoring, and streamlined firmware updates.
Provision HPE StoreVirtual Storage to datastores with the intuitive five-step wizard. Figure 29 shows one of the five steps that leads the
administrator or IT generalist through the creation of three new datastores.
For configuration details on HPE OneView for VMware vCenter, refer to the current version of the HPE OneView for VMware vCenter User Guide
in the HPE Support Center.
ProLiant DL family: DL360p Gen8
CPU: Intel Xeon E5-2650 (2 processors)
Memory: 32 GB
Storage media: 8 x 300 GB HDD (1.6 TB)
RAID controller: HPE Smart Array P420i
Network interfaces: 4 x 1 Gb Ethernet ports
Random performance: 2,378 IOPS
Sequential performance: 716 MB/s
Hypervisor: Microsoft Hyper-V
Note
These numbers can be affected by many configuration and application variables, such as the number of volumes on the StoreVirtual cluster and
the queue depth (number of outstanding requests) needed to keep the drives busy.
Figure 30. Presenting storage to StoreVirtual VSA on Hyper-V: a VHDX is a virtual hard disk on an NTFS volume, while a pass-through disk maps the raw, RAID-protected storage directly to the VM
Because there are only marginal performance differences between the two options, choose your storage based on supported capacity and your
storage management preferences. For the majority of customers, virtual machine disk (VHDX) files are typically easier to manage.
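For VHDX-based layouts, equally sized fixed virtual hard disks can be created and attached with the Hyper-V PowerShell module. A minimal sketch; the paths, sizes, and VM name are placeholders:
# Create two fixed-size data disks of equal capacity for the VSA
New-VHD -Path 'D:\VSA\Disk1.vhdx' -SizeBytes 1TB -Fixed
New-VHD -Path 'D:\VSA\Disk2.vhdx' -SizeBytes 1TB -Fixed
# Attach the disks to the StoreVirtual VSA virtual machine
Add-VMHardDiskDrive -VMName 'VSA-01' -Path 'D:\VSA\Disk1.vhdx'
Add-VMHardDiskDrive -VMName 'VSA-01' -Path 'D:\VSA\Disk2.vhdx'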
Network configuration on Hyper-V
As shown in figures 31 and 32, the ideal networking configuration for StoreVirtual VSA in Hyper-V environments typically requires four or more
network interfaces. Service-level planning should drive the number of network interfaces in the solution; the number is determined by the
desired level of resiliency and the storage connectivity required by the host and virtual machines (guest operating systems). All
interfaces used for connectivity among StoreVirtual VSAs or to the StoreVirtual VSA cluster must connect to the same network segment. Regardless of
the available network bandwidth, the StoreVirtual VSA always reports the speed as Unavailable in the CMC TCP status tab; the available
bandwidth value can be found in the Windows Network and Sharing Center view.
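The network teams shown in figure 31 can be created with Windows PowerShell. A sketch assuming Windows Server 2012 or later; the adapter and team names are placeholders:
# Create two switch-independent teams from the four physical interfaces
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -Confirm:$false
New-NetLbfoTeam -Name 'Team2' -TeamMembers 'NIC3','NIC4' -TeamingMode SwitchIndependent -Confirm:$false
# External virtual switch on the iSCSI segment for the VSA (and guests that need direct iSCSI access)
New-VMSwitch -Name 'iSCSI-Switch' -NetAdapterName 'Team2' -AllowManagementOS $true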
Figure 31. Typical networking configuration for StoreVirtual VSA on Hyper-V: two network teams, with the iSCSI initiator on the host
Figure 32. Expanded networking configuration for StoreVirtual VSA on Hyper-V with iSCSI initiator in the guest OS (three network teams)
Figure 33. SCVMM and HPE OneView for Microsoft System Center Add-in
Further information on HPE StoreFront Manager for Microsoft System Center is available at
h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04484805.
Traditional VSA deployment
HPE StoreVirtual VSA Installer for Microsoft Hyper-V helps to install and configure the StoreVirtual VSA on the local server. This means that the
installer needs to be executed on the Hyper-V hosts individually.
Like the StoreVirtual VSA Installer for vSphere, the Installer for Hyper-V presents a wizard to configure the virtual machine and its networking
and storage configuration (including multiple storage tiers for Adaptive Optimization). To match the similar hardware configuration of the
Hyper-V hosts, the configuration of each StoreVirtual VSA will be very similar, if not identical, except for its host name and IP address.
When the wizard completes, before the actual installation starts, the installer presents a summary page of the settings. Review this page
carefully and compare it with your planning guides. As shown in figure 34, the summary lists all details of the to-be-installed StoreVirtual VSA,
including network and storage configuration. If parameters need adjustments, return to the wizard and make the desired changes to the
installation options.
After deploying the StoreVirtual VSA, most settings can be altered using Hyper-V Manager (connected network) or HPE StoreVirtual Centralized
Management Console (tier assignment, IP address).
Figure 34. Installation summary in the HPE StoreVirtual VSA Installer for Hyper-V
After the successful deployment using the installation wizard, the StoreVirtual VSA instance is available on the designated network. Make sure
the CMC lists the StoreVirtual VSA as an available system. Run Find Systems from the CMC to discover available StoreVirtual systems.
After deploying all StoreVirtual VSAs on the network, the next step is to create a management group and the first cluster. Protect volumes on the
cluster with Network RAID 10 for best performance and high availability. For more information on working with management groups and
clusters, refer to the current version of the HPE StoreVirtual Storage User Guide.
Using the Hyper-V Server network interfaces, the iSCSI Initiator can now access the volumes on the StoreVirtual VSA cluster. To create highly
available virtual machines on these volumes, connect to volumes, add them to a Windows Failover Cluster, and create highly available virtual
machines using Failover Cluster Manager.
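These steps can be scripted with the iSCSI initiator and Failover Clustering PowerShell modules. A minimal sketch; the cluster virtual IP address is a placeholder, and it assumes the disks have already been initialized and formatted on one node:
# Connect the host iSCSI initiator to the StoreVirtual cluster virtual IP
New-IscsiTargetPortal -TargetPortalAddress '10.0.10.50'
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
# Add the newly connected disks to the Windows Failover Cluster
Get-ClusterAvailableDisk | Add-ClusterDisk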
Note that StoreVirtual VSAs are not visible in Failover Cluster Manager and are not marked as highly available because they stay local to
each machine.
Windows Failover Cluster requires shared storage to make virtual machines highly available on the cluster and to move virtual machines
transparently between hosts using Live Migration. Figure 35 shows how hosts in the Windows Failover Cluster use iSCSI storage presented by
the StoreVirtual Cluster running on the same hardware and without external shared storage arrays.
Figure 35. Shared storage on the StoreVirtual cluster is accessed by the Windows Failover Cluster hosts using iSCSI
Advanced management
For streamlined workflows in Hyper-V environments, the storage presented by StoreVirtual VSA can be managed from within the Microsoft
System Center Virtual Machine Manager 2012 SP1 (SCVMM). Use SCVMM to provision new storage for a virtual machine or to rapidly deploy
a virtual machine template with SAN copy. HPE StoreVirtual does not require a proxy agent. Instead, SCVMM communicates directly with the
StoreVirtual Cluster.
After you set up the StoreVirtual Cluster, you can add the cluster virtual IP address as a storage provider into Virtual Machine Manager
(see figure 36). When adding StoreVirtual, make sure you use SSL encryption for the communication, the default port (TCP 5989), and the
SMI-S CIM-XML protocol. Storage resources need to be associated with an administrator-defined storage classification such as bronze, silver, or gold.
After adding the provider successfully, all clusters in your StoreVirtual management group are available in the classification and pools list.
New volumes can be provisioned from the available storage pools, used by new virtual machines, and presented to Hyper-V hosts that are
managed by Virtual Machine Manager via iSCSI.
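Adding the provider can also be done from the VMM command shell. This is a sketch only: it assumes a Run As account already exists, the cluster virtual IP is a placeholder, and the exact Add-SCStorageProvider parameters should be verified against your VMM version.
# Register the StoreVirtual cluster virtual IP as an SMI-S storage provider in VMM
$ra = Get-SCRunAsAccount -Name 'StoreVirtualAdmin'
Add-SCStorageProvider -Name 'StoreVirtual' -RunAsAccount $ra -NetworkDeviceName 'https://10.0.10.50' -TCPPort 5989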
For virtual machine templates that reside on StoreVirtual volumes and are available in the Virtual Machine Manager Library, you can also rapidly
provision multiple instances from a template. These instances are spun up from SmartClones created from the StoreVirtual volume that contains
the virtual machine template.
For more information and details on using System Center Virtual Machine Manager, refer to the Microsoft System Center main page.
Summary
StoreVirtual VSA provides a robust, feature-rich primary storage platform that also provides the flexibility to adapt to your changing business
needs. Its scale-out, distributed architecture makes it easy to grow capacity and performance linearly. With its all-inclusive software package, it is
easy to accommodate availability and business continuity requirements in both the data center and smart remote offices.
When designing a software-defined storage solution based on StoreVirtual VSA, always be sure to understand the performance and capacity
requirements of the environment, especially during peak activity. After the storage requirements are understood, a solution can be designed with
the right disk drives, server platforms, and network infrastructure.
By following the best practices and design considerations in this document, your software-defined storage implementation can provide excellent
performance, availability, and value.
7. Why is it not possible to install my StoreVirtual VSA using the software's base VMDK or VHDX file?
The HPE StoreVirtual VSA Installer greatly simplifies the installation of the StoreVirtual VSA by offering an easy-to-use interface that streamlines
the deployment of multiple VSAs across multiple physical servers. The StoreVirtual VSA Installer for vSphere and Hyper-V are the preferred
method of installing the StoreVirtual VSA because they offer more flexibility and customization and save time.
For vSphere installations, the StoreVirtual VSA is also available as an OVF template. This method will be deprecated in an upcoming release
of LeftHand OS.
8. How can I troubleshoot the hardware used by StoreVirtual VSA?
One of the strengths of the StoreVirtual VSA is that it is supported to run on any hardware that is supported by the hypervisor. Because of
the sheer number of hardware options available, the best way to get support for a hardware issue is to contact the manufacturer of the
hardware directly.
Quantity | Description
HPE ProLiant servers
– | HPE ProLiant DL380p Gen8 25 SFF, HPE DL380p Gen8 Intel Xeon E5-2660 v2 (2.2 GHz/10-core/25 MB/95 W) FIO Processor Kit, 160 GB RAM (665554-B21)
12 | HPE 200GB 6G SATA Mainstream Endurance SFF 2.5-in SC Enterprise Mainstream SSD
30 | HPE 58x0AF Back (power side) to Front (port side) Airflow Fan Tray (JC682A)
Miscellaneous hardware
12 | HPE X240 40G QSFP+ to QSFP+ 1m Direct Attach Copper Cable (JG326A)
Software
Table 10. Solution-tested software
HPE StoreVirtual VSA 2014 and StoreVirtual FOM Installer for VMware vSphere
HPE StoreVirtual Centralized Management Console
HPE StoreVirtual Multi-Path Extension Module (MEM) for vSphere
VMware vSphere ESXi 5.5 U2 (HPE build)
Bill of material for 3-node DL360p Gen8 VSA with Microsoft Hyper-V
This paper used the hardware components listed in table 11 for the Windows-based remote office solution.
Hardware
Table 11. Hardware bill of materials
Quantity | Description
HPE ProLiant servers
3 | HPE ProLiant DL360p Gen8 E5-2650 2P 32GB-R P420i SFF 460W PS Performance Server (646904-001)
24 | 300 GB SFF HDD (8 per server)
Software
Table 12. Tested software
HPE StoreVirtual VSA for Hyper-V with LeftHand OS 12.0
HPE StoreVirtual Centralized Management Console 12.0
Microsoft Windows Server 2012 R2
However, when scaling out very large numbers of StoreVirtual VSAs repeatedly (prestaging of remote office deployments or in a data center), it
is desirable to deploy StoreVirtual VSA in a more automated fashion. The installer can deploy StoreVirtual VSA according to an answer file for
the installation wizard. The XML-based deployment manifest method shown in the code example below installs StoreVirtual VSA as follows:
On one standalone vSphere host
On multiple vCenter-managed vSphere hosts
To generate an XML deployment manifest, unzip the entire HPE StoreVirtual VSA 2014 and StoreVirtual FOM Installer for VMware vSphere
(TA688-10528.exe) package (self-extracting archive) and execute VSACLI.EXE from a command-line prompt. The text-based wizard guides
you through all StoreVirtual VSA settings, including networking, storage, and tier definition. Save all answers to an XML file. Use the XML file to
start the deployment at a later time, or use as a template for a larger, scripted installation of StoreVirtual VSAs.
The following code is an example of a StoreVirtual deployment manifest for two VSAs on two ESXi hosts.
</Network_Mappings>
<Disk_Mappings type="VMDK">
  <Disk name="HardDisk1" Datastore="Datastore_esx2lh1" size="5"
    spaceReclaim="True" tier="Tier 0" />
  <Disk name="HardDisk2" Datastore="Datastore_esx2lh2" size="5"
    spaceReclaim="True" tier="Tier 1" />
</Disk_Mappings>
</VSA>
</HostSystem>
</VCInfo>
</Zero2VSA>
The example above deploys two StoreVirtual VSAs (networking configured with DHCP) with 5 GB of storage on two vSphere hosts
(10.0.108.251, 10.0.108.252) managed by one vCenter instance (10.0.108.250). To add more vSphere hosts in this example, simply copy the
<HostSystem/> section and modify the content to represent the configuration of the other VSAs, until all VSAs being installed have the correct
deployment settings in the XML file.
Note
HPE does not recommend using DHCP for VSA IP addresses unless the addresses are static DHCP reservations.
To start the deployment, call VSACLI.EXE with the answer file as sole argument. See the following example.
VSACLI.EXE vsa-deployment-manifest.xml
This command immediately starts the installation of the StoreVirtual VSA instances as specified in the deployment manifest.
Note
When deploying multiple StoreVirtual VSAs to standalone vSphere hosts, use one deployment manifest per vSphere host and use Windows
command line or PowerShell scripting to serialize (or parallelize) the installation. Avoid concurrent deployments of StoreVirtual VSAs with the
same name to one vSphere host.
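A small PowerShell wrapper illustrates serializing such deployments; the manifest folder and file names are placeholders:
# Run one deployment per standalone host, one after the other
foreach ($manifest in Get-ChildItem -Path .\manifests -Filter *.xml) {
  & .\VSACLI.EXE $manifest.FullName
}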
Resources
HPE StoreVirtual VSA product page
HPE StoreVirtual VSA QuickSpecs
HPE StoreVirtual support manuals and user guides
HPE LeftHand OS Version 12.0 Release Notes
HPE StoreVirtual downloads
HPE StoreVirtual Compatibility Matrix (on HPE SPOCK)
HPE OneView for VMware vCenter
HPE OneView for Microsoft System Center
Adaptive Optimization for HPE StoreVirtual
HPE StoreVirtual Storage network design considerations and best practices
HPE SmartCache technology white paper
HPE Systems Insight Manager 7.4.1 QuickSpecs
Microsoft Windows Server Catalog
VMware Compatibility Guide (hardware compatibility list)
Learn more at
hpe.com/storage/storevirtual
© Copyright 2013, 2015–2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change
without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard
Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Intel Xeon is a trademark of Intel Corporation in the U.S. and other countries. Microsoft, Windows, and Windows Server are either
registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux is the registered
trademark of Linus Torvalds in the U.S. and other countries. SD is a trademark or registered trademark of SD-3C in the United States
and other countries or both. VMware vSphere, VMware vSphere 5.1, VMware vSphere Hypervisor, VMware vSphere Web Client,
VMware ESX, VMware vCenter, and VMware ESXi are registered trademarks or trademarks of VMware, Inc. in the United States
and/or other jurisdictions.
4AA4-8440ENW, August 2016, Rev. 3